TECHNICAL FIELD
The present disclosure relates generally to determining differences in an event-driven application accessed in different client-tier environments.
BACKGROUND
Event-driven applications may typically be accessed in different client-tier environments. However, in many cases, a first client-tier environment may provide a different end-user experience of the event-driven application than a second client-tier environment.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 illustrates an example system for accessing one or more event-driven applications.
FIG. 2 illustrates an example Venn diagram depicting various types of differences that may occur in an event-driven application that is accessed in different client-tier environments.
FIG. 3 illustrates an example system for generating one or more behavioral models of an event-driven application.
FIG. 4 illustrates an example behavioral model generated by dynamically crawling an event-driven application in a client-tier environment.
FIG. 5 illustrates an example method for determining one or more differences between two behavioral models of an event-driven application.
FIG. 6 illustrates an example algorithm for comparing two behavioral models at a trace-level.
FIG. 7 illustrates an example algorithm for comparing two behavioral models, and outputting the differences between the two behavioral models.
FIG. 8 illustrates example end-user experiences of an event-driven application accessed in different client-tier environments.
FIG. 9 illustrates a table displaying example internal DOM level differences that occur when an event-driven application is accessed using different web browsers.
FIG. 10a illustrates an example behavioral model generated by dynamically crawling an event-driven application in a first client-tier environment.
FIG. 10b illustrates an example behavioral model generated by dynamically crawling the same event-driven application in a second client-tier environment.
FIG. 11 illustrates example screen shots of detected DOM level differences of an event-driven application accessed in different client-tier environments.
FIG. 12 illustrates example tables of detected differences between two behavioral models.
FIG. 13 illustrates an example computer system.
DESCRIPTION OF EXAMPLE EMBODIMENTS
FIG. 1 illustrates an example system 10 for accessing one or more event-driven applications 30. System 10 includes a user 14, one or more clients 26, one or more event-driven applications 30, a network 38, and one or more servers 46.
A user 14 may interact with a client 26 to access one or more event-driven applications 30. As an example and not by way of limitation, a user 14 may include a person, a program, a device, an automation, any other suitable entity, or a combination of two or more of these.
A client 26 may send and receive signals to and from one or more servers 46 in order to allow a user 14 to access one or more event-driven applications 30. As an example and not by way of limitation, a client 26 may be an embedded computer system, a system-on-chip (SOC), a single-board computer system (SBC) (such as, for example, a computer-on-module (COM) or system-on-module (SOM)), a desktop computer system, a laptop or notebook computer system, an interactive kiosk, a mainframe, a mesh of computer systems, a mobile telephone, a personal digital assistant (PDA), a server, or a combination of two or more of these. A client 26 may send and receive any suitable type of signals for accessing an event-driven application 30. For example and not by way of limitation, a client 26 may send and receive hypertext transfer protocol (HTTP) signals, file transfer protocol (FTP) signals, or any other suitable signals.
A client 26 may further include an I/O interface (not shown) that enables a user 14 to interact with a client 26. As an example and not by way of limitation, an I/O device may include a keyboard, keypad, microphone, monitor, mouse, printer, scanner, speaker, still camera, stylus, tablet, touchscreen, trackball, video camera, another suitable I/O device, or a combination of two or more of these. An I/O interface of a client 26 may provide a user 14 with a viewable display 22 of an event-driven application 30. As an example and not by way of limitation, an I/O device may be a monitor that provides a viewable display 22 to a user 14 by displaying an event-driven application 30 on the monitor. An I/O interface of a client 26 may further allow a user 14 to interact with an event-driven application 30 by allowing a user 14 to perform one or more events 18. An event 18 may include any suitable type of user-initiated event. As an example and not by way of limitation, an event 18 may include clicking a mouse, moving a mouse, pressing one or more keys on a keypad, touching a touchscreen, moving a trackball, speaking into a microphone, any other event that may be initiated by a user 14, or any combination of two or more of these.
A client 26 may further include one or more client-tier environments (not shown). A client-tier environment of a client 26 may allow a user 14 to access one or more event-driven applications 30. As an example and not by way of limitation, a client-tier environment may include an operating system (OS) installed on a client 26, a web browser installed on a client 26, one or more settings of a client 26 (e.g., such as the screen resolution of a monitor of a client 26), one or more variations in a web browser installed on a client 26 (e.g., the version and configuration of the web browser, including one or more web browser plug-ins and one or more web browser settings), or any combination of two or more of these.
An OS installed on a client 26 may run one or more web browsers installed on a client 26. As an example and not by way of limitation, the OS may include a Windows® 95/98/NT/XP/Vista/Mobile OS, an OS-X® OS, a UNIX® OS, a LINUX OS, or any other suitable OS. The web browser installed on a client 26 may allow a user 14 to access event-driven applications 30. For example and not by way of limitation, the web browser may include Microsoft® Internet Explorer, Mozilla® Firefox, Google® Chrome, Opera®, or any other suitable web browser. In particular embodiments, the web browser may initiate the transmittal of one or more server request signals 34 from a client 26 to one or more servers 46 over a network 38. A server request signal 34 may be based on one or more events 18 from a user 14 or web flow from an event-driven application 30. As an example and not by way of limitation, a user 14 may enter an address for an event-driven application 30 (e.g., such as a uniform resource locator (URL) or a uniform resource identifier (URI)) into an address box of the web browser, and the web browser may send a server request signal 34 to a server 46 to request content from an event-driven application 30. In particular embodiments, the server 46 may respond to the server request signal 34 by transmitting a server response signal 42 including content corresponding to an event-driven application 30 to the web browser in a client 26. After receiving the content, the web browser may render the content into a viewable form so that it may be displayed to a user 14 through the I/O interface of a client 26.
An event-driven application 30 may provide one or more media objects for a user 14 to interact with. As an example and not by way of limitation, an event-driven application 30 may include a web application, a Web 2.0 application, an AJAX-based web application, or any other suitable application that provides media objects. In particular embodiments, an event-driven application 30 may run on a server 46 and be interacted with by a user 14 through a web browser on a client 26. For example and not by way of limitation, content for an event-driven application 30 may be sent to the web browser in a programming language, and the web browser may render the programming language viewable on a display so that a user 14 may interact with the event-driven application 30. In particular embodiments, an event-driven application 30 may include content that may be executed by the web browser.
The media objects provided by an event-driven application 30 may be changed (e.g., such as by adding, removing, or modifying the media objects) by one or more events 18 or web flow from the event-driven application 30. As an example and not by way of limitation, a user 14 may enter data using a keyboard, causing the event-driven application 30 to change the media objects provided to the user 14. In particular embodiments, when an event-driven application 30 changes the media objects, the altered media objects may be provided to a user 14 as a new screen (or state). An event-driven application 30 may include any suitable programming language or combination of programming languages. In particular embodiments, an event-driven application 30 may include source code or object code. In particular embodiments, an event-driven application 30 may include a higher-level programming language, such as, for example, C, Perl, or a suitable extension thereof. In particular embodiments, an event-driven application 30 may include a lower-level programming language, such as assembly language (or machine code). In particular embodiments, an event-driven application 30 may include JAVA. In particular embodiments, an event-driven application 30 may include Hypertext Markup Language (HTML), Extensible Markup Language (XML), JavaScript (JS), Java Server Pages (JSP), Hypertext Preprocessor (PHP), or another suitable markup language.
A network 38 connects one or more clients 26 to one or more servers 46, transporting one or more signals to and from the one or more clients 26 and the one or more servers 46. A network 38 may refer to any interconnecting system capable of transmitting audio, video, signals, data, messages, or any combination of the preceding. A network 38 may comprise all or a portion of a public switched telephone network (PSTN), a public or private data network, a local area network (LAN), a metropolitan area network (MAN), a wide area network (WAN), a local, regional, or global communication or computer network such as the Internet, a wireline or wireless network, an enterprise intranet, another suitable communication link, or any combination of the preceding. A network 38 may transport any suitable signal for accessing an event-driven application 30 on one or more servers 46. For example and not by way of limitation, a network 38 may transport HTTP signals, FTP signals, or any other suitable signals.
A server 46 may store one or more event-driven applications 30, and may further send and receive signals to and from one or more clients 26 in order to allow a user 14 to access one or more event-driven applications 30 stored in the server 46. As an example and not by way of limitation, a server 46 may be an embedded computer system, a system-on-chip (SOC), a single-board computer system (SBC) (such as, for example, a computer-on-module (COM) or system-on-module (SOM)), a desktop computer system, a laptop or notebook computer system, an interactive kiosk, a mainframe, a mesh of computer systems, a mobile telephone, a personal digital assistant (PDA), a server, or a combination of two or more of these. In particular embodiments, a server 46 may receive one or more server request signals 34 from a web browser installed on a client 26. In particular embodiments, a server 46 may respond to a server request signal 34 by transmitting a server response signal 42 that includes content corresponding to an event-driven application 30 to a web browser in a client 26. A server 46 may send and receive any suitable signals in order to allow a client 26 to access an event-driven application 30. For example and not by way of limitation, a server 46 may send and receive HTTP signals, FTP signals, or any other suitable signals.
In particular embodiments, an event-driven application 30 may be accessed in different client-tier environments. As an example and not by way of limitation, an event-driven application 30 may be accessed in a first client-tier environment that includes a Microsoft® Internet Explorer web browser, and the same event-driven application 30 may also be accessed in a second client-tier environment that includes a Mozilla® Firefox web browser. In particular embodiments, although the event-driven application 30 may be accessed in both client-tier environments, an end-user experience of the event-driven application 30 may be different in each client-tier environment. In particular embodiments, such differences may be caused by an OS installed on a client 26, a browser installed on a client 26, numerous other differences in the client-tier environments, or any combination of two or more of these.
Unfortunately, these different end-user experiences may cause problems because an event-driven application 30 may modify the content provided based on one or more events 18 that occur. As an example and not by way of limitation, an event-driven application 30 may modify its content based on a user scrolling over certain content in the event-driven application 30. However, if the content is displayed differently in certain client-tier environments (or not displayed at all), an event 18 may never occur, and the content may not be modified, causing further differences in the end-user experience. As such, even minor differences in an end-user experience may turn into much bigger differences.
FIG. 2 illustrates an example Venn diagram depicting various types of differences that may occur in an event-driven application that is accessed in different client-tier environments. A set 104 may include one or more differences in the Document Object Model (DOM) representations of one or more screens of an event-driven application, one or more differences in a client-side state (e.g., such as in the set or specific values of JavaScript® variables on a client), and one or more differences in a set of possible traces (e.g., alternating sequences of screens and events causing the screen transitions) on a client side. A set 112 may include one or more differences that may be observed by a human user on a web browser. A set 108 may include one or more differences that fall into both set 104 and set 112. In particular embodiments, one or more differences that are included in a set 104, a set 108, a set 112, or any combination of two or more of these may be determined, as is discussed in FIGS. 3-12.
FIG. 3 illustrates an example system 200 for generating one or more behavioral models 216 of an event-driven application 204. In particular embodiments, a behavioral model 216 may be generated for an event-driven application 204 in a plurality of client-tier environments 212 using one or more crawlers 208. As such, the behavioral models 216 may be generated automatically and dynamically.
An event-driven application 204 of FIG. 3 may be similar to an event-driven application 30 of FIG. 1. As such, an event-driven application 204 may provide one or more media objects for a user to interact with.
A crawler 208 may dynamically crawl an event-driven application 204. In particular embodiments, the dynamic crawl may allow a crawler 208 to dynamically analyze an event-driven application 204 in order to generate a behavioral model 216. As an example and not by way of limitation, a crawler 208 may include Crawljax, WebCrawler, Methabot, Googlebot, or any other suitable crawler. In particular embodiments, a crawler 208 may exercise code on a client (e.g., such as client 26 of FIG. 1) in order to detect and execute one or more doorways (e.g., clickables) of an event-driven application 204. As such, in particular embodiments, a crawler 208 may dynamically analyze one or more screens of an event-driven application 204 that are rendered by the web browser on a client. Furthermore, a crawler 208 may analyze how the one or more executed doorways affect the one or more rendered screens of an event-driven application 204. In particular embodiments, this may involve replicating (or firing) one or more events, such as events 18 of FIG. 1 (e.g., clicking a mouse, typing on a keyboard), in order to analyze how such events affect the dynamic DOM tree in a browser before and after each event is replicated. In particular embodiments, by analyzing and storing the effects that occur before and after each of the replicated events, a crawler 208 may incrementally build a behavioral model 216 for an event-driven application 204.
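As an illustration only (and not the Crawljax implementation), the incremental model building described above may be sketched as follows; the transition-table encoding of the toy application and all names are hypothetical:

```python
# Hypothetical sketch of how a crawler may incrementally build a
# behavioral model: a toy event-driven application is modeled as a
# transition table mapping (screen, event) pairs to the resulting
# screen, and every screen reachable from the start is explored.
APP = {
    ("home", "click_about"): "about",
    ("home", "click_news"): "news",
    ("about", "click_home"): "home",
}

def crawl(app, start="home"):
    states, edges, stack = {start}, set(), [start]
    while stack:
        screen = stack.pop()
        # replicate (fire) every event available on the current screen
        for (src, event), dst in app.items():
            if src != screen:
                continue
            edges.add((src, event, dst))   # record the observed transition
            if dst not in states:          # an unseen screen: explore it too
                states.add(dst)
                stack.append(dst)
    return states, edges

states, edges = crawl(APP)
```

Running the sketch yields one state per distinct screen and one labeled edge per fired event, which corresponds to the states and transitions of a behavioral model 216.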
A client-tier environment 212 may allow one or more event-driven applications 204 to be accessed at a client. In particular embodiments, a client-tier environment 212 may be similar to a client-tier environment of FIG. 1. As an example and not by way of limitation, a client-tier environment 212 may include an operating system (OS) installed on a client, a web browser installed on a client, one or more settings of a client (e.g., such as the screen resolution of a monitor of a client), one or more variations in the web browser installed on a client (e.g., the version and configuration of the web browser, including one or more web browser plug-ins and one or more web browser settings), or any combination of two or more of these.
According to the illustrated embodiment, system 200 includes three client-tier environments 212. In particular embodiments, each of the client-tier environments 212 may be different. For example and not by way of limitation, a client-tier environment 212a may include a Windows® Vista OS and a Google® Chrome web browser; a client-tier environment 212b may include a Windows® XP OS, a Microsoft® Internet Explorer web browser, and a particular screen resolution; and a client-tier environment 212c may include a UNIX® OS, a Mozilla® Firefox web browser, and one or more plug-ins for the web browser. Although system 200 illustrates three client-tier environments 212, system 200 may include more than three client-tier environments 212 or fewer than three client-tier environments 212. As an example and not by way of limitation, system 200 may include two or more client-tier environments 212.
According to the illustrated embodiment, a crawler 208 may dynamically crawl an event-driven application 204 in each of the client-tier environments 212 in order to analyze the event-driven application 204. Accordingly, a crawler 208 may generate a behavioral model 216 for each client-tier environment 212. In particular embodiments, the crawling conducted by a crawler 208 may be performed in an identical fashion for each client-tier environment 212. As an example and not by way of limitation, a crawler 208 may replicate the same events (and do so in the same order) while crawling the event-driven application 204 in each client-tier environment 212. As such, the only differences in the behavioral models 216 (if there are any at all) may be caused by the different client-tier environments 212. In particular embodiments, the crawling conducted by a crawler 208 may be automatic. As an example and not by way of limitation, a crawler 208 may be initiated for a particular event-driven application 204 in a particular client-tier environment 212, and the crawler 208 may perform the entire crawl (including the analysis and behavioral model 216 generation) for that particular event-driven application 204 in the particular client-tier environment 212 without any further prompting.
A behavioral model 216 may be generated based on the analysis conducted by a crawler 208 on an event-driven application 204. According to the illustrated embodiment, system 200 includes three behavioral models 216: one behavioral model 216 for each client-tier environment 212. Behavioral models 216 are further described in FIG. 4.
FIG. 4 illustrates an example behavioral model 300 generated by dynamically crawling an event-driven application in a client-tier environment. A behavioral model 300 of FIG. 4 may be similar to a behavioral model 216 of FIG. 3.
In particular embodiments, a behavioral model 300 may be generated by dynamically crawling an event-driven application in a client-tier environment, as is discussed in FIG. 3. For example, a behavioral model 300 may be generated by a crawler, such as a crawler 208 of FIG. 3.
In particular embodiments, a behavioral model 300 may be a finite state machine (FSM) with partial transition functions. As an example and not by way of limitation, a behavioral model 300 may include each of the states of the event-driven application, and each of the transitions that caused each of the states. In particular embodiments, a state of a behavioral model 300 may refer to a screen observed by a user when the event-driven application is accessed in a particular client-tier environment. In particular embodiments, a transition may refer to an event (e.g., clicking a mouse, moving a mouse, pressing one or more keys on a keypad) that caused the observed screen (or state) to change. A behavioral model 300 may include a state graph 304 and a set of one or more screen models 308 for one or more states of the state graph 304.
A state graph 304 may include a graphical representation of the finite state machine with the states represented as unnamed vertices. In particular embodiments, a state graph 304 captures the set of traces (e.g., alternating sequences of events and screen transitions caused by the events) without reference to the details of each state. In particular embodiments, a state graph 304 may be a labeled, directed graph with a specially designated start vertex. It may be denoted by a 5-tuple, G(V, E, o, Σ, L), where V is the set of vertices, E is the set of (directed) edges, o is the specially designated start vertex, Σ is an alphabet of labels, and L: E→Σ is a labeling function that assigns a label from Σ to each edge. According to the illustrated embodiment, each node 312 in a state graph 304 of a behavioral model 300 may represent a state of a screen of the event-driven application. Furthermore, each edge 316 in a state graph 304 of a behavioral model 300 may represent an event (e.g., such as a user-initiated interaction with an event-driven application) causing a transition from one of the screens to another. In particular embodiments, each edge 316 of a state graph 304 of a behavioral model 300 may be labeled (not shown) with the event that caused the transition between states.
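The 5-tuple G(V, E, o, Σ, L) described above may be sketched, as an illustration only, as a small data structure; the class and field names are hypothetical:

```python
# Illustrative sketch of the state graph G(V, E, o, Sigma, L) of FIG. 4:
# vertices are unnamed screen states, and each directed edge carries a
# label naming the event that caused the screen transition.
class StateGraph:
    def __init__(self, start):
        self.V = {start}   # set of vertices
        self.E = set()     # set of directed edges (u, v)
        self.o = start     # specially designated start vertex
        self.L = {}        # labeling function L: E -> Sigma, kept as a dict

    def add_edge(self, u, v, label):
        self.V |= {u, v}
        self.E.add((u, v))
        self.L[(u, v)] = label

g = StateGraph(0)
g.add_edge(0, 1, "click:about")
g.add_edge(1, 0, "click:home")
```

Storing L as a dictionary keyed by (u, v) pairs mirrors the labeling function that assigns exactly one event label to each edge.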
A screen model 308 may include a programmatic representation of each state (screen). In particular embodiments, a screen model 308 captures details of each screen without any knowledge of the transitions leading into or out of the screen. In particular embodiments, a screen model 308 may be a rooted, directed, labeled tree. It may be denoted by a 5-tuple, T(Q, D, r, Λ, δ), where Q is the set of vertices, D is the set of directed edges, r ∈ Q is the root vertex, Λ is a finite set of labels, and δ: Q→Λ is a labeling function that assigns a label from Λ to each vertex in Q. In particular embodiments, a screen model 308 may be an abstracted version of the DOM tree of a given state (screen) displayed on a web browser. Although behavioral model 300 illustrates a single screen model 308, a behavioral model 300 may include any suitable number of screen models 308. As an example and not by way of limitation, a behavioral model 300 may include a screen model 308 for every state (screen) of a state graph 304 of a behavioral model 300.
FIG. 5 illustrates an example method 400 for determining one or more differences between two behavioral models of an event-driven application. In particular embodiments, by determining one or more differences between two behavioral models of an event-driven application, one or more differences in an end-user experience of the event-driven application in a client-tier environment may be determined with respect to one or more other end-user experiences of the event-driven application in one or more other ones of the client-tier environments. In particular embodiments, method 400 may further include determining one or more pairwise equivalences for each of the behavioral models with respect to one or more other ones of the behavioral models.
The method begins at step 404. At step 408, a first behavioral model (behavioral model M1) and a second behavioral model (behavioral model M2) are read. At step 412, a first state graph (state graph G1) is extracted from behavioral model M1. At step 416, a second state graph (state graph G2) is extracted from behavioral model M2. At step 420, state graph G1 and state graph G2 are compared at a trace level. A trace-level comparison may determine one or more pairwise equivalences between state graphs G1 and G2. In particular embodiments, determining one or more pairwise equivalences between state graphs G1 and G2 may include determining whether any edges (e.g., events that cause a transition between states or screens) of state graph G1 are isomorphic with any edges of state graph G2, and whether any nodes (e.g., representing states or screens) of state graph G1 are isomorphic with any nodes of state graph G2. In particular embodiments, the comparison may start from the root node of each state graph and traverse the graphs through a depth-first search. In particular embodiments, determining whether any edges are isomorphic may include retrieving the DOM element and the event from each corresponding node, and attempting to reconcile them.
In particular embodiments, the edges may be considered isomorphic if the edit distance between the two edges is lower than a similarity threshold. In particular embodiments, the edit distance may be based on event types, tag names, attributes and their values, XPath positions of the two edges on their corresponding DOM trees, ID attributes, or any combination of two or more of these. In particular embodiments, the nodes may be considered isomorphic if and only if each of the previous nodes, successive nodes, and edges are isomorphic. In particular embodiments, the trace-level comparison may also determine which states of behavioral model M1 may be considered to match particular states of behavioral model M2.
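The threshold-based edge matching described above may be sketched as follows; difflib's similarity ratio is used here as a stand-in for the unspecified edit-distance metric, and the threshold value, field names, and signature encoding are illustrative assumptions:

```python
# Hedged sketch of the edge-matching rule: two edges are treated as
# isomorphic when their descriptions (event type, tag name, XPath
# position, ID attribute) are sufficiently similar. SequenceMatcher's
# ratio is a stand-in metric; the threshold is an assumed value.
from difflib import SequenceMatcher

SIMILARITY_THRESHOLD = 0.8  # assumed value, not taken from the disclosure

def edge_signature(edge):
    # concatenate the compared properties into one comparable string
    return "|".join([edge["event"], edge["tag"], edge["xpath"], edge.get("id", "")])

def edges_isomorphic(e1, e2):
    ratio = SequenceMatcher(None, edge_signature(e1), edge_signature(e2)).ratio()
    return ratio >= SIMILARITY_THRESHOLD
```

Two identical edges give a ratio of 1.0 and match; edges differing in event type, tag, and XPath fall well below the threshold and are reported as different.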
At step 424, if any edges or nodes of state graphs G1 and G2 were determined to not be isomorphic, the trace-level differences of the state graphs G1 and G2 are outputted at step 428. In particular embodiments, the output may include a list of all the edges and nodes that were determined to not be isomorphic. However, at step 424, if all of the edges and nodes of state graphs G1 and G2 were determined to be isomorphic, no trace-level differences may be outputted, and the method moves to step 432.
At step 432, a vertex (vertex v1) is picked from state graph G1. At step 436, a screen model (screen model T1) corresponding to the state (or screen) represented by vertex v1 is retrieved from the behavioral model M1. At step 440, it is determined whether the trace-level comparison between state graphs G1 and G2 (step 420) produced a matching node v2 in state graph G2 corresponding to vertex v1 in state graph G1. If not, the method moves to step 460. Otherwise, at step 444, a screen model (screen model T2) corresponding to the state (or screen) represented by vertex v2 is retrieved from the behavioral model M2. In particular embodiments, vertex v1 and vertex v2 may be determined to match during the trace-level comparison discussed above in step 420. At step 448, screen model T1 and screen model T2 are compared at a screen level. A screen-level comparison may determine one or more pairwise equivalences between the behavioral models M1 and M2. In particular embodiments, determining one or more pairwise equivalences between the behavioral models M1 and M2 may include determining whether screen model T1 and screen model T2 are isomorphic. In particular embodiments, the screen-level comparison may include retrieving the DOM tree and JavaScript® state variables for both screen model T1 and screen model T2, and attempting to reconcile them. In particular embodiments, screen model T1 and screen model T2 may be determined to be isomorphic by checking the equivalence between the DOM trees corresponding to screen model T1 and screen model T2 and matching the name-value pairs of the JavaScript® variables of screen model T1 and screen model T2.
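The screen-level check of step 448 may be sketched as follows; the nested-tuple DOM encoding and the field names are illustrative assumptions, not the disclosed representation:

```python
# Sketch of the screen-level equivalence check: two screen models are
# treated as isomorphic when their abstracted DOM trees are equivalent
# and the name-value pairs of their script variables match.
def screens_isomorphic(t1, t2):
    if t1["dom"] != t2["dom"]:             # check DOM-tree equivalence
        return False
    return t1["js_vars"] == t2["js_vars"]  # match name-value variable pairs

t1 = {"dom": ("html", [("body", [("div", [])])]),
      "js_vars": {"cartCount": 2, "loggedIn": True}}
t2 = {"dom": ("html", [("body", [("div", [])])]),
      "js_vars": {"cartCount": 2, "loggedIn": True}}
t3 = {"dom": ("html", [("body", [("div", []), ("iframe", [])])]),
      "js_vars": {"cartCount": 2, "loggedIn": True}}
```

Here t1 and t2 reconcile on both criteria, while t3 differs in its DOM tree and would be reported as a screen-level difference.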
At step 452, if screen model T1 and screen model T2 were determined to not be isomorphic, the screen-level differences for the screen model T1 and the screen model T2 are outputted at step 456. In particular embodiments, outputting the screen-level differences may include determining whether any of the differences may be irrelevant. In particular embodiments, an irrelevant difference may not be outputted. In particular embodiments, an irrelevant difference may result from certain web browsers rendering DOM changes differently than other web browsers. As an example and not by way of limitation, certain stylistic differences between various web browsers (e.g., case sensitivity, white space, attribute order, and node text values) may be determined to be irrelevant differences, and may not be outputted. After the screen-level differences are outputted, the method moves to step 460.
Furthermore, if, at step 452, screen model T1 and screen model T2 were determined to be isomorphic, the method also moves to step 460. At step 460, a determination may be made regarding whether there are any more vertices in state graph G1 (or in state graph G2). If there are additional vertices, steps 432 through 460 are repeated for each additional vertex in state graph G1 (or in state graph G2). If there are no additional vertices, the method ends at step 464.
In particular embodiments, the method 400 may be repeated for each pairing of behavioral models. As an example and not by way of limitation, if three behavioral models are generated by a crawler for an event-driven application in three different client-tier environments, the method 400 may be repeated for each pairing of the three behavioral models. As such, each trace-level difference (or incompatibility) and each screen-level difference between each of the behavioral models may be determined and outputted. Furthermore, for each of the behavioral models, one or more pairwise equivalences with respect to one or more other ones of the behavioral models may be determined. In particular embodiments, the outputting of the trace-level differences and the screen-level differences may include sorting, classifying, and/or filtering the differences before outputting them.
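The pairwise repetition of method 400 described above may be sketched as iterating over every unordered pair of generated behavioral models; compare() is a hypothetical placeholder standing in for the trace-level and screen-level checks:

```python
# Sketch of repeating the comparison for each pairing of behavioral
# models: three models (one per client-tier environment) yield three
# unordered pairs, and each pair produces one difference report.
from itertools import combinations

def compare(m1, m2):
    # placeholder: would run the trace-level and screen-level checks
    # and return the list of detected differences
    return []

models = {"Chrome": "M1", "Internet Explorer": "M2", "Firefox": "M3"}
reports = {(a, b): compare(models[a], models[b])
           for a, b in combinations(sorted(models), 2)}
```

With n client-tier environments this produces n·(n-1)/2 pairings, matching the "each pairing" repetition of method 400.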
FIG. 6 illustrates an example algorithm 500 for comparing two behavioral models at a trace level. In particular embodiments, algorithm 500 may be used to determine whether any edges and any nodes of a first behavioral model are isomorphic with any edges and any nodes of a second behavioral model. In particular embodiments, algorithm 500 may be used in FIG. 5.
In particular embodiments, algorithm 500 implements the trace-level equivalence check on the state graph (G1) of the first behavioral model and the state graph (G2) of the second behavioral model as an isomorphism check. The function OUT(v) returns the set of outgoing edges of vertex v, LABEL(e) returns the label of edge e, and the function LOOKUP(l, edgeSet) returns an edge having the label l from the set of edges edgeSet, or null if none exists. DEST(e) returns the destination vertex of edge e. It may be assumed that the match field of each edge and the visited field of each vertex (in both G1 and G2) are initialized to false, and that the match field of each vertex in G1 is initialized to null. In particular embodiments, algorithm 500 is a simple variant of depth-first search and is linear-time in the sizes of G1 and G2 (e.g., O(|V1|+|V2|+|E1|+|E2|)).
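A minimal sketch of this depth-first trace-equivalence check, under assumed data structures (the graph class and all names are illustrative, not the disclosed pseudocode):

```python
# Depth-first sketch of the trace-equivalence check: starting from two
# matched vertices, outgoing edges are paired by label (as with LOOKUP),
# and any edge of G1 with no same-labeled counterpart in G2 is reported
# as a trace-level difference.
class G:  # stand-in for the state graph; edges given as (u, v, label)
    def __init__(self, edges):
        self.E = {(u, v) for u, v, _ in edges}
        self.L = {(u, v): lbl for u, v, lbl in edges}

def trace_check(g1, g2, v1, v2, matched=None, diffs=None):
    matched = {} if matched is None else matched
    diffs = [] if diffs is None else diffs
    if v1 in matched:                       # vertex already visited
        return diffs
    matched[v1] = v2                        # record the vertex mapping
    out2 = {g2.L[e]: e for e in g2.E if e[0] == v2}   # LOOKUP by label
    for e1 in (e for e in g1.E if e[0] == v1):        # OUT(v1)
        e2 = out2.get(g1.L[e1])
        if e2 is None:
            diffs.append(e1)                # unmatched edge: a difference
        else:
            trace_check(g1, g2, e1[1], e2[1], matched, diffs)  # DEST(e)
    return diffs

g1 = G([(0, 1, "click:about"), (1, 0, "click:home")])
g2 = G([(0, 1, "click:about")])
```

Comparing a graph with itself yields no differences, while the missing "click:home" edge of g2 is reported, consistent with the partial-match behavior described for algorithm 500.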
In particular embodiments, G1(V1, E1, o1, Σ, L1) and G2(V2, E2, o2, Σ, L2) may be trace-equivalent if and only if there exists a bijective mapping function M: V1→V2 such that the following are true:

∀u, v ∈ V1: (u, v) ∈ E1 ⇔ (M(u), M(v)) ∈ E2   (1)

∀e1(u1, v1) ∈ E1, e2(u2, v2) ∈ E2 such that M(u1) = u2 and M(v1) = v2: L1(e1) = L2(e2)   (2)

M(o1) = o2   (3)
In particular embodiments, algorithm 500 may determine that G1 and G2 are equivalent if and only if they satisfy equations (1), (2), and (3), above.
In the case where algorithm 500 determines that G1 is not equivalent to G2, the algorithm 500 may determine that one or more sub-graphs implied by the partial match produced by the algorithm (e.g., the sub-graph implied by all nodes and edges that were matched to a counterpart in the other graph) are trace-equivalent based on equations (1), (2), and (3). For example and not by way of limitation, the subgraph:
G′1(V′1, E′1, o1, Σ, L′1) of G1, where E′1 = {e ∈ E1 : e.match = true}, V′1 = {v ∈ V1 : v.match ≠ null}, L′1: E′1→Σ, and L′1(e) = L1(e) ∀ e ∈ E′1 (and a similarly defined sub-graph G′2(V′2, E′2, o2, Σ, L′2) of G2) may be determined to be trace-equivalent per equations (1), (2), and (3).
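The sub-graph extraction just defined can be illustrated with a small sketch. The dictionary and list shapes below are assumptions for illustration: the function simply keeps the edges whose match flag is true and the vertices whose match field is non-null, with labels inherited unchanged (L′1(e) = L1(e)).

```python
# Hypothetical sketch of extracting G'1 from a partial match: keep edges
# with e.match = true and vertices with v.match != null; labels carry over.

def matched_subgraph(vertex_matches, edges):
    """vertex_matches: {vertex: counterpart-or-None}; edges: list of dicts
    with 'src', 'dst', 'label', and 'match' keys (illustrative shapes)."""
    v_sub = {v for v, m in vertex_matches.items() if m is not None}
    e_sub = [e for e in edges if e["match"]]
    return v_sub, e_sub
```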
Further, in the case where G1 is not equivalent to G2, algorithm 500 produces not only a trace-equivalent partial match but in fact a maximal partial match (e.g., there does not exist a pair of edges e1 and e2, where e1 ∈ E1 but e1 ∉ E′1 and e2 ∈ E2 but e2 ∉ E′2, which can be added to G′1 and G′2 respectively, along with their source and sink nodes, such that the resulting graphs are also trace-equivalent).
In particular embodiments, although algorithm 500 computes maximal matches, the match need not be the maximum match possible. In particular embodiments, a variant of algorithm 500 may back-track on matching decisions made in order to compute the absolute maximum match.
FIG. 7 illustrates an example algorithm 600 for comparing two behavioral models, and outputting the differences between the two behavioral models. In particular embodiments, algorithm 600 may be used in FIG. 5.
The function STATEGRAPH returns the underlying state graph for comparison through the trace equivalence check (shown in FIG. 6). traceMatch is an object that receives the result of algorithm 500 of FIG. 6, including the partial match in the case where G1 is not equivalent to G2. OUTPUTTRACEDIFF extracts and presents the trace-level differences to the user. Similarly, at the screen level, the function GETSCREEN extracts and returns the detailed screen representation of a vertex v of a state graph from its corresponding behavioral model, to be used for the equivalence check by algorithm 600. scrnMatch is the object receiving the result of this comparison, and the function OUTSCRDIFF extracts and presents these screen-level differences to the user.
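The overall flow just described can be sketched as follows. This is a hypothetical outline, not algorithm 600 itself: the model shape and the two injected checker callbacks are assumptions standing in for the trace-level and screen-level equivalence checks, and the returned dictionary plays the role of the OUTPUTTRACEDIFF/OUTSCRDIFF reporting.

```python
# Hypothetical two-level comparison flow: a trace-level check first, then a
# screen-level check for every pair of vertices the trace check matched.

def compare_models(model1, model2, trace_equiv_check, screen_equiv_check):
    diffs = {"trace": [], "screen": []}

    # Trace level: compare the underlying state graphs (cf. STATEGRAPH).
    trace_match = trace_equiv_check(model1["state_graph"],
                                    model2["state_graph"])
    if not trace_match["equivalent"]:
        diffs["trace"] = trace_match["unmatched"]

    # Screen level: compare detailed screens of matched vertex pairs
    # (cf. GETSCREEN); record mismatches for reporting.
    for v1, v2 in trace_match["matched_pairs"]:
        scrn_match = screen_equiv_check(model1["screens"][v1],
                                        model2["screens"][v2])
        if not scrn_match["equivalent"]:
            diffs["screen"].append((v1, v2, scrn_match["detail"]))
    return diffs
```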
In particular embodiments, since the screen model is represented as a rooted, directed, labeled tree, a screen model T1 of a first behavioral model and a screen model T2 of a second behavioral model may be compared based on the isomorphism of the respective trees. Thus, in particular embodiments, screen models T1(Q1, D1, r1, Λ, δ1) and T2(Q2, D2, r2, Λ, δ2) may be determined to be equivalent (e.g., T1 ≡ T2) if and only if there exists a bijective mapping function N: Q1→Q2 such that:
N(r1) = r2  (4)
∀ q ∈ Q1: δ1(q) = δ2(N(q))  (5)
∀ u, v ∈ Q1: (u, v) ∈ D1 ⇔ (N(u), N(v)) ∈ D2  (6)
Since screen models are rooted, labeled trees, screen matching (the function call SCRNEQUIVCHECK(T1, T2) in algorithm 600) may be performed in linear time by a simple variant of the tree isomorphism algorithm. In particular embodiments, if two screens are determined not to be matching, algorithm 600 may utilize snapshots of the screens in outputting the differences to a user. As an example and not by way of limitation, during the dynamic crawling of an event-driven application, a snapshot may be made of each screen change in the web browser. In particular embodiments, when a screen mismatch is found by algorithm 600, the snapshots corresponding to each of the mismatched screens may be used to create a visualization report that is outputted to a user. In particular embodiments, the visualization report may include the transition paths that lead to the mismatched screens, and may also include the DOM trees corresponding to the mismatched screens. In particular embodiments, the differences between the screens may be highlighted in the DOM trees provided to the user.
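A screen-level check of this kind can be sketched as a recursive comparison of labels and children, following equations (4)-(6): roots correspond, labels agree, and children match one-to-one. The Node class is an illustrative assumption, and the sketch assumes children are ordered; an unordered variant would first canonically sort each node's children.

```python
# Minimal sketch: equivalence of two rooted, labeled (ordered) trees.
# Node is an illustrative class, not from the source.

class Node:
    def __init__(self, label, children=()):
        self.label = label
        self.children = list(children)

def screens_equal(t1, t2):
    """Recursive check, linear in the combined size of both trees."""
    if t1.label != t2.label or len(t1.children) != len(t2.children):
        return False
    return all(screens_equal(c1, c2)
               for c1, c2 in zip(t1.children, t2.children))
```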
FIG. 8 illustrates example end-user experiences of an event-driven application accessed in different client-tier environments. According to the illustrated embodiment, screenshot 704 provides a different end-user experience of a particular event-driven application than screenshot 708. In particular embodiments, the different end-user experiences may be caused by the different client-tier environments. As an example and not by way of limitation, screenshot 704 may include a screen of an event-driven application displayed to a user using a Google® Chrome web browser, and screenshot 708 may include a screen of the event-driven application displayed to a user using a Mozilla® Firefox web browser.
FIG. 9 illustrates a table 800 displaying example internal DOM-level differences that occur when an event-driven application is accessed using different web browsers. According to the illustrated embodiment, row 804 includes a list of internal DOM-level differences (e.g., elements, attributes, and their values and order of appearance) that may occur when an event-driven application is accessed using a Mozilla® Firefox web browser. Row 808 includes a list of internal DOM-level differences that may occur when an event-driven application is accessed using a Microsoft® Internet Explorer web browser. Row 812 includes a list of internal DOM-level differences that may occur when an event-driven application is accessed using a Google® Chrome web browser. In particular embodiments, one or more of these internal DOM-level differences may be determined to be irrelevant differences. As such, one or more of these differences may not be outputted to a user. In particular embodiments, one or more of these DOM-level differences may be determined to be relevant differences. As such, one or more of these differences may be outputted to a user.
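The relevant/irrelevant split described above could be implemented as a simple filter over detected differences. The ignore-set entries below are purely illustrative assumptions, not taken from table 800:

```python
# Hypothetical filter: drop DOM-level differences whose attribute is on a
# browser-noise ignore list; only the remaining differences reach the user.

IGNORED_ATTRIBUTES = {"style", "tabindex"}  # assumed browser-injected noise

def relevant_diffs(diffs):
    """diffs: list of dicts with an 'attribute' key (illustrative shape)."""
    return [d for d in diffs if d["attribute"] not in IGNORED_ATTRIBUTES]
```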
FIG. 10a illustrates an example behavioral model generated by dynamically crawling an event-driven application in a first client-tier environment that includes a Google® Chrome web browser. FIG. 10b illustrates an example behavioral model generated by dynamically crawling the same event-driven application in a second client-tier environment that includes a Mozilla® Firefox web browser.
FIG. 11 illustrates example screenshots of detected DOM-level differences of an event-driven application accessed in different client-tier environments. As an example and not by way of limitation, screenshot 1004 may include detected DOM-level differences of an event-driven application accessed in a first client-tier environment that includes a Google® Chrome web browser, and screenshot 1008 may include detected DOM-level differences of the same event-driven application accessed in a second client-tier environment that includes a Mozilla® Firefox web browser.
FIG. 12 illustrates example tables of detected differences between two behavioral models. Table 1104 includes differences between two behavioral models that may be considered relevant. As such, the differences in table 1104 may be outputted to a user. Table 1108 includes differences between two behavioral models that may be considered irrelevant. As such, the differences in table 1108 may not be outputted to a user.
FIG. 13 illustrates an example computer system 1200. In particular embodiments, one or more computer systems 1200 perform one or more steps of one or more methods described or illustrated herein. In particular embodiments, one or more computer systems 1200 provide functionality described or illustrated herein. In particular embodiments, software running on one or more computer systems 1200 performs one or more steps of one or more methods described or illustrated herein or provides functionality described or illustrated herein. Particular embodiments include one or more portions of one or more computer systems 1200.
This disclosure contemplates any suitable number of computer systems 1200. This disclosure contemplates computer system 1200 taking any suitable physical form. As an example and not by way of limitation, computer system 1200 may be an embedded computer system, a system-on-chip (SOC), a single-board computer system (SBC) (such as, for example, a computer-on-module (COM) or system-on-module (SOM)), a desktop computer system, a laptop or notebook computer system, an interactive kiosk, a mainframe, a mesh of computer systems, a mobile telephone, a personal digital assistant (PDA), a server, or a combination of two or more of these. Where appropriate, computer system 1200 may include one or more computer systems 1200; be unitary or distributed; span multiple locations; span multiple machines; or reside in a cloud, which may include one or more cloud components in one or more networks. Where appropriate, one or more computer systems 1200 may perform without substantial spatial or temporal limitation one or more steps of one or more methods described or illustrated herein. As an example and not by way of limitation, one or more computer systems 1200 may perform in real time or in batch mode one or more steps of one or more methods described or illustrated herein. One or more computer systems 1200 may perform at different times or at different locations one or more steps of one or more methods described or illustrated herein, where appropriate.
In particular embodiments, computer system 1200 includes a processor 1202, memory 1204, storage 1206, an input/output (I/O) interface 1208, a communication interface 710, and a bus 712. Although this disclosure describes and illustrates a particular computer system having a particular number of particular components in a particular arrangement, this disclosure contemplates any suitable computer system having any suitable number of any suitable components in any suitable arrangement.
In particular embodiments, processor 1202 includes hardware for executing instructions, such as those making up a computer program. As an example and not by way of limitation, to execute instructions, processor 1202 may retrieve (or fetch) the instructions from an internal register, an internal cache, memory 1204, or storage 1206; decode and execute them; and then write one or more results to an internal register, an internal cache, memory 1204, or storage 1206. In particular embodiments, processor 1202 may include one or more internal caches for data, instructions, or addresses. The present disclosure contemplates processor 1202 including any suitable number of any suitable internal caches, where appropriate. As an example and not by way of limitation, processor 1202 may include one or more instruction caches, one or more data caches, and one or more translation lookaside buffers (TLBs). Instructions in the instruction caches may be copies of instructions in memory 1204 or storage 1206, and the instruction caches may speed up retrieval of those instructions by processor 1202. Data in the data caches may be copies of data in memory 1204 or storage 1206 for instructions executing at processor 1202 to operate on; the results of previous instructions executed at processor 1202 for access by subsequent instructions executing at processor 1202 or for writing to memory 1204 or storage 1206; or other suitable data. The data caches may speed up read or write operations by processor 1202. The TLBs may speed up virtual-address translation for processor 1202. In particular embodiments, processor 1202 may include one or more internal registers for data, instructions, or addresses. The present disclosure contemplates processor 1202 including any suitable number of any suitable internal registers, where appropriate. Where appropriate, processor 1202 may include one or more arithmetic logic units (ALUs); be a multi-core processor; or include one or more processors 1202.
Although this disclosure describes and illustrates a particular processor, this disclosure contemplates any suitable processor.
In particular embodiments, memory 1204 includes main memory for storing instructions for processor 1202 to execute or data for processor 1202 to operate on. As an example and not by way of limitation, computer system 1200 may load instructions from storage 1206 or another source (such as, for example, another computer system 1200) to memory 1204. Processor 1202 may then load the instructions from memory 1204 to an internal register or internal cache. To execute the instructions, processor 1202 may retrieve the instructions from the internal register or internal cache and decode them. During or after execution of the instructions, processor 1202 may write one or more results (which may be intermediate or final results) to the internal register or internal cache. Processor 1202 may then write one or more of those results to memory 1204. In particular embodiments, processor 1202 executes only instructions in one or more internal registers or internal caches or in memory 1204 (as opposed to storage 1206 or elsewhere) and operates only on data in one or more internal registers or internal caches or in memory 1204 (as opposed to storage 1206 or elsewhere). One or more memory buses (which may each include an address bus and a data bus) may couple processor 1202 to memory 1204. Bus 712 may include one or more memory buses, as described below. In particular embodiments, one or more memory management units (MMUs) reside between processor 1202 and memory 1204 and facilitate accesses to memory 1204 requested by processor 1202. In particular embodiments, memory 1204 includes random access memory (RAM). This RAM may be volatile memory, where appropriate. Where appropriate, this RAM may be dynamic RAM (DRAM) or static RAM (SRAM). Moreover, where appropriate, this RAM may be single-ported or multi-ported RAM. The present disclosure contemplates any suitable RAM. Memory 1204 may include one or more memories 1204, where appropriate.
Although this disclosure describes and illustrates particular memory, this disclosure contemplates any suitable memory.
In particular embodiments, storage 1206 includes mass storage for data or instructions. As an example and not by way of limitation, storage 1206 may include an HDD, a floppy disk drive, flash memory, an optical disc, a magneto-optical disc, magnetic tape, or a Universal Serial Bus (USB) drive or a combination of two or more of these. Storage 1206 may include removable or non-removable (or fixed) media, where appropriate. Storage 1206 may be internal or external to computer system 1200, where appropriate. In particular embodiments, storage 1206 is non-volatile, solid-state memory. In particular embodiments, storage 1206 includes read-only memory (ROM). Where appropriate, this ROM may be mask-programmed ROM, programmable ROM (PROM), erasable PROM (EPROM), electrically erasable PROM (EEPROM), electrically alterable ROM (EAROM), or flash memory or a combination of two or more of these. This disclosure contemplates mass storage 1206 taking any suitable physical form. Storage 1206 may include one or more storage control units facilitating communication between processor 1202 and storage 1206, where appropriate. Where appropriate, storage 1206 may include one or more storages 1206. Although this disclosure describes and illustrates particular storage, this disclosure contemplates any suitable storage.
In particular embodiments, I/O interface 1208 includes hardware, software, or both providing one or more interfaces for communication between computer system 1200 and one or more I/O devices. Computer system 1200 may include one or more of these I/O devices, where appropriate. One or more of these I/O devices may enable communication between a person and computer system 1200. As an example and not by way of limitation, an I/O device may include a keyboard, keypad, microphone, monitor, mouse, printer, scanner, speaker, still camera, stylus, tablet, touchscreen, trackball, video camera, another suitable I/O device or a combination of two or more of these. An I/O device may include one or more sensors. This disclosure contemplates any suitable I/O devices and any suitable I/O interfaces 1208 for them. Where appropriate, I/O interface 1208 may include one or more device or software drivers enabling processor 1202 to drive one or more of these I/O devices. I/O interface 1208 may include one or more I/O interfaces 1208, where appropriate. Although this disclosure describes and illustrates a particular I/O interface, this disclosure contemplates any suitable I/O interface.
In particular embodiments, communication interface 710 includes hardware, software, or both providing one or more interfaces for communication (such as, for example, packet-based communication) between computer system 1200 and one or more other computer systems 1200 or one or more networks. As an example and not by way of limitation, communication interface 710 may include a network interface controller (NIC) or network adapter for communicating with an Ethernet or other wire-based network or a wireless NIC (WNIC) or wireless adapter for communicating with a wireless network, such as a WI-FI network. This disclosure contemplates any suitable network and any suitable communication interface 710 for it. As an example and not by way of limitation, computer system 1200 may communicate with an ad hoc network, a personal area network (PAN), a local area network (LAN), a wide area network (WAN), a metropolitan area network (MAN), or one or more portions of the Internet or a combination of two or more of these. One or more portions of one or more of these networks may be wired or wireless. As an example, computer system 1200 may communicate with a wireless PAN (WPAN) (such as, for example, a BLUETOOTH WPAN), a WI-FI network, a WI-MAX network, a cellular telephone network (such as, for example, a Global System for Mobile Communications (GSM) network), or other suitable wireless network or a combination of two or more of these. Computer system 1200 may include any suitable communication interface 710 for any of these networks, where appropriate. Communication interface 710 may include one or more communication interfaces 710, where appropriate. Although this disclosure describes and illustrates a particular communication interface, this disclosure contemplates any suitable communication interface.
In particular embodiments, bus 712 includes hardware, software, or both coupling components of computer system 1200 to each other. As an example and not by way of limitation, bus 712 may include an Accelerated Graphics Port (AGP) or other graphics bus, an Enhanced Industry Standard Architecture (EISA) bus, a front-side bus (FSB), a HYPERTRANSPORT (HT) interconnect, an Industry Standard Architecture (ISA) bus, an INFINIBAND interconnect, a low-pin-count (LPC) bus, a memory bus, a Micro Channel Architecture (MCA) bus, a Peripheral Component Interconnect (PCI) bus, a PCI-Express (PCIe) bus, a serial advanced technology attachment (SATA) bus, a Video Electronics Standards Association local (VLB) bus, or another suitable bus or a combination of two or more of these. Bus 712 may include one or more buses 712, where appropriate. Although this disclosure describes and illustrates a particular bus, this disclosure contemplates any suitable bus or interconnect.
Herein, reference to a computer-readable storage medium encompasses one or more tangible computer-readable storage media possessing structure. As an example and not by way of limitation, a computer-readable storage medium may include a semiconductor-based or other integrated circuit (IC) (such as, for example, a field-programmable gate array (FPGA) or an application-specific IC (ASIC)), a hard disk, an HDD, a hybrid hard drive (HHD), an optical disc, an optical disc drive (ODD), a magneto-optical disc, a magneto-optical drive, a floppy disk, a floppy disk drive (FDD), magnetic tape, a holographic storage medium, a solid-state drive (SSD), a RAM-drive, a SECURE DIGITAL card, a SECURE DIGITAL drive, or another suitable computer-readable storage medium or a combination of two or more of these, where appropriate. Herein, reference to a computer-readable storage medium excludes any medium that is not eligible for patent protection under 35 U.S.C. §101. Herein, reference to a computer-readable storage medium excludes transitory forms of signal transmission (such as a propagating electrical or electromagnetic signal per se) to the extent that they are not eligible for patent protection under 35 U.S.C. §101.
This disclosure contemplates one or more computer-readable storage media implementing any suitable storage. In particular embodiments, a computer-readable storage medium implements one or more portions of processor 1202 (such as, for example, one or more internal registers or caches), one or more portions of memory 1204, one or more portions of storage 1206, or a combination of these, where appropriate. In particular embodiments, a computer-readable storage medium implements RAM or ROM. In particular embodiments, a computer-readable storage medium implements volatile or persistent memory. In particular embodiments, one or more computer-readable storage media embody software. Herein, reference to software may encompass one or more applications, bytecode, one or more computer programs, one or more executables, one or more instructions, logic, machine code, one or more scripts, or source code, and vice versa, where appropriate. In particular embodiments, software includes one or more application programming interfaces (APIs). This disclosure contemplates any suitable software written or otherwise expressed in any suitable programming language or combination of programming languages. In particular embodiments, software is expressed as source code or object code. In particular embodiments, software is expressed in a higher-level programming language, such as, for example, C, Perl, or a suitable extension thereof. In particular embodiments, software is expressed in a lower-level programming language, such as assembly language (or machine code). In particular embodiments, software is expressed in JAVA. In particular embodiments, software is expressed in Hyper Text Markup Language (HTML), Extensible Markup Language (XML), JavaScript (JS), Java Server Pages (JSP), Hypertext Preprocessor (PHP), or another suitable markup language.
The present disclosure encompasses all changes, substitutions, variations, alterations, and modifications to the example embodiments herein that a person having ordinary skill in the art would comprehend. Similarly, where appropriate, the appended claims encompass all changes, substitutions, variations, alterations, and modifications to the example embodiments herein that a person having ordinary skill in the art would comprehend.