CN113778429A - Walk-through method, walk-through device and storage medium - Google Patents

Walk-through method, walk-through device and storage medium

Info

Publication number
CN113778429A
CN113778429A (application CN202011040808.4A; granted publication CN113778429B)
Authority
CN
China
Prior art keywords
walked
walkthrough
target object
control
difference information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011040808.4A
Other languages
Chinese (zh)
Other versions
CN113778429B (en)
Inventor
黄彭一
张超
祖征
王甜莉
窦西河
陈瑞
史凯欣
隋思晗
孙同
乔萌
王雪
刘雨晴
李航
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Jingdong Century Trading Co Ltd
Beijing Wodong Tianjun Information Technology Co Ltd
Original Assignee
Beijing Jingdong Century Trading Co Ltd
Beijing Wodong Tianjun Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Jingdong Century Trading Co Ltd, Beijing Wodong Tianjun Information Technology Co Ltd
Priority to CN202011040808.4A
Publication of CN113778429A
Application granted
Publication of CN113778429B
Status: Active
Anticipated expiration

Abstract

The application provides a walkthrough method, a walkthrough device, and a storage medium. The method includes: acquiring an object to be walked; performing overlay processing on the object to be walked and a target object, and displaying the overlay; obtaining style difference information of the object to be walked relative to the target object in response to an operation acting on the overlay; and generating a walkthrough report of the object to be walked according to the style difference information. The walkthrough scheme provided by the application effectively reduces the communication cost and improves the walkthrough efficiency.

Description

Walk-through method, walk-through device and storage medium
Technical Field
The present application relates to the field of computer technologies, and in particular, to a walkthrough method, apparatus, and storage medium.
Background
A walkthrough, also called a visual walkthrough, is an important step in the product development process: by visually reviewing the pages developed by front-end engineers, designers keep the developed pages consistent with the design draft, thereby ensuring the product's online visual effect and user experience.
However, in the walkthrough step, because designers and front-end engineers differ in their ways of thinking and expressing themselves, the communication cost between the two parties is high and walkthrough efficiency is low.
Disclosure of Invention
The application provides a walkthrough method, a walkthrough device and a storage medium, so that the communication cost is effectively reduced, and the walkthrough efficiency is improved.
In a first aspect, the present application provides a walkthrough method, including: acquiring an object to be walked; performing overlay processing on the object to be walked and a target object, and displaying the overlay; then, obtaining style difference information of the object to be walked relative to the target object in response to an operation acting on the overlay; and finally, generating a walkthrough report of the object to be walked according to the style difference information. The automatic walkthrough scheme provided by the application effectively improves walkthrough efficiency; in addition, because the walkthrough report is generated by a machine and its contents are relatively standardized, the communication cost between the designer and the front-end engineer is reduced.
In a possible implementation manner, when the type of the object to be walked is a computer end page, acquiring the object to be walked may include: and loading the object to be walked through the browser.
Or, when the type of the object to be walked is H5 page, acquiring the object to be walked may include: loading an object to be walked through a browser; and starting the mobile phone simulator of the browser.
Or, when the type of the object to be walked is an application page or an applet page, acquiring the object to be walked may include: responding to the selection operation acting on the first area, and displaying the two-dimensional code so that the mobile terminal uploads the object to be walked through scanning the two-dimensional code; and receiving the object to be walked. The scene selection interface comprises a first area, the first area is used for indicating a walkthrough application or an applet picture, and the scene selection interface is a starting interface of a walkthrough tool.
In a possible implementation, after receiving the object to be walked, the method further includes: displaying the object to be walked and the operation bar of the walkthrough tool.
In a possible implementation manner, performing overlay processing on an object to be walked and a target object to obtain an overlay, includes: and displaying the target object with preset opacity above the object to be walked to obtain a superimposed image.
In one possible implementation, obtaining the style difference information of the object to be walked relative to the target object in response to the operation acting on the overlay may include: in response to an operation acting on the overlay, identifying a location at which the operation acts; responding to an operation acted on a first control, and displaying a selectable annotation type, wherein the first control is contained in the operation bar and is used for adding an annotation at the position; and responding to the operation of the selectable marking type, and obtaining the pattern difference information of the object to be walked relative to the target object.
In a possible implementation manner, after obtaining the style difference information of the object to be walked relative to the target object, the method further includes: displaying the style difference information and a corresponding deleting control; responding to the operation acting on the deleting control, and prompting a user whether to determine the style difference information corresponding to the deleting control; and deleting or reserving the style difference information corresponding to the deletion control according to the input operation of the user.
In one possible implementation, generating the walkthrough report of the object to be walked according to the style difference information includes: generating the walkthrough report of the object to be walked in response to an operation acting on the second control. The walkthrough report includes the style difference information and the position of the style difference information in the object to be walked; the second control is contained in the operation bar and is used for generating the report.
In a possible implementation manner, before performing overlay processing on an object to be walked and a target object to obtain an overlay, the method further includes: calling a system folder in response to a click operation acted on a third control, wherein the system folder comprises a target object, the third control is contained in an operation bar, and the third control is used for importing the target object; the target object is loaded in response to a selected operation acting on the target object in the system folder.
In a possible implementation, the walkthrough method may further include: the operation bar is displayed in response to an opening operation applied to the walkthrough tool.
Further, in response to an opening operation applied to the walkthrough tool, displaying the operation bar may include: responding to the opening operation acting on the walkthrough tool, and displaying a scene selection interface, wherein the scene selection interface comprises a first area and a second area, the first area is used for indicating walkthrough application or small program pictures, and the second area is used for indicating walkthrough H5 or a computer end page; and displaying the operation bar in response to the selected operation on the second area.
In a possible implementation manner, after generating the walkthrough report of the object to be walked according to the style difference information, the method further includes: saving the walkthrough report to a preset position.
In a second aspect, an embodiment of the present application provides a walkthrough device, including:
the acquisition module is used for acquiring an object to be walked;
the processing module is used for performing overlay processing on the object to be walked and the target object and triggering the display module to display the overlay; obtaining style difference information of the object to be walked relative to the target object in response to an operation acting on the overlay; and generating a walkthrough report of the object to be walked according to the style difference information.
In a possible implementation manner, when the type of the object to be walked is a computer end page, the obtaining module is specifically configured to: and loading the object to be walked through the browser.
Or, when the type of the object to be walked is H5 page, the obtaining module is specifically configured to: loading an object to be walked through a browser; and starting the mobile phone simulator of the browser.
Or, when the type of the object to be walked is an application page or an applet page, the obtaining module is specifically configured to: responding to the selection operation acting on the first area, triggering a display module to display the two-dimensional code, so that the mobile terminal uploads the object to be walked through scanning the two-dimensional code; and receiving the object to be walked. The scene selection interface comprises a first area, the first area is used for indicating a walkthrough application or an applet picture, and the scene selection interface is a starting interface of a walkthrough tool.
In a possible implementation, the obtaining module is further configured to: trigger the display module to display the object to be walked and the operation bar of the walkthrough tool.
In a possible implementation manner, the processing module, when performing overlay processing on the object to be walked and the target object to obtain an overlay, is specifically configured to: and displaying the target object with preset opacity above the object to be walked to obtain a superimposed image.
In a possible implementation manner, when the processing module obtains the style difference information of the object to be walked relative to the target object in response to the operation applied to the overlay, the processing module is specifically configured to: in response to an operation acting on the overlay, identifying a location at which the operation acts; responding to the operation of a first control, triggering a display module to display a selectable annotation type, wherein the first control is contained in an operation bar and is used for adding an annotation at the position; and responding to the operation of the selectable marking type, and obtaining the pattern difference information of the object to be walked relative to the target object.
In one possible implementation, the processing module is further configured to: after obtaining the style difference information of the object to be walked through relative to the target object, triggering a display module to display the style difference information and a corresponding deletion control; responding to the operation acting on the deleting control, and triggering a display module to prompt a user whether to determine the style difference information corresponding to the deleting control; and deleting or reserving the style difference information corresponding to the deletion control according to the input operation of the user.
In a possible implementation manner, when generating the walkthrough report of the object to be walked according to the style difference information, the processing module is specifically configured to: generate the walkthrough report of the object to be walked in response to an operation acting on the second control. The walkthrough report includes the style difference information and the position of the style difference information in the object to be walked; the second control is contained in the operation bar and is used for generating the report.
In one possible implementation, the processing module is further configured to: before performing overlay processing on the object to be walked and the target object to obtain the overlay, call a system folder in response to a click operation acting on a third control, where the system folder contains the target object, the third control is contained in the operation bar, and the third control is used for importing the target object; and load the target object in response to a selection operation acting on the target object in the system folder.
In one possible implementation, the processing module is further configured to: and triggering the display module to display the operation bar in response to the opening operation acting on the walkthrough tool.
Further, the processing module is further configured to: responding to the opening operation acting on the walkthrough tool, triggering a display module to display a scene selection interface, wherein the scene selection interface comprises a first area and a second area, the first area is used for indicating walkthrough application or small program pictures, and the second area is used for indicating walkthrough H5 or a computer end page; and triggering the display module to display the operation bar in response to the selected operation acting on the second area.
In one possible implementation, the processing module is further configured to: after generating the walkthrough report of the object to be walked according to the style difference information, save the walkthrough report to a preset position.
On the basis of any one of the possible embodiments:
optionally, the mobile terminal and the server realize uploading of the object to be walked by adopting a technology of combining a Websocket and an HTTP request; or the mobile terminal and the server realize the uploading of the object to be walked by adopting a technology of combining Websocket and HTTPS requests; or the mobile terminal and the server only adopt HTTP or HTTPS request to realize the uploading of the object to be walked.
Alternatively, the preset opacity may be set by an opacity control included in the operation bar.
Optionally, the operation bar further includes:
at least one of a control for hiding the target object and a control for displaying the target object;
at least one of a control for locking the target object and a control for unlocking the target object.
Optionally, the selectable annotation types include at least one of:
font inconsistencies, color inconsistencies, spacing inconsistencies, picture blurring, ICON blurring, and others that instruct a user to enter custom content.
Optionally, the walkthrough report is in the form of a picture or a web page.
In a third aspect, the present application provides an electronic device, comprising: a memory and a processor;
the memory is used for storing instructions;
the processor is configured to invoke instructions in the memory to perform the method according to any of the first aspect.
In a fourth aspect, the present application provides a readable storage medium having a computer program stored thereon; the computer program, when executed, implements a method as defined in any one of the first aspects.
In a fifth aspect, the present application further provides a program product comprising a computer program, the computer program being stored on a readable storage medium, from which the computer program can be read by a processor, the processor executing the computer program to implement the method according to any one of the first aspect.
The application provides a walkthrough method, a walkthrough device, and a storage medium. In the method, after the object to be walked is obtained, overlay processing is performed on the object to be walked and a target object, and the overlay is displayed; then, style difference information of the object to be walked relative to the target object is obtained in response to an operation acting on the overlay; and finally, a walkthrough report of the object to be walked is generated according to the style difference information. The automatic walkthrough scheme provided by the application effectively improves walkthrough efficiency; in addition, because the walkthrough report is generated by a machine and its contents are relatively standardized, the communication cost between the designer and the front-end engineer is reduced.
Drawings
FIG. 1 is a schematic view of a download interface of the walkthrough tool provided herein;
FIG. 2 is a flow chart of a walkthrough method according to an embodiment of the present application;
FIG. 3 is a flow chart of a walkthrough method according to another embodiment of the present application;
FIG. 4 is a schematic view of an interface of a walkthrough tool according to an embodiment of the present application;
FIG. 5 is a schematic diagram of a scene selection interface provided in an embodiment of the present application;
FIG. 6 is a schematic interface diagram of another walkthrough tool provided in an embodiment of the present application;
FIG. 7 is a schematic diagram of an operation bar according to an embodiment of the present disclosure;
FIG. 8 is a schematic interface diagram of another walkthrough tool provided in an embodiment of the present application;
FIG. 9a is a schematic view of another operation bar provided in the present application;
FIG. 9b is a schematic view of another operation bar provided in the embodiment of the present application;
FIG. 10 is a schematic interface diagram of another walkthrough tool provided in an embodiment of the present application;
FIG. 11 is a schematic interface diagram of another walkthrough tool provided in an embodiment of the present application;
FIG. 12 is a schematic diagram of alternative types of labels provided in embodiments of the present application;
FIG. 13 is a schematic interface diagram of another walkthrough tool provided in an embodiment of the present application;
FIG. 14 is a schematic illustration of a walkthrough report provided by an embodiment of the present application;
FIG. 15 is a flow chart of a walkthrough method according to yet another embodiment of the present application;
FIG. 16 is a schematic interface diagram of another walkthrough tool provided in an embodiment of the present application;
FIG. 17 is a diagram of a browser window provided in an embodiment of the present application;
FIG. 18 is a schematic diagram of a mobile phone interface provided in the embodiments of the present application;
fig. 19 is a schematic diagram of another interface provided in the embodiment of the present application;
fig. 20 is a schematic diagram of another interface provided in the embodiment of the present application;
FIG. 21 is a flow chart of a walkthrough method according to yet another embodiment of the present application;
FIG. 22 is a flow chart of a walkthrough method according to yet another embodiment of the present application;
fig. 23 is a schematic structural diagram of a walkthrough device according to an embodiment of the present application;
fig. 24 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
It should be understood that "and/or" referred to in the embodiments of the present application describes an association relationship of associated objects, and indicates that three relationships may exist, for example, a and/or B may indicate: a exists alone, A and B exist simultaneously, and B exists alone, wherein A and B can be singular or plural. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship.
First, some terms related to the present application will be described:
canvas (Canvas) technology is used to generate images in real time on a web page and can manipulate the image content, essentially a bitmap that can be manipulated in JavaScript.
The Hypertext Transfer Protocol (HTTP) is a simple request-response protocol that typically runs on top of the Transmission Control Protocol (TCP). It specifies what messages the client may send to the server and what responses it gets back.
Hypertext Transfer Protocol Secure (HTTPS) is an HTTP channel aimed at security; on the basis of HTTP, it ensures the security of the transmission process through transport encryption and identity authentication.
WebSocket, which is a network transmission protocol, can perform full-duplex communication on a single TCP connection, and is located in an application layer of an Open System Interconnection (OSI) model.
Extensible Markup Language (XML), a subset of standard universal Markup Language, is a Markup Language used to mark electronic documents to make them structured.
HyperText Markup Language (HTML) is a standard Markup Language for creating web pages.
Cascading Style Sheets (CSS) is a computer language used to describe the presentation of documents written in markup languages such as HTML or XML.
The Document Object Model (DOM) is a programming interface for HTML and XML documents that connects web pages to scripting or programming languages.
ICON is an icon file format used for system icons, software icons, and the like, with extensions such as .icon and .ico; the icons seen in common software or on the Windows desktop are typically in this format.
The technical problems to be solved by the present application and the inventive concept thereof are further explained below based on the prior art.
The inventor found that: in the walkthrough step, the communication cost between designer and front-end engineer is very high because the two differ in how they think, express themselves, and implement; in addition, because front-end engineers are not sensitive to design styles, the fidelity of the developed page to the design is often low, which further increases the designer's walkthrough cost.
Specifically, in the conventional walkthrough process, a designer needs to import a screenshot of the development page into design software such as PS or Sketch, compare it with the design draft, find and explain the problem points, and finally output a walkthrough report to send to the front-end engineer. In this process, the designer obtains the development-page screenshot in various ways; if the screenshot is taken on a mobile terminal, it must also be transferred to the computer in some way, because the design draft and the design software are both on the computer. Moreover, after comparing the development-page screenshot with the design draft, each designer annotates problems and outputs the walkthrough report differently, which is inconvenient for front-end engineers to browse and read and further increases the communication cost.
For front-end engineers' self-checks, a Chrome extension can currently be installed that stacks the design draft above the development page so that differences from the design draft can be found by comparison; however, it cannot add annotations or generate a walkthrough report. The prior art related to this scenario mainly includes the following:
(1) Chrome extensions: small programs that can modify or enhance Chrome browser functionality. An extension can be written with various Web technologies, such as HyperText Markup Language (HTML), JavaScript, and Cascading Style Sheets (CSS). The files required by an extension are packaged into a single file that users download and install. This means that extensions do not depend on web content the way ordinary web applications do; they have very little user interface, such as a small icon on the Chrome browser toolbar, or no user interface at all.
(2) jQuery: a fast and concise JavaScript library (or JavaScript framework), another excellent JavaScript code base after Prototype, which implemented an object-oriented mechanism.
In summary, the current walkthrough has the following problems:
(1) at present, products aiming at the whole process of the walkthrough do not exist, namely, none of the products completely comprises the functions of uploading pictures, overlapping picture comparison, webpage labeling, generation of a walkthrough report and the like.
(2) In existing Chrome extensions, style walkthrough tools only provide overlay comparison: an image inserted into the page can be moved, its transparency changed, and so on, but annotations cannot be added and a walkthrough report cannot be generated.
(3) In the existing design software, only the picture can be marked, and the opened real webpage cannot be directly marked.
All the above problems affect the efficiency of the walkthrough, and therefore, the present application provides a walkthrough method, apparatus and storage medium to improve the efficiency of the walkthrough. In particular, a walkthrough tool (or called a "walkthrough plug-in") is provided, which comprises functions of uploading pictures, comparing stacked pictures, marking web pages and generating reports.
It should be noted that, before executing the walkthrough solution provided by the present application, the walkthrough tool needs to be installed on the executing end (e.g., a computer). Illustratively, open the Same official website, whose interface is shown in FIG. 1, and click "install walkthrough tool"; the browser downloads it automatically. The walkthrough tool can be implemented as a Chrome browser plug-in and needs to be installed as an extension of the Chrome browser. The installation path may specifically be:
the first installation path is ' open Chrome online application store → search ' Same design walkthrough tool ' → after finding the Same design walkthrough tool, click ' add to Chrome ' button → complete installation.
The second installation path is ' right click of mouse on tool bar of Chrome browser → click on ' manage extension ' button in floating layer → launch developer mode → click on ' load extension ' button in floating layer → find decompressed installation package of the Same design walkthrough tool in system folder → complete installation.
The technical solution of the present application will be described in detail below with reference to specific examples. It should be noted that the following specific embodiments may be combined with each other, and the same or similar concepts or processes may not be described in detail in some embodiments.
Fig. 2 is a flowchart of a walkthrough method according to an embodiment of the present disclosure. The method is explained by taking a computer as the execution subject. It should be understood that the execution subject of the walkthrough method provided by the present application is not limited to a computer, as long as the electronic device can run a Chrome browser that supports installing the extension.
As shown in fig. 2, the method comprises the steps of:
S201, obtaining an object to be walked.
It will be appreciated that the object to be walked is the page or interface developed by the front-end engineer based on the design draft provided by the designer. Depending on the usage scenario, the object to be walked may include: an H5 page, a computer-side page (i.e., a PC page), an APPlication (APP) page, an applet page, and so on. In different usage scenarios, the computer acquires the object to be walked in different ways, as described in the following embodiments.
S202, performing overlay processing on the object to be walked and the target object, and displaying the overlay.
The target object is the design draft provided by the designer, that is, the expected appearance of the development page. The object to be walked and the target object are displayed overlapping each other to obtain the overlay, so that the designer and/or the front-end engineer can compare them.
S203, responding to the operation acted on the overlay, and obtaining the style difference information of the object to be walked relative to the target object.
In practical applications, the designer and/or the front-end engineer looks for differences between the object to be walked and the target object on the overlay; when a difference is found, an operation such as a click, double-click, or long press is applied at that difference. Accordingly, the computer obtains the style difference information corresponding to the difference in response to the operation.
And S204, generating a walkthrough report of the object to be walked according to the style difference information.
Because the style difference information of the object to be walked relative to the target object has been determined and its position is known, the contents of the walkthrough report are determined, and the walkthrough report is generated from these contents.
According to the walkthrough method above, after the object to be walked is obtained, overlay processing is performed on the object to be walked and the target object, and the overlay is displayed; then, style difference information of the object to be walked relative to the target object is obtained in response to an operation acting on the overlay; finally, a walkthrough report of the object to be walked is generated according to the style difference information. The automatic walkthrough scheme provided by the application effectively improves walkthrough efficiency; in addition, because the walkthrough report is generated by a machine and its contents are relatively standardized, the communication cost between the designer and the front-end engineer is reduced.
On the basis of the above, the following embodiments exemplify different usage scenarios.
In some embodiments, from the perspective of the designer, the walk H5 page, i.e., the type of object to be walked, is H5 page. Specifically, referring to fig. 3, the walkthrough method in the present embodiment may include the following steps:
S301, loading the object to be walked through the browser.
And S302, starting a mobile phone simulator of the browser.
Wherein, S301 and S302 are further explained for S201 in the embodiment shown in fig. 2 when the type of the object to be walked is H5 page.
Illustratively, a Chrome browser is used to open the development-page link (i.e., the object to be walked), and the browser's mobile phone simulator is then opened, as shown in fig. 4. When the type of the object to be walked is a PC page, acquiring the object to be walked only requires loading it through the browser; that is, the object to be walked may be acquired through S301 alone. The mobile phone simulator converts the PC page into an H5 page.
And S303, responding to the opening operation acted on the walkthrough tool, and displaying a scene selection interface.
The scene selection interface comprises a first area and a second area, wherein the first area is used for indicating a walkthrough Application (APP) or an applet picture, and the second area is used for indicating a walkthrough H5 or a PC page.
For example, at the Chrome browser toolbar, click on the icon of the walkthrough tool, launch the walkthrough tool, and display the scene selection interface. Illustratively, the scene selection interface implements the custom style and function of a pop-up (popup) panel by using Jquery in combination with CSS technology, as shown in FIG. 5. The position relationship between the scene selection interface and the object to be walked can be as illustrated in fig. 6.
S304, responding to the selected operation acted on the second area, and displaying an operation bar.
Since the present embodiment is described by taking the object to be walked as an H5 page as an example, the designer may input a selection operation for the second area, specifically, the operation may be pointing to the second area by using a mouse, and selecting the second area by clicking, double clicking, or long-pressing the mouse, and so on.
Illustratively, the computer communicates with the browser web page through an Application Programming Interface (API) developed by the Chrome extension program in response to the selected operation on the second area, and starts an operation bar of the walkthrough tool when the listener in the content.js file hears the message, as shown in fig. 7. The position relationship between the operation bar and the object to be walked can be as illustrated in fig. 8.
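As a rough sketch of this handshake (the message type, element ids, and button labels are illustrative assumptions, not taken from the patent), the popup can notify the listener in content.js through the Chrome extension messaging API, and the listener then injects the operation bar into the page:

```javascript
// popup.js — runs when the second area ("walk through H5 / PC page") is selected.
document.querySelector('#area-web').addEventListener('click', () => {
  chrome.tabs.query({ active: true, currentWindow: true }, (tabs) => {
    // Tell the content script of the current page to show the operation bar.
    chrome.tabs.sendMessage(tabs[0].id, { type: 'SHOW_TOOLBAR' });
  });
});

// content.js — injected into the browser page being walked through.
chrome.runtime.onMessage.addListener((message) => {
  if (message.type === 'SHOW_TOOLBAR') {
    const bar = document.createElement('div');
    bar.id = 'walkthrough-toolbar';
    bar.innerHTML = `
      <button id="add-annotation">add annotation</button>
      <button id="generate-report">generate report</button>
      <button id="import-draft">import design draft</button>`;
    document.body.appendChild(bar);
  }
});
```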
Referring to fig. 7, the operation bar includes: the control for adding the label, the control for generating the report and the control for importing the design draft are respectively marked as follows: "add annotation", "generate report", and "import draft". For convenience of description, the above controls are respectively defined as: a first control, a second control, and a third control. It should be noted that the application does not limit the types and the number of the controls included in the operation bar; and when the controls contained in the operation bar are fixed, the relative positions of the controls are not limited, and the controls can be specifically set according to actual requirements.
Alternatively, the walkthrough tool may also be designed to: the operation bar is displayed in response to an opening operation applied to the walkthrough tool. I.e. the step of scene selection is omitted.
S305, responding to the click operation acted on the third control, and calling the system folder.
Wherein the system folder contains the target object. The designer may find the target object in the system folder and select it. Illustratively, after the designer clicks the "import design draft" control of the operation bar, the computer calls the system folder through the file upload mode of the input tag of the HTML, finds the save position of the target object in the system folder, and selects the target object.
S306, responding to the selected operation of the target object in the system folder, and loading the target object.
For loading the target object, in one example, the target object may be converted to base64 format and inserted through an IMG tag. Currently, pictures in jpg, jpeg, and png formats are supported for upload.
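Continuing the sketch above, a minimal illustration of this import step using the standard file-input and FileReader APIs (element ids and styling are assumptions):

```javascript
// content.js — a sketch of "import design draft": a hidden <input type="file">
// opens the system folder, and the chosen image is converted to base64 and
// inserted as an <img>. Names are illustrative.
document.querySelector('#import-draft').addEventListener('click', () => {
  const input = document.createElement('input');
  input.type = 'file';
  input.accept = '.jpg,.jpeg,.png';       // formats currently supported
  input.addEventListener('change', () => {
    const file = input.files[0];
    if (!file) return;
    const reader = new FileReader();
    reader.onload = () => {
      const img = document.createElement('img');
      img.id = 'design-draft-overlay';
      img.src = reader.result;            // data URL, i.e. base64-encoded picture
      img.style.position = 'absolute';
      img.style.top = '0';
      img.style.left = '0';
      img.style.opacity = '0.5';          // default 50% opacity, see below
      document.body.appendChild(img);
    };
    reader.readAsDataURL(file);           // base64-encode the target object
  });
  input.click();                          // calls up the system file dialog
});
```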
The target object is acquired through S305 and S306.
And S307, displaying the target object with preset opacity above the object to be walked to obtain a superposed image.
It is understood that this step is a further description of S202 in the embodiment shown in fig. 2.
Typically, the preset opacity defaults to 50%.
During overlay comparison, to make it convenient to view the object to be walked, the opacity of the target object can be customized by setting the CSS opacity value; the range is 0-100%, where 0% is equivalent to completely hidden and 100% is equivalent to completely displayed.
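A small sketch of how such a 0-100% setting can be mapped onto the CSS opacity property (function and element names are illustrative):

```javascript
// Map the user-facing 0-100% value onto CSS `opacity`, which takes 0-1.
function setOverlayOpacity(percent) {
  const overlay = document.querySelector('#design-draft-overlay');
  overlay.style.opacity = String(percent / 100); // 0% hides, 100% fully shows
}

setOverlayOpacity(50); // the preset default
```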
Alternatively, the preset opacity may be set by an opacity control included in the operation bar. For example, as shown in fig. 9a and 9b, the operation bar may further include:
at least one of a control for hiding the target object and a control for displaying the target object, respectively identified as "hidden overlay" and "displayed overlay";
at least one of the controls for locking the target object and the controls for unlocking the target object are identified as a "locked overlay" and an "unlocked overlay," respectively.
The above labels are merely exemplary, and the present application is not limited thereto. For convenience of distinction, after the control is clicked, the control can be displayed differently from other controls, for example, by attribute information such as color and size.
Clicking the 'hidden overlay' control can hide the target object, and clicking the 'display overlay' control can display the target object again. Meanwhile, the target object can be locked, which is a function to avoid that the target object is moved by the mouse by mistake, so that the designer needs to align with the object to be checked again, for example: this may be accomplished by disabling the browser's drag event.
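A minimal sketch of the hide/show and lock/unlock behavior, assuming the design draft is the overlay element inserted earlier; swallowing the drag event is just one possible locking approach, as the text suggests:

```javascript
// A sketch of the hide / show and lock / unlock controls for the design draft
// overlay; element ids and function names are illustrative.
const overlay = document.querySelector('#design-draft-overlay');

function hideOverlay() { overlay.style.display = 'none'; }
function showOverlay() { overlay.style.display = 'block'; }

function preventDrag(event) {       // swallow drag events while locked
  event.preventDefault();
  event.stopPropagation();
}
function lockOverlay()   { overlay.addEventListener('dragstart', preventDrag); }
function unlockOverlay() { overlay.removeEventListener('dragstart', preventDrag); }
```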
In addition, in the embodiment of the application, the control for hiding the target object and the control for displaying the target object are alternatively displayed. That is, after the designer clicks the "hide overlay" control, the computer displays the control for displaying the target object, or after the designer clicks the "show overlay" control, the computer displays the control for hiding the target object, with the other controls being unchanged by default. Similarly, a control for locking the target object and a control for unlocking the target object are alternatively displayed.
Alternatively, referring to fig. 9a and 9b, since the target object is already imported, the control for importing the target object, i.e., the third control as described above, may no longer be included in the operation bar.
Alternatively, all of the above-mentioned controls may be included in the operation bar at the same time.
For example, after the position of the target object on the object to be walked is adjusted and aligned, the overlay may be as shown in fig. 10. And then, the designer compares the contents on the superimposed graph one by one to find out the style of the object to be walked, which is not realized according to the target object.
Optionally, S203 in the embodiment shown in fig. 2 may be further subdivided into S308, S309, and S310. Wherein:
S308, responding to the operation acting on the overlay, and identifying the position at which the operation acts.
When a style that is not implemented according to the target object is found in the object to be walked, the designer clicks on that position so that the computer identifies it; for example, the position is marked with a box, and the boxes are numbered in sequence (①, ②, ③, …) in the order in which they are found, as illustrated in fig. 11.
S309, responding to the operation acted on the first control, and displaying the selectable annotation type.
Wherein the first control is used for adding a label at the position.
Further, the selectable annotation types may include at least one of:
font inconsistencies, color inconsistencies, pitch inconsistencies, picture blurring, ICON blurring, and others. Wherein the other is used to instruct the user to enter custom content.
Illustratively, the designer clicks on a first control identified with "add annotation" in the operation bar, and in response to the operation, the computer starts the annotation function of the walk-through tool and inserts a canvas area in the object to be walked through canvas (canvas) technology, wherein the height of the canvas area is consistent with the height of the object to be walked. At this time, the designer clicks or drags a box on the Object to be walked by using a mouse, a dynamic Document Object Model (DOM) structure is inserted, and the computer calculates the position of the inserted DOM structure and the width and height of the dragged box by acquiring the XY coordinates of the current clicked position, thereby generating a selection box labeled with a type.
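The following sketch illustrates this annotation step with standard DOM and Canvas APIs; the event handling, ids, and styling are assumptions for illustration only:

```javascript
// content.js — a sketch of "add annotation": insert a page-sized canvas layer,
// read the XY coordinates of the click/drag, and insert a dynamic DOM box at
// that position.
let annotationCounter = 0;

document.querySelector('#add-annotation').addEventListener('click', () => {
  const layer = document.createElement('canvas');
  layer.width = document.documentElement.scrollWidth;
  layer.height = document.documentElement.scrollHeight; // matches the page height
  layer.style.cssText = 'position:absolute;top:0;left:0;z-index:9999;';
  document.body.appendChild(layer);

  layer.addEventListener('mousedown', (down) => {
    layer.addEventListener('mouseup', (up) => {
      const box = document.createElement('div');         // dynamic DOM structure
      box.className = 'walkthrough-mark';
      box.style.cssText = [
        'position:absolute;border:2px solid red;z-index:10000',
        `left:${Math.min(down.pageX, up.pageX)}px`,
        `top:${Math.min(down.pageY, up.pageY)}px`,
        `width:${Math.abs(up.pageX - down.pageX)}px`,
        `height:${Math.abs(up.pageY - down.pageY)}px`,
      ].join(';');
      box.dataset.index = String(++annotationCounter);    // sequence number of the mark
      box.textContent = box.dataset.index;
      document.body.appendChild(box);
      // A selection box listing the annotation types would be shown next to `box` here.
    }, { once: true });
  });
});
```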
S310, responding to the operation of the selectable annotation type, and obtaining the style difference information of the object to be walked relative to the target object.
For example, referring to FIG. 12, the designer clicks on an option in the selection box, such as "font inconsistent", clicks on the "OK" button, and the annotation is successfully added. If "other" is clicked, the designer may input custom content of, for example, 100 characters (which may be adjusted according to actual requirements). And the computer stores each added marking information in a front-end variable, and controls to display marked content through a display (display) attribute of the CSS if a designer clicks a serial number icon of an existing mark.
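A minimal sketch of this bookkeeping, with illustrative names:

```javascript
// Front-end variable holding every added annotation.
const annotations = [];

function addAnnotation(index, type, customText, boxEl, detailEl) {
  annotations.push({ index, type, customText, boxEl, detailEl });
}

// Clicking the serial-number icon of an existing annotation toggles its detail
// panel through the CSS `display` property.
function toggleAnnotation(index) {
  const item = annotations.find((a) => a.index === index);
  if (!item) return;
  const hidden = item.detailEl.style.display === 'none';
  item.detailEl.style.display = hidden ? 'block' : 'none';
}
```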
In some embodiments, after S310, the walkthrough method may further include: displaying the style difference information of the object to be walked relative to the target object and a corresponding deletion control. For example, still referring to fig. 11, the style difference information whose annotation type is "spacing inconsistent" is shown with its delete control behind it. If the designer clicks the delete control behind the style difference information, the computer responds by prompting the user to confirm whether to delete the style difference information corresponding to that control, for example, as shown in fig. 13, with the prompt "Are you sure you want to delete this annotation?". The computer then deletes or retains the style difference information corresponding to the delete control according to the user's subsequent input. For example, if the designer clicks "OK", the annotation is deleted and the corresponding dynamic DOM structure and data information are removed; if the designer clicks "Cancel", the annotation is not deleted.
After all styles in the object to be walked that were not implemented according to the target object have been found and annotated, the walkthrough is finished, and the designer clicks the second control labeled "generate report" in the operation bar. Accordingly, the computer responds by executing S311.
And S311, responding to the operation acted on the second control, and generating a walkthrough report of the object to be walkthrough.
It is understood that this step is a further description of S204 in the embodiment shown in fig. 2. The walkthrough report includes the style difference information and the position of the style difference information in the object to be walked.
In one implementation, the computer may convert the object to be walked and the label information into a picture by an HTML-to-Image (Image) technique, and automatically generate a walking report of the object to be walked. In this implementation, the walkthrough report is in the form of a picture, such as that shown in fig. 14. In the walkthrough report shown in fig. 14, the left part is the object to be walked; the right part is a walkthrough annotation, i.e. style difference information.
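For illustration, a sketch of the picture-form report generation; the patent does not name a specific library, so html2canvas (assumed to be bundled with the extension) is used here only as a stand-in for the HTML-to-Image step:

```javascript
// content.js — a sketch of "generate report".
import html2canvas from 'html2canvas';

async function generateReport() {
  // Capture the object to be walked together with the annotation boxes and notes.
  const canvas = await html2canvas(document.body);
  const picture = canvas.toDataURL('image/png');

  // Save the picture form of the walkthrough report; the browser places it in
  // its download folder by default.
  const link = document.createElement('a');
  link.href = picture;
  link.download = 'walkthrough-report.png';
  link.click();
}

document.querySelector('#generate-report').addEventListener('click', generateReport);
```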
In another implementation, the walkthrough report may optionally be in the form of a web page, that is, a web page version is used to store the current annotation information, so as to edit and modify and store data at a later stage. The type of the walkthrough report can be specifically set according to actual requirements.
In some embodiments, the walk through method may further include:
and S312, saving the walkthrough report to a preset position.
For example, the walkthrough report is saved by default in a download folder of the computer system for review by designers and front-end engineers or other personnel.
Through S301 to S312, the computer realizes walk-through of the H5 page from the designer perspective.
In other embodiments, from the perspective of the designer, an APP or applet page is walked through, that is, the type of the object to be walked is an APP page or an applet page. In this case, the format of the object to be walked may be PNG, JPG, JPEG, or the like, and its size may be, for example, 10 megabytes (MB) or another value.
Specifically, referring to fig. 15, the walkthrough method in the present embodiment may include the following steps:
S401, responding to the opening operation of the walkthrough tool, and displaying a scene selection interface.
The scene selection interface comprises a first area and a second area, the first area is used for indicating walkthrough APP or small program pictures, and the second area is used for indicating walkthrough H5 or PC pages. The scene selection interface is the launch interface of the walkthrough tool.
For example, at the Chrome browser toolbar, click on the icon of the walkthrough tool, launch the walkthrough tool, and display the scene selection interface. Illustratively, the scene selection interface implements the custom style and function of a pop-up (popup) panel by using Jquery in combination with CSS technology, as shown in FIG. 5.
S402, responding to the selection operation acted on the first area, and displaying the two-dimensional code so that the mobile terminal can transmit the object to be walked through scanning the two-dimensional code.
Wherein, the two-dimensional code can be a dynamic two-dimensional code.
Since the present embodiment is described by taking the object to be walked as an application page or an applet page as an example, a designer may input a selection operation for the first area, specifically, the operation may be pointing to the first area by using a mouse, and selecting the first area by clicking, double clicking, or long-pressing the mouse, and so on.
Illustratively, the computer responds to the selection operation on the first area, uses JS to generate a two-dimensional code by using a Uniform Resource Locator (URL) and a dynamic picture ID, displays the two-dimensional code, and prompts "please scan the two-dimensional code to upload a screenshot page by using mobile phone WeChat", as shown in fig. 16. The URL is composed of a fixed path part and a dynamic picture ID, the fixed part comprises a domain name and a path name, and the dynamic picture ID is a unique character string generated randomly.
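A sketch of this step; the `qrcode` npm package (assumed to be bundled) and the domain/path are illustrative assumptions, since the patent only specifies that the URL combines a fixed part with a randomly generated dynamic picture ID:

```javascript
import QRCode from 'qrcode';

function randomPictureId() {
  // Dynamic picture ID: a randomly generated unique character string.
  return Math.random().toString(36).slice(2) + Date.now().toString(36);
}

async function showUploadQrCode(container) {
  const pictureId = randomPictureId();
  // URL = fixed part (domain name + path name) + dynamic picture ID.
  const url = `https://example.com/walkthrough/upload?id=${pictureId}`;
  const img = document.createElement('img');
  img.src = await QRCode.toDataURL(url);   // render the QR code as a data URL
  container.appendChild(img);
  return pictureId;                        // used later to match the uploaded picture
}
```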
Optionally, the browser window displays a "waiting for upload" or similar prompt, as shown in FIG. 17. The computer notifies the corresponding listener in content.js through the communication API of the Chrome extension, establishes a WebSocket long connection, and requests the uploaded picture information from the server using a heartbeat reconnection mechanism.
Optionally, the mobile terminal and the server realize uploading of the object to be walked by adopting a technology of combining a Websocket and an HTTP request; or the mobile terminal and the server realize the uploading of the object to be walked through the technology of combining the Websocket and the HTTPS request; or the mobile terminal and the server only adopt HTTP or HTTPS request to realize the uploading of the object to be walked.
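A sketch of the computer-side WebSocket channel with heartbeat reconnection; the endpoint, message shapes, and intervals are assumptions:

```javascript
function connectForUpload(pictureId, onPicture) {
  const ws = new WebSocket('wss://example.com/walkthrough/ws');
  let heartbeat;
  let received = false;

  ws.onopen = () => {
    ws.send(JSON.stringify({ type: 'watch', pictureId }));
    heartbeat = setInterval(() => ws.send(JSON.stringify({ type: 'ping' })), 15000);
  };

  ws.onmessage = (event) => {
    const msg = JSON.parse(event.data);
    if (msg.type === 'picture' && msg.pictureId === pictureId) {
      received = true;
      onPicture(msg.data);   // the uploaded object to be walked, e.g. a base64 image
      ws.close();            // actively close the data transmission channel
    }
  };

  ws.onclose = () => {
    clearInterval(heartbeat);
    if (!received) {
      // Heartbeat reconnection: re-establish the channel until the upload arrives.
      setTimeout(() => connectForUpload(pictureId, onPicture), 3000);
    }
  };
}
```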
After the designer scans the two-dimensional code with the mobile phone WeChat, a page as shown in FIG. 18 is displayed on the phone. The designer then clicks the "upload 1x image (375px)" or "upload 2x image (750px)" button, the photo album on the phone is opened, and the development-page screenshot is selected as the object to be walked and uploaded to the server through an HTTP or HTTPS request. During the upload, the phone prompts "uploading", as shown in FIG. 19; after the upload succeeds, the phone prompts that "the page has been uploaded to the computer for walkthrough", as shown in FIG. 20.
And S403, receiving the object to be walked.
In this embodiment, the object to be walked is acquired through S401 to S403.
After the designer inputs the selection operation for the first area, a data transmission channel is established between the computer and the server through WebSocket. After the mobile terminal uploads the object to be walked and the server receives it, the server forwards the received object to be walked to the corresponding computer through this data transmission channel by matching the dynamic picture ID. Once the computer has successfully received the object to be walked, it actively closes the data transmission channel. During this process, the computer can also use a timer to periodically poll the picture-request interface, and the timer is cleared after the object to be walked is successfully received.
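A sketch of the polling fallback described above; the picture-request endpoint and response shape are placeholders:

```javascript
function pollForUpload(pictureId, onPicture) {
  const timer = setInterval(async () => {
    const resp = await fetch(`https://example.com/walkthrough/picture?id=${pictureId}`);
    if (!resp.ok) return;
    const body = await resp.json();
    if (body.uploaded) {
      clearInterval(timer); // clear the timer after the object is successfully received
      onPicture(body.data);
    }
  }, 2000);
  return timer;
}
```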
And S404, displaying the object to be walked and the operation bar of the walkthrough tool.
And when the computer receives the object to be walked through the data transmission channel, the computer displays the object to be walked.
The description of the operation bar is as described above, and is not repeated here. In addition, S405 to S412 are respectively the same as S305 to S312, and the description of this embodiment is omitted.
Through S401 to S412, the computer realizes walkthrough of the application page or the applet page from the designer perspective.
The embodiments shown in fig. 3 and fig. 15 above perform the walkthrough from the designer's perspective. Unlike the designer, a front-end engineer only needs to find the differences between the object to be walked and the target object, without annotating them or generating a walkthrough report; therefore, the front-end engineer only uses the overlay comparison function of the walkthrough tool, i.e., S301 to S307 in the embodiment shown in fig. 3, or S401 to S407 in the embodiment shown in fig. 15.
Fig. 21 is a flowchart of a walkthrough method according to another embodiment of the present application. This embodiment shows the designer's walkthrough flow: install the walkthrough tool → select whether to walk through an H5/PC page or an application/applet page.
If an H5 or PC page is walked through, further select the H5 page or the PC page:
If the H5 page is walked through: open the H5 page address → open the browser's mobile phone simulator → start the walkthrough tool → upload the design draft → compare and annotate → generate a walkthrough report and send it by mail.
If the PC page is walked through: open the PC page address → start the walkthrough tool → upload the design draft → compare and annotate → generate a walkthrough report and send it by mail.
If an application or applet page is walked through: start the walkthrough tool → scan the two-dimensional code → upload a screenshot of the page to be walked → upload the design draft → compare and annotate → generate a walkthrough report and send it by mail.
Fig. 22 is a flowchart of a walkthrough method according to another embodiment of the present application. This embodiment shows the front-end engineer's walkthrough flow: install the walkthrough tool → select whether to walk through an H5/PC page or an application/applet page.
If an H5 or PC page is walked through, further select the H5 page or the PC page:
If the H5 page is walked through: open the H5 page address → open the browser's mobile phone simulator → start the walkthrough tool → upload the design draft → compare and self-check.
If the PC page is walked through: open the PC page address → start the walkthrough tool → upload the design draft → compare and self-check.
If an application or applet page is walked through: start the walkthrough tool → scan the two-dimensional code → upload a screenshot of the page to be walked → upload the design draft → compare and self-check.
According to the method of the present application, the user interface (UI) self-check of front-end development engineers and the designer's automatic design walkthrough, annotation, and walkthrough-report generation for H5 pages, PC pages, application pages, or applet pages are realized through a Chrome extension. The walkthrough scheme provided by the application has at least the following advantages:
(1) repeated and complicated operations in the designer walkthrough link are simplified, standard walkthrough documents are generated quickly, and the working efficiency is improved;
(2) the communication cost between the front-end engineer and the designer is reduced;
(3) the visual self-checking function is provided for the front-end engineer, and the problem that the front-end engineer is insensitive to the visual restoration degree is solved.
The following are embodiments of the apparatus of the present application that may be used to perform embodiments of the method of the present application. For details which are not disclosed in the embodiments of the apparatus of the present application, reference is made to the embodiments of the method of the present application.
Fig. 23 is a schematic structural diagram of a walkthrough device according to an embodiment of the present application. The embodiment of the application provides a walkthrough device which can be integrated on an electronic device such as a computer. As shown in fig. 23, the walkthrough apparatus 50 includes: an acquisition module 51, a processing module 52, and a display module 53. Wherein:
the obtaining module 51 is configured to obtain an object to be walked.
The processing module 52 is configured to: perform overlay processing on the object to be walked and the target object, and trigger the display module 53 to display the overlay; obtain style difference information of the object to be walked relative to the target object in response to an operation acting on the overlay; and generate a walkthrough report of the object to be walked according to the style difference information.
Optionally, when the type of the object to be walked is a computer end page, the obtaining module 51 is specifically configured to: and loading the object to be walked through the browser.
Or, when the type of the object to be walked is H5 page, the obtaining module 51 is specifically configured to: loading an object to be walked through a browser; and starting the mobile phone simulator of the browser.
Or, when the type of the object to be walked is an application page or an applet page, the obtaining module 51 is specifically configured to: in response to the selection operation acting on the first area, triggering the display module 53 to display the two-dimensional code, so that the mobile terminal can upload the object to be walked through scanning the two-dimensional code; and receiving the object to be walked. The scene selection interface comprises a first area, the first area is used for indicating a walkthrough application or an applet picture, and the scene selection interface is a starting interface of a walkthrough tool.
Further, the obtaining module 51 is further configured to: trigger the display module 53 to display the object to be walked and the operation bar of the walkthrough tool.
In some embodiments, when the processing module 52 performs overlay processing on the object to be walked and the target object to obtain an overlay, the overlay may be specifically configured to: and displaying the target object with preset opacity above the object to be walked to obtain a superimposed image.
In some embodiments, when the processing module 52 responds to the operation acting on the overlay to obtain the style difference information of the object to be walked relative to the target object, it is specifically configured to: in response to an operation acting on the overlay, identify the position at which the operation acts; in response to an operation on a first control, trigger the display module 53 to display selectable annotation types, where the first control is contained in the operation bar and is used for adding an annotation at that position; and in response to an operation on a selectable annotation type, obtain the style difference information of the object to be walked relative to the target object.
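The annotation flow can be sketched as follows, where the click position is taken from the page coordinates of the operation and a single annotation type is hard-coded in place of the picker opened by the first control. The type names mirror the annotation types listed later, and the data shape is an assumption for illustration.

```typescript
// Sketch of the annotation flow: a click on the overlaid page is recorded as a position,
// the user picks an annotation type, and the pair is kept as one style difference entry.

type AnnotationType =
  | "font inconsistency"
  | "color inconsistency"
  | "spacing inconsistency"
  | "picture blurred"
  | "icon blurred"
  | "other";

interface StyleDifference {
  x: number;        // position of the operation, relative to the page
  y: number;
  type: AnnotationType;
  note?: string;    // free text when the type is "other"
}

const differences: StyleDifference[] = [];

document.addEventListener("click", (event: MouseEvent) => {
  const position = { x: event.pageX, y: event.pageY };
  // In the real tool the first control would open a small picker near the click;
  // a fixed choice keeps the sketch short.
  const chosen: AnnotationType = "color inconsistency";
  differences.push({ ...position, type: chosen });
});
```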
Optionally, the processing module 52 is further configured to: after obtaining the style difference information of the object to be walked relative to the target object, trigger the display module 53 to display the style difference information and a corresponding delete control; in response to an operation on the delete control, trigger the display module 53 to prompt the user to confirm whether to delete the style difference information corresponding to the delete control; and delete or retain the style difference information corresponding to the delete control according to the input operation of the user.
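One possible rendering of a style difference together with its delete control is sketched below, using the browser's confirm() dialog as a stand-in for whatever prompt the display module actually shows; the element structure is illustrative.

```typescript
// Sketch of delete-with-confirmation: removal only happens after the user confirms,
// otherwise the style difference is retained.

interface StyleDifference { x: number; y: number; type: string; }

function renderDifference(list: StyleDifference[], diff: StyleDifference, container: HTMLElement): void {
  const row = document.createElement("div");
  row.textContent = `[${diff.type}] at (${diff.x}, ${diff.y})`;

  const del = document.createElement("button");
  del.textContent = "delete";
  del.addEventListener("click", () => {
    if (window.confirm("Delete this style difference?")) {
      const i = list.indexOf(diff);
      if (i >= 0) list.splice(i, 1); // remove from the record
      row.remove();                  // and from the display
    }                                // otherwise the entry is retained
  });

  row.appendChild(del);
  container.appendChild(row);
}
```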
When the processing module 52 generates the walkthrough report of the object to be walked according to the style difference information, it may be specifically configured to: generate the walkthrough report of the object to be walked in response to an operation acting on a second control. The walkthrough report comprises the style difference information and the position of the style difference information in the object to be walked; the second control is contained in the operation bar and is used for generating the report.
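A plain-text rendering of such a report, wired to a hypothetical second control with the id generate-report-button, could look like the sketch below. The report layout is an assumption; the present application only requires that the report contain the style differences and their positions.

```typescript
// Sketch of report generation triggered by the second control: each recorded style
// difference is emitted together with its position in the reviewed page.

interface StyleDifference {
  x: number;
  y: number;
  type: string;
  note?: string;
}

function buildWalkthroughReport(pageUrl: string, differences: StyleDifference[]): string {
  const lines = differences.map(
    (d, i) => `${i + 1}. [${d.type}] at (${d.x}, ${d.y})${d.note ? ` - ${d.note}` : ""}`
  );
  return [`Walkthrough report for ${pageUrl}`, `Issues found: ${differences.length}`, ...lines].join("\n");
}

// Wiring to the second control in the operation bar (the id is an assumption):
document.getElementById("generate-report-button")?.addEventListener("click", () => {
  const recorded: StyleDifference[] = []; // in the real tool, the annotations collected so far
  console.log(buildWalkthroughReport(location.href, recorded));
});
```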
Further, the processing module 52 may be further configured to: before performing overlay processing on the object to be walked and the target object to obtain the overlay, invoke a system folder in response to a click operation acting on a third control, where the system folder contains the target object, and the third control is contained in the operation bar and is used for importing the target object; and load the target object in response to a selection operation acting on the target object in the system folder.
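The import step triggered by the third control can be sketched with a standard file input, as below. Reading the selected design draft as a data URL so that it can serve as the overlay source is one possible choice, and the control id is illustrative.

```typescript
// Sketch of importing the design draft (target object) with the third control:
// clicking it opens the system file dialog, and the selected image is loaded as a data URL.

function importTargetObject(onLoaded: (dataUrl: string) => void): void {
  const input = document.createElement("input");
  input.type = "file";
  input.accept = "image/*";
  input.addEventListener("change", () => {
    const file = input.files?.[0];
    if (!file) return;
    const reader = new FileReader();
    reader.onload = () => onLoaded(reader.result as string);
    reader.readAsDataURL(file); // the data URL can then be assigned to the overlay <img>
  });
  input.click();                // opens the system folder / file picker
}

document.getElementById("import-design-button")?.addEventListener("click", () => {
  importTargetObject((dataUrl) => console.log("target object loaded,", dataUrl.length, "characters"));
});
```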
In some embodiments, the processing module 52 is further configured to: in response to an opening operation acting on the walkthrough tool, trigger the display module 53 to display the operation bar.
Further, the processing module 52 may be further configured to: in response to the opening operation acting on the walkthrough tool, trigger the display module 53 to display a scene selection interface, where the scene selection interface comprises a first area and a second area, the first area is used for indicating walkthrough of an application or applet picture, and the second area is used for indicating walkthrough of an H5 or computer-side page; and in response to a selection operation acting on the second area, trigger the display module 53 to display the operation bar.
Optionally, the processing module 52 is further configured to: after generating the walkthrough report of the object to be walked according to the style difference information, store the walkthrough report to a preset location.
In any embodiment of the present application:
optionally, the mobile terminal and the server upload the object to be walked by combining a WebSocket with an HTTP request, or by combining a WebSocket with an HTTPS request; or the mobile terminal and the server use only an HTTP or HTTPS request to upload the object to be walked (one possible combination is sketched below).
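In the sketched combination below, the walkthrough tool holds a WebSocket connection over which the server notifies it of the upload, while the mobile terminal sends the screenshot itself with an HTTPS POST. The endpoint paths, session handling, and message shape are assumptions for illustration; the present application only requires that the mobile terminal deliver the object to be walked to the walkthrough tool.

```typescript
// Desktop side: wait for the server to push the address of the uploaded screenshot.
function listenForUpload(sessionId: string, onReceived: (url: string) => void): WebSocket {
  const socket = new WebSocket(`wss://example.com/walkthrough/ws?session=${sessionId}`);
  socket.addEventListener("message", (event) => {
    const msg = JSON.parse(event.data as string) as { type: string; url: string };
    if (msg.type === "screenshot-uploaded") onReceived(msg.url);
  });
  return socket;
}

// Mobile side (page opened by scanning the two-dimensional code): upload over HTTPS.
async function uploadScreenshot(sessionId: string, screenshot: Blob): Promise<void> {
  const form = new FormData();
  form.append("session", sessionId);
  form.append("file", screenshot, "screenshot.png");
  await fetch("https://example.com/walkthrough/upload", { method: "POST", body: form });
}
```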
Optionally, the preset opacity may be set through an opacity control included in the operation bar.
Optionally, the operation bar further includes the following controls (one possible implementation of these toggles is sketched after the list):
at least one of a control for hiding the target object and a control for displaying the target object;
at least one of a control for locking the target object and a control for unlocking the target object.
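The following sketch shows one plausible implementation of these toggles against the overlay element from the earlier sketch: hiding toggles CSS visibility, and locking is interpreted as disabling pointer events so that annotation clicks fall through to the page underneath. This interpretation is an assumption, not a requirement of the present application.

```typescript
// Sketch of the hide/show and lock/unlock controls for the design-draft overlay.

function setOverlayHidden(overlay: HTMLElement, hidden: boolean): void {
  overlay.style.visibility = hidden ? "hidden" : "visible";
}

function setOverlayLocked(overlay: HTMLElement, locked: boolean): void {
  // Locked: clicks pass through the draft to the page being reviewed.
  overlay.style.pointerEvents = locked ? "none" : "auto";
}
```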
Optionally, the selectable annotation types include at least one of:
font inconsistency, color inconsistency, spacing inconsistency, picture blurring, ICON blurring, and an "other" type that instructs the user to enter custom content.
Optionally, the walkthrough report is in the form of a picture or a web page.
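As an illustration of the web-page form, the report text could be wrapped in a minimal HTML document and offered as a download, as sketched below. Exporting as a picture would typically rasterise the same content (for example onto a canvas) and is omitted here; the file name is an assumption.

```typescript
// Sketch of exporting the walkthrough report as a web page via a Blob download.

function downloadReportAsWebPage(reportText: string, fileName = "walkthrough-report.html"): void {
  const html = `<!doctype html><html><body><pre>${reportText
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")}</pre></body></html>`;
  const blob = new Blob([html], { type: "text/html" });
  const url = URL.createObjectURL(blob);

  const link = document.createElement("a");
  link.href = url;
  link.download = fileName;
  link.click();

  URL.revokeObjectURL(url);
}
```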
It should be noted that the division of the above apparatus into modules is only a logical division; in an actual implementation, the modules may be wholly or partially integrated into one physical entity, or may be physically separate. All of these modules may be implemented in the form of software invoked by a processing element, or entirely in hardware, or some modules may be implemented as software invoked by a processing element while the others are implemented in hardware. For example, the processing module may be a separately arranged processing element, may be integrated into a chip of the apparatus, or may be stored in a memory of the apparatus in the form of program code whose function is called and executed by a processing element of the apparatus. The other modules are implemented similarly. In addition, all or some of the modules may be integrated together or implemented independently. The processing element described herein may be an integrated circuit having signal processing capability. In implementation, each step of the above method, or each of the above modules, may be implemented by an integrated logic circuit of hardware in a processor element or by instructions in the form of software.
For example, the above modules may be one or more integrated circuits configured to implement the above methods, such as one or more application-specific integrated circuits (ASICs), one or more digital signal processors (DSPs), or one or more field-programmable gate arrays (FPGAs). For another example, when one of the above modules is implemented in the form of a processing element scheduling program code, the processing element may be a general-purpose processor, such as a central processing unit (CPU) or another processor capable of calling program code. For another example, these modules may be integrated together and implemented in the form of a system-on-a-chip (SoC).
In the above embodiments, the implementation may be wholly or partially realized by software, hardware, firmware, or any combination thereof. When software is used, the implementation may take the form, in whole or in part, of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions described in the embodiments of the present application are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable device. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, they may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center in a wired manner (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or a wireless manner (e.g., infrared, radio, microwave). The computer-readable storage medium may be any available medium accessible to a computer, or a data storage device such as a server or a data center that integrates one or more available media. The available medium may be a magnetic medium (e.g., a floppy disk, a hard disk, or a magnetic tape), an optical medium (e.g., a DVD), or a semiconductor medium (e.g., a solid state disk (SSD)).
Fig. 24 is a schematic structural diagram of an electronic device according to an embodiment of the present application. As shown in Fig. 24, the electronic device may include: a processor 61, a memory 62, a communication interface 63, and a system bus 64. The memory 62 and the communication interface 63 are connected to the processor 61 through the system bus 64 and communicate with one another; the memory 62 is configured to store instructions, the communication interface 63 is configured to communicate with other devices, and the processor 61 is configured to invoke the instructions in the memory to execute the solutions described in the above method embodiments.
The system bus mentioned in Fig. 24 may be a peripheral component interconnect (PCI) bus, an extended industry standard architecture (EISA) bus, or the like. The system bus may be divided into an address bus, a data bus, a control bus, and so on. For ease of illustration, only one thick line is shown in the figure, but this does not mean that there is only one bus or one type of bus. The communication interface is used for realizing communication between the database access device and other devices (such as a client, a read-write library, and a read-only library). The memory may include a random access memory (RAM), and may further include a non-volatile memory, such as at least one magnetic disk memory.
The processor may be a general-purpose processor, including a central processing unit (CPU), a network processor (NP for short), and the like; it may also be a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
The embodiment of the present application further provides a readable storage medium, in which a computer program is stored, and when the computer program is executed, the method described in the above method embodiment is implemented.
The embodiment of the application also provides a chip for running the instructions, and the chip is used for executing the method in the embodiment of the method.
The present application further provides a program product, which includes a computer program, where the computer program is stored in a readable storage medium, at least one processor can read the computer program from the readable storage medium, and when the at least one processor executes the computer program, the method described in the above method embodiments is implemented.
In the present application, "at least one" means one or more, "a plurality" means two or more. "and/or" describes the association relationship of the associated objects, meaning that there may be three relationships, e.g., a and/or B, which may mean: a exists alone, A and B exist simultaneously, and B exists alone, wherein A and B can be singular or plural. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship; in the formula, the character "/" indicates that the preceding and following related objects are in a relationship of "division". "at least one of the following" or similar expressions refer to any combination of these items, including any combination of the singular or plural items. For example, at least one (one) of a, b, or c, may represent: a, b, c, a-b, a-c, b-c, or a-b-c, wherein a, b, c may be single or multiple.
It is to be understood that the various numerical references referred to in the embodiments of the present application are merely for descriptive convenience and are not intended to limit the scope of the embodiments of the present application. In the embodiment of the present application, the sequence numbers of the above-mentioned processes do not mean the execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiment of the present application.
Finally, it should be noted that: the above embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present application.

Claims (21)

Application CN202011040808.4A · Priority date: 2020-09-28 · Filing date: 2020-09-28 · Title: Walk-checking method, walk-checking device and storage medium · Status: Active · Granted as: CN113778429B (en)

Priority Applications (1)

Application Number: CN202011040808.4A (granted as CN113778429B (en)) · Priority date: 2020-09-28 · Filing date: 2020-09-28 · Title: Walk-checking method, walk-checking device and storage medium

Publications (2)

Publication Number · Publication Date
CN113778429A · 2021-12-10
CN113778429B (en) · 2024-10-18

Family

ID=78835097

Family Applications (1)

Application Number: CN202011040808.4A · Status: Active · Granted publication: CN113778429B (en) · Title: Walk-checking method, walk-checking device and storage medium

Country Status (1)

Country: CN · Link: CN113778429B (en)


Also Published As

Publication number: CN113778429B (en) · Publication date: 2024-10-18


Legal Events

Code · Title
PB01 · Publication
SE01 · Entry into force of request for substantive examination
GR01 · Patent grant
