CN112578971B - Page content display method and device, computer equipment and storage medium - Google Patents

Page content display method and device, computer equipment and storage medium

Info

Publication number
CN112578971B
Authority
CN
China
Prior art keywords
target
picture
page
text
content
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011431434.9A
Other languages
Chinese (zh)
Other versions
CN112578971A (en)
Inventor
刘艳峰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd
Priority to CN202011431434.9A
Publication of CN112578971A
Application granted
Publication of CN112578971B
Legal status: Active (current)
Anticipated expiration

Abstract

The application relates to a page content display method and apparatus, a computer device, and a storage medium, and belongs to the technical field of interface interaction. The method includes: displaying a target page on a screen; determining a target amplification area in the target page based on a first trigger operation on the target page in the screen; acquiring a local page picture corresponding to the target amplification area; obtaining identification text in the local page picture; amplifying the size of the text content in the local page picture by a target multiple to obtain the display size of the identification text; and displaying the identification text on the target page according to the display size of the identification text. This scheme avoids the distortion that occurs when text content that is not sharp enough is amplified directly, thereby reducing the distortion rate of the text content after the page is amplified.

Description

Page content display method and device, computer equipment and storage medium
Technical Field
The embodiment of the application relates to the technical field of interface interaction, in particular to a page content display method and device, computer equipment and a storage medium.
Background
With the wider application of the terminal, the content of the page needing to be displayed through the terminal screen is richer and richer.
In the related art, to ensure that a user can clearly obtain page content through the terminal screen, an amplification operation is generally performed on the whole page, and the currently displayed overall page content is displayed at a certain amplification factor.
However, when the whole page content is amplified in this way, some of the text content on the page may not be sharp enough, so the amplified text content is distorted; the user then cannot distinguish part of the text content, which affects the display effect of the amplified page.
Disclosure of Invention
The embodiment of the application provides a page content display method, a page content display device, computer equipment and a storage medium, which can reduce the distortion rate of text content after page amplification. The technical scheme is as follows:
in one aspect, a method for displaying page content is provided, where the method includes:
displaying a target page in a screen;
determining a target amplification area in a target page based on a first trigger operation on the target page in the screen;
acquiring a local page picture corresponding to the target amplification area; the local page picture is a picture comprising the content of the target amplification area;
obtaining an identification text in the local page picture; the identification text is the text content determined based on the picture text identification;
amplifying the size of the text content in the local page picture by a target multiple to obtain the display size of the identification text;
and displaying the recognition text on the target page according to the display size of the recognition text.
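As a rough illustration, the final steps of the method above can be sketched in Python. All names here (`RecognizedText`, `display_magnified_text`) are hypothetical, since the claims do not specify an implementation:

```python
from dataclasses import dataclass

@dataclass
class RecognizedText:
    content: str      # text recognized from the local page picture
    font_size: float  # original rendered size of the text, in px

def display_magnified_text(recognized: RecognizedText,
                           target_multiple: float) -> dict:
    """Amplify the recognized text's size by the target multiple and
    return the drawing parameters for display on the target page."""
    display_size = recognized.font_size * target_multiple
    return {"text": recognized.content, "size": display_size}

# Text originally rendered at 12 px, amplified 3x -> drawn at 36 px.
params = display_magnified_text(RecognizedText("hello", 12.0), 3.0)
assert params == {"text": "hello", "size": 36.0}
```

Because the text is redrawn at the new size rather than scaled as pixels, it stays sharp at any multiple.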
In one aspect, an apparatus for displaying page content is provided, the apparatus including:
the page display module is used for displaying a target page in a screen;
the area determining module is used for determining a target amplification area in a target page based on a first trigger operation on the target page in the screen;
the image acquisition module is used for acquiring a local page image corresponding to the target amplification area; the local page picture is a picture comprising the content of the target amplification area;
the text acquisition module is used for acquiring the identification text in the local page picture; the identification text is the text content determined based on picture text identification;
the size acquisition module is used for amplifying the size of the text content in the local page picture by a target multiple to obtain the display size of the identification text;
and the text display module is used for displaying the identification text on the target page according to the display size of the identification text.
In one possible implementation manner, the text obtaining module includes:
and the first text acquisition sub-module is used for responding to the fact that the local page picture is the screenshot of the content of the target amplification area, performing picture character recognition on the local page picture, and acquiring a recognition text in the local page picture.
In a possible implementation manner, the text obtaining module includes:
the identification determining sub-module is used for determining the image identification corresponding to the local page image in response to the fact that the local page image comprises partial image content corresponding to the target amplification area in the target page;
and the second text acquisition sub-module is used for inquiring and acquiring the identification text corresponding to the picture identification from a server.
In one possible implementation, the apparatus further includes:
the picture position acquisition sub-module is used for acquiring the position information of the part of picture content in the corresponding picture before the identification text corresponding to the picture identification is inquired and acquired from the server;
the second text acquisition sub-module includes:
a character position obtaining unit, configured to obtain, from the server, each identification character corresponding to the picture identifier and position information of each identification character;
and the text acquisition unit is used for acquiring the identification text based on the position information of each identification character and the position information of the part of the picture content in the corresponding picture.
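A minimal sketch of this assembly step, under the assumption that the server returns each recognized character with pixel coordinates and that the partial picture content is described by a bounding box in the same coordinate system (both are assumptions; the patent does not fix a data format):

```python
def extract_region_text(chars, region):
    """chars: list of (character, x, y) tuples from the server;
    region: (x0, y0, x1, y1) bounding box of the partial picture
    content within its source picture. Keeps only characters inside
    the region and joins them in reading order."""
    x0, y0, x1, y1 = region
    inside = [(c, x, y) for c, x, y in chars if x0 <= x < x1 and y0 <= y < y1]
    inside.sort(key=lambda t: (t[2], t[1]))  # top-to-bottom, then left-to-right
    return "".join(c for c, _, _ in inside)

chars = [("B", 15, 5), ("A", 5, 5), ("C", 50, 5)]
assert extract_region_text(chars, (0, 0, 30, 10)) == "AB"  # "C" lies outside
```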
In one possible implementation, the apparatus further includes:
the background size acquisition module is used for amplifying the size of the background content by the target multiple to obtain the display size of the background content; the background content is a part of the target amplification area except the text content;
and the background display module is used for displaying the background content on the upper layer of the target page according to the display size of the background content.
In one possible implementation manner, the region determining module includes:
the picture display submodule is used for responding to the received first trigger operation and displaying a first picture on the screen; the first picture comprises a selection icon;
the trigger position acquisition submodule is used for acquiring a trigger position based on the received second trigger operation; the trigger position is the position of the selection icon when the second trigger operation is received;
and the region determining submodule is used for determining the target amplification region in the target page based on the trigger position.
In one possible implementation, the region determining sub-module includes:
a first area determining unit, used for acquiring a trigger range area, where the trigger range area is the area within a first range around the trigger position;
and the area determining unit is used for determining the trigger range area as the target amplification area.
In one possible implementation, the region determining sub-module includes:
the path acquisition unit is used for acquiring a trigger path of the selection icon; the trigger path is used for indicating a moving path of the trigger position;
and the second area determining unit is used for generating at least one closed graph in response to the trigger path and determining an area surrounded by the closed graph as the target amplification area.
In one possible implementation manner, the region determining sub-module includes:
a third area determination unit, configured to determine, in response to that the trigger position is located in a designated area, that the designated area is the target enlargement area in the target page; the target page comprises at least one of the designated regions.
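This hit test can be sketched as follows; representing each designated area as an axis-aligned rectangle is an illustrative assumption:

```python
def designated_area_hit(trigger_pos, areas):
    """Return the first designated area containing the trigger
    position, or None if no designated area contains it.
    Each area is a rectangle (x0, y0, x1, y1)."""
    x, y = trigger_pos
    for x0, y0, x1, y1 in areas:
        if x0 <= x <= x1 and y0 <= y <= y1:
            return (x0, y0, x1, y1)
    return None

areas = [(0, 0, 100, 50), (0, 60, 100, 110)]
assert designated_area_hit((10, 70), areas) == (0, 60, 100, 110)
assert designated_area_hit((200, 200), areas) is None
```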
In one possible implementation, the apparatus further includes:
and the multiple determining module is used for amplifying the size of the text content in the local page picture by a target multiple, and determining the target multiple based on the received first operation before obtaining the display size of the identification text.
In one possible implementation manner, the multiple determining module includes:
the interface display submodule is used for displaying a target multiple setting interface on the screen; the target multiple setting interface is used for setting target multiples;
and the magnification selection submodule is used for selecting the specified magnification or inputting the customized magnification as the target magnification on the target magnification setting interface.
In one possible implementation, in response to the terminal being a terminal with a touch function:
the multiple determination module comprises:
the distance acquisition submodule is used for acquiring the touch sliding distance of at least two contact points on the screen;
and the factor determining submodule is used for determining the target factor based on the corresponding relation between the touch sliding distance and the amplification factor.
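One plausible form of the correspondence between touch sliding distance and amplification factor is linear; the pixel scale and step size below are illustrative assumptions, not values from the patent:

```python
import math

def magnification_from_pinch(p1_start, p2_start, p1_end, p2_end,
                             base=1.0, px_per_step=100, step=0.5):
    """Map the change in distance between two touch contact points to
    a target amplification factor, clamped to at least 1x."""
    d_start = math.dist(p1_start, p2_start)  # initial finger separation
    d_end = math.dist(p1_end, p2_end)        # final finger separation
    factor = base + ((d_end - d_start) / px_per_step) * step
    return max(1.0, factor)

# Fingers spread from 100 px apart to 300 px apart -> 2x amplification.
assert magnification_from_pinch((0, 0), (100, 0), (0, 0), (300, 0)) == 2.0
```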
In another aspect, a computer device is provided, which includes a processor and a memory, where at least one instruction, at least one program, a set of codes, or a set of instructions is stored in the memory, and the at least one instruction, the at least one program, the set of codes, or the set of instructions is loaded and executed by the processor to implement the page content presentation method as described above.
In another aspect, a computer-readable storage medium is provided, in which at least one instruction, at least one program, a set of codes, or a set of instructions is stored, which is loaded and executed by a processor to implement the page content presentation method as described above.
According to an aspect of the application, a computer program product or computer program is provided, comprising computer instructions, the computer instructions being stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions to cause the computer device to execute the page content presentation method provided in the various alternative implementations of the above aspects.
The technical scheme provided by the application can comprise the following beneficial effects:
In the scheme shown in the embodiments of the application, a target amplification area of a target page is determined; the text content in the target amplification area is then obtained based on picture text recognition, amplified by the target multiple, and displayed in the target page. With this scheme, the text content of the target amplification area can be recognized through picture text recognition, which avoids distortion caused by amplifying text content that is not sharp enough and reduces the distortion rate of the text content after page amplification; this in turn reduces the time the user spends obtaining the text content and saves the terminal's battery power.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present application and together with the description, serve to explain the principles of the application.
FIG. 1 is a system diagram illustrating a page content presentation system in accordance with an exemplary embodiment;
FIG. 2 is a flow diagram illustrating a method of presenting page content in accordance with an exemplary embodiment;
FIG. 3 is an interaction flow diagram for implementing a partial application page magnification process according to the embodiment shown in FIG. 2;
FIG. 4 is a flowchart illustrating a method of presenting page content in accordance with an exemplary embodiment;
FIG. 5 is a schematic diagram of a screenshot of the content of a target enlargement area as a partial page picture according to the embodiment shown in FIG. 4;
FIG. 6 is a schematic diagram of a target multiple setting interface according to the embodiment shown in FIG. 4;
FIG. 7 is an enlarged content display diagram according to the embodiment shown in FIG. 4;
FIG. 8 is a schematic diagram of another enlarged content presentation according to the embodiment shown in FIG. 4;
FIG. 9 is a block diagram illustrating a page content presentation system in accordance with an exemplary embodiment;
FIG. 10 is a block diagram illustrating a page content presentation device according to an exemplary embodiment;
FIG. 11 is a block diagram illustrating the structure of a computer device according to an example embodiment.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The embodiments described in the following exemplary embodiments do not represent all embodiments consistent with the present application. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present application, as detailed in the appended claims.
It is to be understood that reference herein to "a number" means one or more and "a plurality" means two or more. "And/or" describes the association relationship of associated objects, meaning that three relationships are possible; for example, "A and/or B" may mean: A exists alone, A and B exist simultaneously, or B exists alone. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship.
For convenience of understanding, terms referred to in the embodiments of the present application will be described below.
1) Artificial intelligence AI
AI is a theory, method, technique and application system that uses a digital computer or a machine controlled by a digital computer to simulate, extend and expand human intelligence, perceive the environment, acquire knowledge and use the knowledge to obtain the best results. In other words, artificial intelligence is a comprehensive technique of computer science that attempts to understand the essence of intelligence and produce a new intelligent machine that can react in a manner similar to human intelligence. Artificial intelligence is the research of the design principle and the implementation method of various intelligent machines, so that the machines have the functions of perception, reasoning and decision making.
Artificial intelligence technology is a comprehensive discipline covering a wide range of fields, involving both hardware-level and software-level technologies. Basic artificial intelligence technologies generally include sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, big data processing, operation/interaction systems, mechatronics, and the like. Artificial intelligence software technologies mainly include computer vision technology, speech processing technology, natural language processing technology, and machine learning/deep learning.
With the research and progress of artificial intelligence technology, it has been researched and applied in many fields, for example, smart homes, smart wearable devices, virtual assistants, smart speakers, smart marketing, unmanned driving, autonomous driving, unmanned aerial vehicles, robots, smart medical care, smart customer service, smart video services, and the like.
2) Computer Vision technology (Computer Vision, CV)
Computer vision is a science that studies how to make machines "see"; that is, cameras and computers are used in place of human eyes to perform machine vision tasks such as recognition, tracking, and measurement on a target, with further image processing so that the processed image becomes more suitable for human eyes to observe or for transmission to an instrument for detection. As a scientific discipline, computer vision studies related theories and techniques in an attempt to build artificial intelligence systems that can capture information from images or multidimensional data. Computer vision technologies generally include image processing, image recognition, image semantic understanding, image retrieval, OCR (Optical Character Recognition), video processing, video semantic understanding, video content/behavior recognition, three-dimensional object reconstruction, 3D (three-dimensional) technology, virtual reality, augmented reality, simultaneous localization and mapping, and other technologies, and also include common biometric technologies such as face recognition and fingerprint recognition.
The scheme provided by the embodiments of the application relates to technologies such as artificial intelligence OCR, and is specifically explained by the following embodiments:
Fig. 1 is a system configuration diagram illustrating a page content presentation system according to an exemplary embodiment. As shown in fig. 1, the system includes a terminal 110 and a server 120.
The terminal 110 may be a terminal device having page presentation capabilities and having a display screen on which page content may be presented.
For example, the terminal 110 may be a smart home device such as a smart television, or the terminal 110 may also be a smart wearable device such as smart glasses and a smart watch, or the terminal 110 may also be a smart phone, a tablet computer, an e-book reader, a notebook computer, a desktop computer, and the like, but is not limited thereto.
Among them, the terminal 110 may have an application program with a page enlarging function installed therein.
Optionally, the application with the page enlarging function may be a third-party application installed in the terminal 110, or the application with the page enlarging function may also be a system application carried in an operating system of the terminal 110.
For example, the application programs may be a video-type application program, a page browsing-type application program, an instant messaging-type application program, a social platform-type application program, and the like.
The server 120 may be a single server, or the server 120 may be a server cluster formed by a plurality of servers, or the server 120 may include one or more virtualization platforms, or the server 120 may be one or more cloud computing service centers; the number of terminals 110 and the number of servers 120 are not limited in the embodiments of the application.
The server 120 may be a server device that provides a background service for the application with the page enlarging function installed in the terminal 110.
The server 120 may be composed of one or more functional units.
In one possible implementation, as shown in fig. 1, the server 120 may include a picture character recognition module 120a and a database 120b.
The picture character recognition module 120a may be configured to receive a picture uploaded by the terminal 110, perform character recognition on the received picture, and store the text content obtained from the recognition result in the database 120b.
The database 120b may be a Redis database or another type of database. The database 120b is configured to store the correspondence between the text content obtained by the picture character recognition module 120a and the picture identifier.
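The correspondence stored in the database 120b amounts to a key-value mapping from picture identifier to recognized text. A minimal in-memory stand-in is sketched below; a real deployment would use a Redis client, and the class and method names here are hypothetical:

```python
class RecognitionCache:
    """In-memory stand-in for the database 120b: maps a picture
    identifier to the text recognized from that picture."""

    def __init__(self):
        self._store = {}

    def put(self, picture_id: str, text: str) -> None:
        self._store[picture_id] = text

    def get(self, picture_id: str):
        """Return the cached text, or None on a cache miss (in which
        case the picture would be sent for fresh recognition)."""
        return self._store.get(picture_id)

cache = RecognitionCache()
cache.put("img-001", "recognized caption")
assert cache.get("img-001") == "recognized caption"
assert cache.get("img-404") is None
```

Caching the recognition result per picture identifier lets repeated magnifications of the same picture skip the OCR step entirely.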
The terminal 110 may be connected to theserver 120 through a communication network. Optionally, the communication network is a wired network or a wireless network.
Optionally, the wireless or wired networks described above use standard communication techniques and/or protocols. The network is typically the Internet, but may be any network, including but not limited to a Local Area Network (LAN), a Metropolitan Area Network (MAN), a Wide Area Network (WAN), a mobile, wireline, or wireless network, a private network, or any combination of virtual private networks. In some embodiments, data exchanged over the network is represented using techniques and/or formats including Hypertext Markup Language (HTML), Extensible Markup Language (XML), and the like. All or some of the links may also be encrypted using conventional encryption techniques such as Secure Sockets Layer (SSL), Transport Layer Security (TLS), Virtual Private Network (VPN), and Internet Protocol Security (IPsec). In other embodiments, custom and/or dedicated data communication techniques may also be used in place of, or in addition to, the data communication techniques described above.
FIG. 2 is a flowchart illustrating a method of presenting page content, according to an exemplary embodiment. The page content presentation method can be executed by a terminal. For example, the terminal may be the terminal 110 shown in fig. 1. As shown in fig. 2, the page content presentation method includes the following steps:
In step 201, a target page is presented on a screen.
In the embodiment of the application, the terminal displays the target page in the screen.
The target page may be a page in the third-party application program, a page in the system application program, or a display page in the operating system.
In one possible implementation, the target page includes text content and the rest of the content.
The text content may include text content on a picture and text content on a non-picture.
In one possible implementation, the target page is the page presented in the screen at the current time.
The terminal to which the screen belongs may be a terminal with a touch function, or may also be a terminal without a touch function.
For example, the terminal with the touch screen function may be an intelligent terminal with a touch screen, and the terminal without the touch function may include a television set without a touch screen and operated by using a remote controller, or a computer device which performs control operation by using a device such as a mouse.
In step 202, a target enlargement area in the target page is determined based on a first trigger operation on the target page in the screen.
In the embodiment of the application, the terminal determines a partial area in a target page as a target amplification area based on a first trigger operation on the target page in a screen.
In a possible implementation manner, the first trigger operation is a trigger operation performed on a designated area of the target page after the terminal receives an instruction issued by the user.
The target enlargement area may be any designated area in the target page, and the target enlargement area may include at least one designated area.
The trigger operation may include a contact operation such as a click trigger operation, a long-press trigger operation, a slide trigger operation, and the like, or may also include a non-contact operation such as a remote control trigger operation, a voice trigger operation, a gesture trigger operation, and the like, which is not limited in the present application.
For example, when the terminal has a touch screen, by receiving a click operation of a user on a target page in the screen, a designated area can be determined as a target amplification area according to each trigger point generated by the click operation; by receiving a trigger path for sliding operation, a specified area surrounded by the trigger path can be determined as a target amplification area; when the terminal does not have a touch screen, determining a designated area in a target page as a target amplification area by receiving an instruction sent by a remote controller; when the terminal has a voice recognition function, the terminal can determine the designated area as a target enlargement area by receiving a voice instruction sent by a user.
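For the click-operation case, one simple way to turn trigger points into a target amplification area is a padded bounding box around them; the 20 px padding is an illustrative assumption:

```python
def region_from_taps(taps, pad=20):
    """Derive a rectangular target amplification area from the trigger
    points of click operations. taps: list of (x, y) trigger points.
    Returns (x0, y0, x1, y1) with padding around the points."""
    xs = [x for x, _ in taps]
    ys = [y for _, y in taps]
    return (min(xs) - pad, min(ys) - pad, max(xs) + pad, max(ys) + pad)

# Two taps spanning (100, 50)-(200, 150), padded by 20 px on each side.
assert region_from_taps([(100, 50), (200, 150)]) == (80, 30, 220, 170)
```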
In step 203, a local page picture corresponding to the target amplification area is obtained; the local page picture is a picture including the contents of the target amplification area.
In the embodiment of the application, the terminal acquires a local page picture corresponding to a target amplification area, wherein the local page picture is a picture including part or all of the content of the target amplification area.
The local page picture may be a screenshot corresponding to the target enlargement area, or may also be picture content included in the target enlargement area.
In one possible implementation manner, when the local page picture is the picture content in the target enlargement area, the picture content may be the content located in the target enlargement area of the picture to which it belongs in the target page.
In step 204, identification text in the local page picture is obtained; the identification text is the text content determined based on picture text recognition.
In the embodiment of the application, the terminal acquires the identification text in the local page picture.
In one possible implementation, the recognized text is the text content determined in real time by the picture text recognition technique; or the recognition text is the text content which is obtained by recognition through the picture character recognition technology in advance and is stored in the server and corresponds to the picture identification.
In step 205, the size of the text content in the local page picture is amplified by the target multiple to obtain the display size of the identification text.
In step 206, the identification text is displayed on the target page at the display size of the identification text.
In one possible implementation manner, the terminal draws the recognition text in a display size of the recognition text in an enlarged layer of an upper layer of the target page, and displays the recognition text on the upper layer of the target page.
For example, taking an example that the target page is a page in a video playing application, fig. 3 is an interaction flowchart of a process for implementing local page enlargement of the application according to an embodiment of the present application. As shown in fig. 3, a video playing application is started, a terminal displays a page of the application as a target page, a user selects a target zoom-in area in the target page (S31), the terminal inserts a new layer in the uppermost layer of the page, the new layer serves as a zoom-in layer (S32), the terminal acquires content to be zoomed in the target zoom-in area (S33), finally, the terminal draws the content to be zoomed in at a target multiple on the zoom-in layer, and the terminal displays the zoomed content on the upper layer of the target page (S34).
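The S31-S34 interaction can be sketched as below; the page structure and the `ocr` callable are hypothetical stand-ins for the terminal's screenshot and picture character recognition facilities:

```python
def magnify_flow(page, region, multiple, ocr):
    """S31: region already selected by the user. S33: capture the
    content to be magnified. S32/S34: draw it, amplified by the
    target multiple, on a new layer inserted above the target page."""
    screenshot = page["screenshot"](region)   # S33: content to be enlarged
    text = ocr(screenshot)                    # picture character recognition
    layer = {"text": text, "size": page["font_size"] * multiple}
    page["layers"].append(layer)              # S32/S34: topmost enlargement layer
    return layer

page = {"layers": [], "font_size": 14,
        "screenshot": lambda region: ("shot", region)}
layer = magnify_flow(page, (0, 0, 50, 50), 2.0, lambda shot: "Hi")
assert layer == {"text": "Hi", "size": 28.0}
assert page["layers"][-1] is layer
```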
In summary, in the solution shown in the embodiment of the present application, a target enlargement area of a target page is determined, then text content in the target enlargement area is obtained based on image text recognition, and the text content is enlarged according to a target multiple and displayed in the target page. By the scheme, the text content of the target amplification area can be identified through the image text identification technology, the situation that the amplified text content is distorted due to the fact that the text content is not clear enough is avoided, the distortion rate of the text content after page amplification is reduced, time consumed by a user for acquiring the text content is reduced, and electric quantity of the terminal is saved.
FIG. 4 is a flowchart illustrating a method of presenting page content in accordance with an exemplary embodiment. The page content presentation method can be executed by a terminal. For example, the terminal may be the terminal 110 shown in fig. 1. As shown in fig. 4, the page content presentation method includes the following steps:
In step 401, a target page is presented on a screen.
In the embodiment of the application, the terminal displays the target page at the current moment in the screen.
For example, the page in the third-party application and the page in the system application may be application pages, or may be corresponding browser pages, and the display page in the operating system may be a system page.
For example, the browser page may be a browser web page displayed in the screen of the terminal, the application page may be a video playing application, a social contact application, or a financial transaction application, and the system page may be an operation page of an intelligent appliance having a display screen or a page of the terminal during system operation.
At the initial display moment, the target page is displayed on the screen at a preset display scale.
In step 402, a target enlargement area in the target page is determined based on a first trigger operation on the target page in the screen.
In the embodiment of the application, the terminal enters a target amplification area selection state according to the received first trigger operation on the target page in the terminal screen, and determines the target amplification area in the target page by acquiring the trigger position and the trigger form.
In one possible implementation manner, in response to the terminal having the touch function, the first trigger operation is a specified trigger operation performed on a terminal screen.
For example, when the time for which the user presses the screen reaches a preset duration, it is determined that the terminal receives the first trigger operation; or, when the user draws a preset path on the screen, it is determined that the terminal receives the first trigger operation.
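The long-press check described above can be sketched in a few lines. This is an illustrative sketch, not part of the embodiment; the 0.8-second threshold is an assumption, since the embodiment only says "a preset time".

```python
# Hypothetical long-press detection: a press counts as the first trigger
# operation once its duration reaches a preset time (0.8 s assumed here).
LONG_PRESS_SECONDS = 0.8

def is_first_trigger(press_down_ts: float, press_up_ts: float) -> bool:
    """Return True when the press lasted at least the preset long-press time."""
    return (press_up_ts - press_down_ts) >= LONG_PRESS_SECONDS
```

A preset-path trigger would follow the same pattern, comparing the recorded touch path against a stored template instead of a duration.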
In another possible implementation manner, in response to the terminal not having a touch function, the first trigger operation is a trigger operation received through a remote control device capable of controlling the terminal.
For example, when the user presses a designated key on the remote controller for a long time, it is determined that the terminal receives the first trigger operation, or when the user presses the designated key on the remote controller, it is determined that the terminal receives the first trigger operation.
In one possible implementation manner, in response to receiving the first trigger operation, a first picture is displayed on the screen; based on a received second trigger operation, the position of the selection icon when the second trigger operation is executed is acquired as the trigger position; and the target enlargement area in the target page is determined based on the trigger position.
Wherein the first picture includes a selection icon; the selection icon is used to display the trigger position of the first trigger operation.
In one possible implementation, the trigger form of the selection icon includes a following-form trigger, a division-form trigger, and a selection-form trigger.
In response to the trigger form of the selection icon including the following-form trigger, the terminal acquires a trigger range area and determines the trigger range area as the target enlargement area, where the trigger range area is an area within a first range around the trigger position.
Wherein the first range may be a rectangular area or a circular area centered on the trigger position.
Illustratively, when the terminal has a touch function, the user long-presses the screen with a finger for a preset time, it is determined that the terminal receives the first trigger operation, and the terminal enters the target enlargement area selection state. In this state, each contact point between the finger and the terminal screen may serve as a trigger position, and the area within the first range around each trigger position serves as the target enlargement area in the target page.
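The following-form selection above can be sketched as a fixed-size rectangle centered on the trigger position and clamped to the screen bounds. The half-width and half-height values are illustrative assumptions; the embodiment does not fix the size of the first range.

```python
# Hypothetical "first range" computation for the following-form trigger:
# a rectangle centered on the trigger position (x, y), clipped to the screen.
def follow_region(x, y, screen_w, screen_h, half_w=100, half_h=60):
    """Return (left, top, right, bottom) of the first range around (x, y)."""
    left = max(0, x - half_w)
    top = max(0, y - half_h)
    right = min(screen_w, x + half_w)
    bottom = min(screen_h, y + half_h)
    return left, top, right, bottom
```

A circular first range would replace the rectangle with a (center, radius) pair, as noted above.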
In response to the trigger form of the selection icon including the division-form trigger, the terminal acquires a trigger path of the selection icon, generates at least one closed figure based on the trigger path, and determines the area enclosed by the closed figure as the target enlargement area.
In a possible implementation manner, the closed figure is generated from a continuous movement path of the touch position in which at least two touch positions on the path coincide.
Wherein there may be at least one closed figure, and a closed figure may be a regular figure or an irregular figure.
Illustratively, when the terminal has a touch function, the user long-presses the screen with a finger for a preset time, it is determined that the terminal receives the first trigger operation, and the terminal enters the target enlargement area selection state. In this state, the user determines the starting point of the touch position by touching the terminal screen with a finger, then slides on the screen until the starting point coincides with at least one touch position on the moving path, so that at least one closed figure is generated, and the at least one closed figure is taken as the target enlargement area.
In response to the terminal screen having a touch function, the target enlargement area can thus be determined through a custom area division mode of the selection icon.
Wherein the trigger path may be used to indicate a movement path of the trigger position.
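The division-form trigger amounts to detecting when the touch path closes on itself. A minimal sketch, assuming the path is a list of (x, y) points and using an assumed 10-pixel coincidence tolerance:

```python
import math

# Hypothetical closed-figure detection for the division-form trigger: the
# path closes once the current point coincides (within a tolerance) with an
# earlier point; the points between them form the enclosing polygon.
def detect_closed_figure(path, tolerance=10.0):
    """Return the closed sub-path (list of points), or None if the path never closes."""
    for i, (px, py) in enumerate(path):
        for j in range(i + 3, len(path)):  # skip immediate neighbours on the path
            qx, qy = path[j]
            if math.hypot(px - qx, py - qy) <= tolerance:
                return path[i:j + 1]
    return None
```

The returned polygon would then be rasterized (e.g. with a point-in-polygon test) to decide which page content falls inside the target enlargement area.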
In addition, in response to the trigger form of the selection icon including the selection-form trigger, the terminal determines the trigger position, and in response to the trigger position being located in a designated area, the designated area may be determined as the target enlargement area in the target page.
Wherein, the target page can contain at least one designated area.
In one possible implementation, the division of the designated area may be custom set by the user.
For example, in a setting interface of the terminal, a user may set the target page to be divided into 3 × 3 designated areas. When the terminal enters the target enlargement area selection state, the designated area in which the selection icon is located when the trigger operation is received is determined, and that designated area may be determined as the target enlargement area.
In another possible implementation manner, the terminal identifies each functional area of the target page, divides the target page into designated areas according to the functional areas, determines the designated area where the position of the selection icon is located when the trigger operation is received after the terminal enters a state of target enlargement area selection, and determines the designated area as the target enlargement area.
For example, when the target page is a page of a video playing application, the terminal may identify a video playing area, a comment area, and a video introduction area in the target page, and divide the target page into these three designated areas. If the selection icon is located in the video playing area when the trigger operation is received, the video playing area is determined as the target enlargement area. Automatically dividing the designated areas by function can improve the efficiency of enlarging and displaying partial content of the target page and enhance convenience in the application process.
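For the user-defined grid division, the designated-area lookup reduces to mapping the trigger position to a grid cell. A sketch under the 3 × 3 example above; screen dimensions and grid size are parameters:

```python
# Hypothetical selection-form lookup: return the designated grid cell
# (as a rectangle) containing the trigger position (x, y).
def designated_area(x, y, screen_w, screen_h, rows=3, cols=3):
    """Return (left, top, right, bottom) of the grid cell containing (x, y)."""
    cell_w, cell_h = screen_w / cols, screen_h / rows
    col, row = int(x // cell_w), int(y // cell_h)
    return (col * cell_w, row * cell_h, (col + 1) * cell_w, (row + 1) * cell_h)
```

The function-based division would replace the uniform grid with the rectangles of the identified functional areas, but the containment test is the same.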
In step 403, a local page picture corresponding to the target enlargement area is obtained.
In the embodiment of the application, the terminal acquires a local page picture corresponding to the target amplification area.
The local page picture may be a picture including the content of the target enlargement area.
In one possible implementation manner, the partial page picture contains the whole content of the target enlargement area or contains part of the content of the target enlargement area.
When the local page picture contains all contents of the target amplification area, the local page picture can be a screenshot of the target amplification area, and contains text contents and picture contents; when the partial page picture contains partial content of the target enlargement area, the partial page picture may be picture content on a target page in the target enlargement area, and the picture content may contain partial text content.
In step 404, in response to the local page picture being a screenshot of the content of the target enlargement area, picture character recognition is performed on the local page picture to obtain the recognition text in the local page picture.
In the embodiment of the application, when the local page picture is a screenshot of the content of the target enlargement area, the terminal directly performs picture character recognition on the screenshot, and the recognition text in the screenshot can be obtained through real-time picture character recognition.
For example, fig. 5 is a schematic diagram in which a screenshot of the content of the target enlargement area serves as the local page picture according to an embodiment of the present application. As shown in fig. 5, taking the page corresponding to a video playing application as the target page, a trigger operation is performed through a selection icon 51, the first range around the selection icon is selected as a target enlargement area 52, and a screenshot of the target enlargement area 52 is taken directly to determine a local page picture 53.
In step 405, in response to the local page picture including partial picture content corresponding to the target enlargement area in the target page, a picture identifier corresponding to the local page picture is determined.
In the embodiment of the present application, when a local page picture is a partial picture content in a target enlargement area in a target page, a picture identifier of a complete picture corresponding to the partial picture content may be determined.
Each picture content in the target page can correspond to a unique picture identifier, and information corresponding to the picture can be inquired through the picture identifier.
In one possible implementation manner, the picture identifier is stored in a database or a data cache area corresponding to the target page.
In step 406, position information of the partial picture content in the corresponding picture is obtained.
In the embodiment of the application, the terminal acquires the position information of the partial picture content corresponding to the partial page picture in the complete picture to which the partial picture content belongs.
The partial picture content may be a picture of any region in the complete picture.
In a possible implementation manner, the position information of the part picture content in the complete picture is determined by acquiring coordinate values corresponding to the boundary path of the part picture content in the complete picture.
In another possible implementation manner, the terminal first obtains the center point position of the partial picture content, then obtains the maximum distance between the center point position and the boundary path of the partial picture content, obtains a circular area with the center point position as the center and that maximum distance as the radius, and determines the position information of the circular area in the picture. Determining the position information of the partial picture content in this way allows the text content in the picture to be acquired as comprehensively as possible for recognition. It avoids the situation in which the local page picture contains only part of semantically continuous text content in the complete picture, which would leave the recognized semantics unclear and reduce the value of the text recognition.
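The circular-area implementation above can be sketched directly: take the center of the boundary points and the maximum center-to-boundary distance as the radius. This is an illustrative sketch; the embodiment does not prescribe how the center point is computed, so a simple centroid of the boundary points is assumed here.

```python
import math

# Hypothetical position description for partial picture content: a circle
# (center, radius) that covers every point on the content's boundary path.
def circular_position(boundary):
    """Return ((cx, cy), radius) covering all boundary points of the content."""
    cx = sum(x for x, _ in boundary) / len(boundary)
    cy = sum(y for _, y in boundary) / len(boundary)
    radius = max(math.hypot(x - cx, y - cy) for x, y in boundary)
    return (cx, cy), radius
```

Because the radius is the maximum distance to the boundary, the circle necessarily contains the whole partial content, which is what lets the subsequent recognition capture semantically continuous text.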
In step 407, the identification text corresponding to the picture identifier is queried and obtained from the server.
In the embodiment of the application, the terminal obtains the identification text in the local page picture by inquiring from the server.
Wherein the identification text is text content determined based on picture text recognition, and the correspondence between the picture identifier and the recognition text is stored in the server based on picture character recognition.
Illustratively, before the identification text corresponding to the picture identifier is queried from the server based on the picture identifier, the terminal obtains the position information of the local page picture in the picture content; it then queries the server based on the picture identifier to obtain the recognition words corresponding to the picture identifier and the position information of each recognition word, and obtains the recognition words on the local page picture based on the position information of each recognition word.
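The query-and-filter step above amounts to keeping only the recognition words whose bounding boxes fall inside the region occupied by the local page picture. A sketch in which an in-memory dict stands in for the server database; the picture identifier, words, and boxes are all hypothetical:

```python
# Hypothetical server-side store: picture identifier -> list of
# (recognition word, bounding box) pairs produced by picture text recognition.
WORDS_BY_PICTURE_ID = {
    "pic-1": [("count-down", (10, 10, 60, 30)), ("time", (70, 10, 100, 30)),
              ("credits", (10, 200, 80, 220))],
}

def words_in_region(picture_id, region):
    """Return recognition words whose boxes lie inside the (l, t, r, b) region."""
    l, t, r, b = region
    return [w for w, (wl, wt, wr, wb) in WORDS_BY_PICTURE_ID[picture_id]
            if wl >= l and wt >= t and wr <= r and wb <= b]
```

A production system might use partial overlap rather than strict containment; strict containment is assumed here for simplicity.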
In step 408, a target multiple is determined based on the received first operation.
In the embodiment of the application, the terminal determines a target magnification for magnifying a target magnification area based on the received first operation.
In one possible implementation manner, in response to the terminal being a touch-enabled terminal, the touch sliding distances of at least two contact points on the screen of the terminal are acquired, and the target multiple is then determined based on the correspondence between the touch sliding distance and the magnification factor.
Wherein the touch sliding distance is positively correlated with the magnification factor: the larger the touch sliding distance, the larger the magnification factor; conversely, the smaller the touch sliding distance, the smaller the magnification factor.
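One way to realize this positive correlation is a linear mapping from pinch slide distance to magnification, clamped to the 1–10 range mentioned later in the embodiment. The pixels-per-step constant is an assumption for illustration:

```python
# Hypothetical distance-to-multiple mapping: linear in the slide distance,
# clamped to the [1, 10] magnification range.
def target_multiple(slide_distance_px, px_per_step=100.0):
    """Map a pinch slide distance (in px) to a magnification factor in [1, 10]."""
    factor = 1.0 + slide_distance_px / px_per_step
    return max(1.0, min(10.0, factor))
```

Any monotonically increasing mapping (e.g. stepped or logarithmic) would satisfy the stated correspondence equally well.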
In another possible implementation manner, the terminal displays a target multiple setting interface on the screen; in this interface, the terminal selects a specified magnification or receives a custom magnification as the target multiple according to a received user instruction.
The target multiple setting interface can be used for performing user-defined setting of the target multiple.
For example, a terminal that performs custom setting of the target multiple through the target multiple setting interface may be a non-touch terminal; before the target page is enlarged, the terminal enters the target multiple setting interface to select the target multiple. Fig. 6 is a schematic diagram of a target multiple setting interface according to an embodiment of the present application. As shown in fig. 6, the setting interface may provide setting items including image quality, zoom ratio, play setting, and screen ratio. The zoom ratio setting area 61 includes selectable options of 1.5 times zoom, 2 times zoom, 3 times zoom, and others; a non-touch-enabled device may adjust the zoom ratio through the zoom ratio setting area 61, and the "others" option is a setting item for the user to customize the zoom ratio. The magnification may be any factor from 1 to 10.
In step 409, the size of the text content in the local page picture is enlarged by the target multiple to obtain the display size of the identification text.
In the embodiment of the application, the terminal amplifies the size of the text content in the local page picture through the acquired target multiple to obtain the display size of the identification text.
In a possible implementation manner, while obtaining the display size of the recognition text, the terminal enlarges the size of the background content by the same target multiple to obtain the display size of the background content.
The background content is a portion of the target enlargement area excluding the text content.
In step 410, the recognition text is displayed on the target page at the display size of the recognition text.
In the embodiment of the application, the terminal displays the identification text on the upper layer of the target page in the display size on the screen.
In one possible implementation manner, the terminal displays the background content on the upper layer of the target page in the display size while displaying the recognition text on the upper layer of the target page.
That is, the terminal displays a new layer on the upper layer of the target page, and the new layer includes the identification text displayed in the display size and the background content displayed in the display size.
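Steps 409–410 can be summarized numerically: the font size of the recognition text and the dimensions of the background content are all multiplied by the same target multiple before being drawn on the new layer. A minimal sketch; the data model (pixel font size plus background width and height) is an assumption:

```python
# Hypothetical sizing for the enlargement layer: scale the text font size and
# the background content dimensions by the same target multiple.
def enlarge_layer(text_px, bg_w, bg_h, multiple):
    """Return (display_text_px, display_bg_w, display_bg_h) for the new layer."""
    return text_px * multiple, bg_w * multiple, bg_h * multiple
```

Rendering the text at the scaled font size, rather than scaling a bitmap, is what keeps the enlarged text sharp.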
In one possible implementation, when the user selects to exit the page partial magnification state, the magnification layer is removed, and the page layout hierarchy is restored.
For example, fig. 7 is an enlarged content display diagram according to an embodiment of the present application. As shown in fig. 7, taking the page corresponding to a video playing application program as the target page, a trigger operation is performed through a selection icon 71, the first range around the selection icon is selected as a target enlargement area 72, and a screenshot of the target enlargement area 72 is taken directly to determine a local page picture. The characters in the local page picture are recognized and the character content therein is acquired, an enlargement layer 73 is created on the upper layer of the target page, and the character content and the other contents are respectively enlarged by the target multiple and drawn on the enlargement layer 73.
Fig. 8 is a schematic diagram of another enlarged content display according to an embodiment of the present application. As shown in fig. 8, taking the page corresponding to a video playing application as the target page, the terminal displays a target page 81 and, through a trigger operation, selects a target enlargement area 82 belonging to the video frame picture being played. The terminal acquires from the server whether the frame picture contains a text identifier, together with the position information corresponding to the text content, and matches the position information of the target enlargement area 82 on the frame picture with the position information of the text content on the frame picture to obtain that the text content contained in the target enlargement area 82 is "count-down time 3s". The text content is enlarged by the target multiple to obtain its display size, the other parts of the target enlargement area 82 are enlarged by the target multiple to obtain their display size, and an enlargement layer 83 is displayed on the upper layer of the target page 81, where the enlargement layer 83 contains the text content and the other contents at the display size. With this scheme, AI recognition can be performed on blurred characters, assisting the user in viewing the detailed character content in the picture.
In summary, in the solution shown in this embodiment of the present application, a target enlargement area of a target page is determined, text content in the target enlargement area is then obtained based on picture text recognition, and the text content is enlarged by a target multiple and displayed in the target page. With this solution, the text content of the target enlargement area can be recognized through picture text recognition, which avoids the distortion caused by enlarging text content that is not clear enough, reduces the distortion rate of the text content after the page is enlarged, reduces the time a user spends acquiring the text content, and saves the power of the terminal.
FIG. 9 is a block diagram illustrating a page content presentation system in accordance with an exemplary embodiment. As shown in fig. 9, a target page 921 of a terminal 920 includes a text portion and a picture portion, and each picture corresponds to its own picture identifier; the target page includes n pictures, corresponding to picture identifier 1 through picture identifier n. Before the target page 921 is displayed, each picture in the target page 921 passes through a picture text recognition module 911 in the server 910, where picture text recognition is performed, each picture can be divided into a text content portion and an other-content portion, and the correspondence between the text content portion and the picture identifier is stored in a database 912. When the partial page enlargement process is started, a target enlargement area in the target page 921 is selected based on a second trigger operation. If the target enlargement area includes the pictures corresponding to picture identifier 1 and picture identifier 2 as well as other contents, the database 912 of the server 910 can be queried to obtain text content 1 and text content 2 in the picture corresponding to picture identifier 1, and text content 3 and text content 4 in the picture corresponding to picture identifier 2. A target multiple is determined by a computing module in the terminal 920 based on a first operation; text content 1 through text content 4 and the other contents are enlarged by the target multiple, and an enlargement layer 922 is created on the upper layer of the target page 921, on which the enlarged text contents and other contents are displayed.
In summary, in the solution shown in this embodiment of the present application, a target enlargement area of a target page is determined, text content in the target enlargement area is then obtained based on picture text recognition, and the text content is enlarged by a target multiple and displayed in the target page. With this solution, the text content of the target enlargement area can be recognized through picture text recognition, which avoids the distortion caused by enlarging text content that is not clear enough, reduces the distortion rate of the text content after the page is enlarged, reduces the time a user spends acquiring the text content, and saves the power of the terminal.
Fig. 10 is a block diagram illustrating a page content presentation apparatus according to an exemplary embodiment, and as shown in fig. 10, the page content presentation apparatus may be implemented as all or part of a computer device in hardware or a combination of hardware and software to perform all or part of the steps of the method shown in the corresponding embodiment of fig. 2 or 4. The page content presentation apparatus may include:
a page display module 1010, configured to display a target page in a screen;
a region determining module 1020, configured to determine a target enlargement area in a target page in the screen based on a first trigger operation on the target page;
an image obtaining module 1030, configured to obtain a local page picture corresponding to the target enlargement area, where the local page picture is a picture including the content of the target enlargement area;
a text acquisition module 1040, configured to acquire an identification text in the local page picture, where the identification text is the text content determined based on picture text recognition;
a size obtaining module 1050, configured to enlarge the size of the text content in the local page picture by a target multiple to obtain a display size of the identification text;
a text display module 1060, configured to display the identification text on the upper layer of the target page at the display size of the identification text.
In one possible implementation manner, the text acquisition module 1040 includes:
and the first text acquisition sub-module is used for performing picture character recognition on the local page picture in response to the fact that the local page picture is the screenshot of the content of the target amplification area to acquire the recognition text in the local page picture.
In a possible implementation manner, the text acquisition module 1040 includes:
the identification determining sub-module is used for determining the image identification corresponding to the local page image in response to the fact that the local page image comprises partial image content corresponding to the target amplification area in the target page;
and the second text acquisition sub-module is used for inquiring and acquiring the identification text corresponding to the picture identification from a server.
In one possible implementation, the apparatus further includes:
the picture position acquisition sub-module is used for acquiring the position information of the part of the picture content in the corresponding picture before the recognition text corresponding to the picture identification is inquired and acquired from the server;
the second text acquisition sub-module includes:
a character position acquiring unit, configured to acquire, from the server, each identification character corresponding to the picture identifier and position information of each identification character;
and the text acquisition unit is used for acquiring the identification text based on the position information of each identification character and the position information of the part of the picture content in the corresponding picture.
In one possible implementation, the apparatus further includes:
the background size acquisition module is used for amplifying the size of the background content by the target multiple to obtain the display size of the background content; the background content is a part of the target amplification area except the text content;
and the background display module is used for displaying the background content on the upper layer of the target page according to the display size of the background content.
In one possible implementation manner, the region determining module 1020 includes:
the picture display submodule is used for responding to the received first trigger operation and displaying a first picture on the screen; the first picture comprises a selection icon;
the trigger position acquisition submodule is used for acquiring a trigger position based on the received second trigger operation; the trigger position is the position of the selection icon when the second trigger operation is received;
and the region determining submodule is used for determining the target amplification region in the target page based on the trigger position.
In one possible implementation manner, the region determining sub-module includes:
a first area determination unit, configured to acquire a trigger range area, where the trigger range area is an area within a first range around the trigger position;
and an area determining unit, configured to determine the trigger range area as the target enlargement area.
In one possible implementation manner, the region determining sub-module includes:
a path obtaining unit, configured to obtain a trigger path of the selection icon; the trigger path is used for indicating a moving path of the trigger position;
and the second area determining unit is used for generating at least one closed graph in response to the trigger path and determining an area surrounded by the closed graph as the target amplification area.
In one possible implementation manner, the region determining sub-module includes:
a third area determination unit, configured to determine, in response to that the trigger position is located in a designated area, that the designated area is the target enlargement area in the target page; the target page comprises at least one of the designated regions.
In one possible implementation, the apparatus further includes:
and the multiple determining module is used for amplifying the size of the text content in the local page picture by a target multiple, and determining the target multiple based on the received first operation before obtaining the display size of the identification text.
In a possible implementation manner, the multiple determining module includes:
the interface display submodule is used for displaying a target multiple setting interface on the screen; the target multiple setting interface is used for setting a target multiple;
and the magnification selection submodule is used for selecting a specified magnification or inputting a customized magnification as the target multiple on the target multiple setting interface.
In one possible implementation manner, in response to the terminal being a touch-enabled terminal, the multiple determining module includes:
the distance acquisition sub-module is used for acquiring touch sliding distances of at least two contact points on the screen;
and the factor determining submodule is used for determining the target factor based on the corresponding relation between the touch sliding distance and the amplification factor.
In summary, in the solution shown in this embodiment of the present application, a target enlargement area of a target page is determined, text content in the target enlargement area is then obtained based on picture text recognition, and the text content is enlarged by a target multiple and displayed on the uppermost layer of the target page. With this solution, the text content of the target enlargement area can be recognized through picture text recognition, which avoids the distortion caused by enlarging text content that is not clear enough, reduces the distortion rate of the text content after the page is enlarged, reduces the time a user spends acquiring the text content, and saves the power of the terminal.
FIG. 11 is a block diagram illustrating the structure of a computer device 1100 according to an example embodiment. The computer device 1100 may be a terminal in the system shown in fig. 1.
Generally, the computer device 1100 includes: a processor 1101 and a memory 1102.
The processor 1101 may include one or more processing cores, such as a 4-core processor, an 8-core processor, or the like. The processor 1101 may be implemented in at least one hardware form of a DSP (Digital Signal Processing), an FPGA (Field-Programmable Gate Array), and a PLA (Programmable Logic Array). The processor 1101 may also include a main processor and a coprocessor, where the main processor is a processor for processing data in a wake state, also called a Central Processing Unit (CPU); a coprocessor is a low-power processor for processing data in a standby state. In some embodiments, the processor 1101 may be integrated with a GPU (Graphics Processing Unit) that is responsible for rendering and drawing the content that the display screen needs to display. In some embodiments, the processor 1101 may further include an AI (Artificial Intelligence) processor for processing computing operations related to machine learning.
The memory 1102 may include one or more computer-readable storage media, which may be non-transitory. The memory 1102 can also include high-speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices and flash memory storage devices. In some embodiments, a non-transitory computer-readable storage medium in the memory 1102 is used to store at least one instruction for execution by the processor 1101 to implement the methods provided by the method embodiments herein.
In some embodiments, the computer device 1100 may also optionally include: a peripheral interface 1103 and at least one peripheral device. The processor 1101, the memory 1102, and the peripheral interface 1103 may be connected by buses or signal lines. Various peripheral devices may be connected to the peripheral interface 1103 by buses, signal lines, or circuit boards. Specifically, the peripheral device includes: at least one of a radio frequency circuit 1104, a display screen 1105, a camera assembly 1106, an audio circuit 1107, a positioning assembly 1108, and a power supply 1109.
The peripheral interface 1103 may be used to connect at least one peripheral device related to I/O (Input/Output) to the processor 1101 and the memory 1102. In some embodiments, the processor 1101, the memory 1102, and the peripheral interface 1103 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 1101, the memory 1102, and the peripheral interface 1103 may be implemented on a separate chip or circuit board, which is not limited in this embodiment.
The radio frequency circuit 1104 is used to receive and transmit RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuit 1104 communicates with communication networks and other communication devices via electromagnetic signals. The radio frequency circuit 1104 converts an electrical signal into an electromagnetic signal for transmission, or converts a received electromagnetic signal into an electrical signal. Optionally, the radio frequency circuit 1104 includes: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so forth. The radio frequency circuit 1104 may communicate with other terminals via at least one wireless communication protocol. The wireless communication protocols include, but are not limited to: the world wide web, metropolitan area networks, intranets, generations of mobile communication networks (2G, 3G, 4G, and 5G), wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the radio frequency circuit 1104 may further include an NFC (Near Field Communication) related circuit, which is not limited in this application.
The display screen 1105 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display screen 1105 is a touch display screen, the display screen 1105 also has the ability to capture touch signals on or above its surface. A touch signal may be input to the processor 1101 as a control signal for processing. In that case, the display screen 1105 may also be used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, there may be one display screen 1105, providing the front panel of the computer device 1100; in other embodiments, there may be at least two display screens 1105, each disposed on a different surface of the computer device 1100 or in a folded design; in still other embodiments, the display screen 1105 may be a flexible display disposed on a curved or folded surface of the computer device 1100. The display screen 1105 may even be arranged in a non-rectangular irregular pattern, i.e., an irregularly shaped screen. The display screen 1105 may be made using an LCD (Liquid Crystal Display), an OLED (Organic Light-Emitting Diode) panel, and the like.
The camera assembly 1106 is used to capture images or video. Optionally, the camera assembly 1106 includes a front camera and a rear camera. Generally, the front camera is disposed on the front panel of the terminal, and the rear camera is disposed on the back of the terminal. In some embodiments, there are at least two rear cameras, each being any one of a main camera, a depth-of-field camera, a wide-angle camera, and a telephoto camera, so that the main camera and the depth-of-field camera can be fused to realize a background blurring function, and the main camera and the wide-angle camera can be fused to realize panoramic shooting, VR (Virtual Reality) shooting, or other fused shooting functions. In some embodiments, the camera assembly 1106 may also include a flash. The flash may be a single-color-temperature flash or a dual-color-temperature flash. A dual-color-temperature flash is a combination of a warm-light flash and a cold-light flash, and can be used for light compensation at different color temperatures.
The audio circuit 1107 may include a microphone and a speaker. The microphone is used to collect sound waves from the user and the environment, convert them into electrical signals, and input the electrical signals to the processor 1101 for processing, or to the radio frequency circuit 1104 for voice communication. Multiple microphones may be provided, disposed at different parts of the computer device 1100, for stereo sound collection or noise reduction. The microphone may also be an array microphone or an omnidirectional pickup microphone. The speaker is used to convert electrical signals from the processor 1101 or the radio frequency circuit 1104 into sound waves. The speaker may be a conventional diaphragm speaker or a piezoelectric ceramic speaker. When the speaker is a piezoelectric ceramic speaker, it can convert an electrical signal into sound waves audible to humans, or into sound waves inaudible to humans for purposes such as distance measurement. In some embodiments, the audio circuit 1107 may also include a headphone jack.
The positioning component 1108 is used to locate the current geographic location of the computer device 1100 for navigation or LBS (Location Based Service). The positioning component 1108 may be a positioning component based on the GPS (Global Positioning System) of the United States, the Beidou System of China, or the Galileo System of the European Union.
The power supply 1109 is used to supply power to the various components in the computer device 1100. The power supply 1109 may use alternating current or direct current, and may include disposable or rechargeable batteries. When the power supply 1109 includes a rechargeable battery, the rechargeable battery may be a wired rechargeable battery or a wireless rechargeable battery. A wired rechargeable battery is charged through a wired line, and a wireless rechargeable battery is charged through a wireless coil. The rechargeable battery may also support fast-charging technology.
In some embodiments, the computer device 1100 also includes one or more sensors 1110. The one or more sensors 1110 include, but are not limited to: an acceleration sensor 1111, a gyroscope sensor 1112, a pressure sensor 1113, a fingerprint sensor 1114, an optical sensor 1115, and a proximity sensor 1116.
The acceleration sensor 1111 can detect the magnitude of acceleration along the three axes of a coordinate system established with respect to the computer device 1100. For example, the acceleration sensor 1111 may be configured to detect the components of gravitational acceleration along the three coordinate axes. The processor 1101 may control the touch display screen 1105 to display the user interface in a landscape or portrait view according to the gravitational acceleration signal collected by the acceleration sensor 1111. The acceleration sensor 1111 may also be used to collect game or user motion data.
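As an illustrative sketch only (the axis convention and the decision rule are assumptions, not taken from this application), the landscape/portrait choice described above can be reduced to comparing the gravity components along the two screen axes:

```python
def orientation_from_gravity(gx: float, gy: float, gz: float) -> str:
    """Pick a UI orientation from accelerometer gravity components.

    Assumed axis convention (common on mobile devices): x runs across
    the short edge of the screen, y along the long edge, z out of the
    screen. Whichever screen axis carries more of gravity is "down".
    """
    return "landscape" if abs(gx) > abs(gy) else "portrait"
```

For example, a phone held upright reports gravity mostly on its y axis, so the sketch yields "portrait"; rotated on its side, gravity shifts to the x axis and it yields "landscape".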
The gyroscope sensor 1112 may detect the body orientation and rotation angle of the computer device 1100, and may cooperate with the acceleration sensor 1111 to capture the user's 3D motion on the computer device 1100. From the data collected by the gyroscope sensor 1112, the processor 1101 may implement the following functions: motion sensing (such as changing the UI according to a tilting operation by the user), image stabilization during shooting, game control, and inertial navigation.
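One common way to make a gyroscope "cooperate" with an accelerometer, as described above, is a complementary filter: the gyroscope rate is integrated for short-term responsiveness while the accelerometer-derived angle corrects long-term drift. The function below is a generic sketch of that idea, not a formula taken from this application; the blend factor `alpha` is an assumed tuning parameter.

```python
def complementary_filter(prev_angle: float, gyro_rate: float,
                         accel_angle: float, dt: float,
                         alpha: float = 0.98) -> float:
    """Fuse a gyroscope angular rate (deg/s) with an accelerometer-
    derived tilt angle (deg) over a time step dt (s)."""
    # Integrate the gyro for the fast, responsive part; pull toward the
    # accelerometer estimate with weight (1 - alpha) to cancel gyro drift.
    return alpha * (prev_angle + gyro_rate * dt) + (1 - alpha) * accel_angle
```

Calling this once per sensor sample keeps the estimated angle responsive to rotation while slowly converging to the accelerometer's gravity-based reading when the device is still.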
The pressure sensor 1113 may be disposed on the side bezel of the computer device 1100 and/or beneath the touch display screen 1105. When the pressure sensor 1113 is disposed on the side bezel of the computer device 1100, a user's grip signal on the computer device 1100 can be detected, and the processor 1101 performs left/right-hand recognition or shortcut operations according to the grip signal collected by the pressure sensor 1113. When the pressure sensor 1113 is disposed beneath the touch display screen 1105, the processor 1101 controls operability controls on the UI according to the user's pressure operations on the touch display screen 1105. The operability controls include at least one of a button control, a scroll bar control, an icon control, and a menu control.
The fingerprint sensor 1114 is configured to collect the user's fingerprint, and the processor 1101 identifies the user according to the fingerprint collected by the fingerprint sensor 1114, or the fingerprint sensor 1114 identifies the user according to the collected fingerprint. Upon identifying the user as trusted, the processor 1101 authorizes the user to perform relevant sensitive operations, including unlocking the screen, viewing encrypted information, downloading software, making payments, changing settings, and the like. The fingerprint sensor 1114 may be disposed on the front, back, or side of the computer device 1100. When a physical key or vendor logo is provided on the computer device 1100, the fingerprint sensor 1114 may be integrated with the physical key or vendor logo.
The optical sensor 1115 is used to collect the ambient light intensity. In one embodiment, the processor 1101 may control the display brightness of the touch display screen 1105 based on the ambient light intensity collected by the optical sensor 1115. Specifically, when the ambient light intensity is high, the display brightness of the touch display screen 1105 is increased; when the ambient light intensity is low, the display brightness of the touch display screen 1105 is reduced. In another embodiment, the processor 1101 may also dynamically adjust the shooting parameters of the camera assembly 1106 based on the ambient light intensity collected by the optical sensor 1115.
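A minimal sketch of the brightness rule above, assuming a simple linear mapping between two lux thresholds (the thresholds and the 0.0-1.0 brightness scale are illustrative choices, not values from this application):

```python
def display_brightness(lux: float, low_lux: float = 10.0,
                       high_lux: float = 1000.0) -> float:
    """Map ambient light intensity (lux) to a display brightness in
    [0.0, 1.0]: dim in the dark, bright in strong light, linear in
    between the two assumed thresholds."""
    if lux <= low_lux:
        return 0.0
    if lux >= high_lux:
        return 1.0
    return (lux - low_lux) / (high_lux - low_lux)
```

A real implementation would typically smooth the sensor readings and apply a perceptual (non-linear) curve, but the clamp-and-interpolate shape is the same.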
The proximity sensor 1116, also called a distance sensor, is typically provided on the front panel of the computer device 1100. The proximity sensor 1116 is used to capture the distance between the user and the front face of the computer device 1100. In one embodiment, when the proximity sensor 1116 detects that the distance between the user and the front face of the computer device 1100 is gradually decreasing, the processor 1101 controls the touch display screen 1105 to switch from a screen-on state to a screen-off state; when the proximity sensor 1116 detects that the distance between the user and the front face of the computer device 1100 is gradually increasing, the processor 1101 controls the touch display screen 1105 to switch from the screen-off state back to the screen-on state.
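The screen-on/screen-off switching described above can be sketched as a small state machine. The two distance thresholds, and the hysteresis gap between them that keeps small fluctuations from flickering the screen, are illustrative assumptions rather than values from this application:

```python
class ProximityScreenController:
    """Toggle the screen from a stream of user-to-front-face distance
    samples, with hysteresis between the near and far thresholds."""

    def __init__(self, near_cm: float = 3.0, far_cm: float = 6.0):
        self.near_cm = near_cm    # closer than this: turn the screen off
        self.far_cm = far_cm      # farther than this: turn it back on
        self.screen_on = True

    def update(self, distance_cm: float) -> bool:
        if self.screen_on and distance_cm < self.near_cm:
            self.screen_on = False   # user approaching the front face
        elif not self.screen_on and distance_cm > self.far_cm:
            self.screen_on = True    # user moving away again
        return self.screen_on
```

Because the off threshold (3 cm) is below the on threshold (6 cm), a reading that hovers between the two leaves the current state unchanged.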
Those skilled in the art will appreciate that the configuration illustrated in FIG. 11 does not constitute a limitation of the computer device 1100, which may include more or fewer components than those illustrated, combine certain components, or employ a different arrangement of components.
In an exemplary embodiment, a non-transitory computer-readable storage medium including instructions is also provided, such as a memory including at least one instruction, at least one program, a code set, or an instruction set, executable by a processor to perform all or part of the steps of the method illustrated in any of the embodiments of FIG. 2 or FIG. 4 above. For example, the non-transitory computer-readable storage medium may be a ROM, a RAM, a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like.
Those skilled in the art will recognize that, in one or more of the examples described above, the functions described in the embodiments of the disclosure may be implemented in hardware, software, firmware, or any combination thereof. When implemented in software, the functions may be stored on or transmitted over a computer-device-readable medium as one or more instructions or code. Computer-device-readable media include both computer device storage media and communication media, the latter including any medium that facilitates transfer of a computer device program from one place to another. A storage medium may be any available medium that can be accessed by a general-purpose or special-purpose computer device.
According to an aspect of the application, a computer program product or computer program is provided, comprising computer instructions, the computer instructions being stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions to cause the computer device to execute the page content presentation method provided in the various alternative implementations of the above aspects.
Other embodiments of the present application will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the application and including such departures from the present disclosure as come within known or customary practice within the art to which the invention pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the application being indicated by the following claims.
It will be understood that the present application is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the application is limited only by the appended claims.

Claims (12)

CN202011431434.9A | 2020-12-07 | 2020-12-07 | Page content display method and device, computer equipment and storage medium | Active | CN112578971B (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN202011431434.9A | 2020-12-07 | 2020-12-07 | Page content display method and device, computer equipment and storage medium

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN202011431434.9A | 2020-12-07 | 2020-12-07 | Page content display method and device, computer equipment and storage medium

Publications (2)

Publication Number | Publication Date
CN112578971A | 2021-03-30
CN112578971B | 2023-02-10

Family

ID=75131024

Family Applications (1)

Application Number | Status | Publication | Priority Date | Filing Date | Title
CN202011431434.9A | Active | CN112578971B (en) | 2020-12-07 | 2020-12-07 | Page content display method and device, computer equipment and storage medium

Country Status (1)

Country | Link
CN (1) | CN112578971B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN113536173B (en)* | 2021-07-14 | 2024-01-16 | 抖音视界有限公司 | Page processing method and device, electronic equipment and readable storage medium
CN118550433A (en)* | 2021-08-02 | 2024-08-27 | 珠海金山办公软件有限公司 | File processing method, device, electronic device and computer-readable storage medium
CN115097966B (en)* | 2022-06-23 | 2024-11-05 | 北京字跳网络技术有限公司 | A page display method, device, equipment and storage medium
CN115061603B (en)* | 2022-06-28 | 2024-06-28 | 上海擎朗智能科技有限公司 | Display method and device of distribution interactive interface, robot and storage medium
CN115421828A (en)* | 2022-08-03 | 2022-12-02 | 阿里巴巴(中国)有限公司 | Page rendering method and device, electronic equipment and storage medium
CN116107684B (en)* | 2023-04-12 | 2023-08-15 | 天津中新智冠信息技术有限公司 | Page amplification processing method and terminal equipment
CN118860234A (en)* | 2023-04-26 | 2024-10-29 | 北京有竹居网络技术有限公司 | A data display method, device, equipment and storage medium
WO2025091279A1 (en)* | 2023-10-31 | 2025-05-08 | 京东方科技集团股份有限公司 | Screen projection display system, screen projection display method and related apparatus

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN100505562C (en)* | 2003-11-07 | 2009-06-24 | 英华达(南京)科技有限公司 | How to enlarge and display a partial area of the screen
CN105824561A (en)* | 2016-03-17 | 2016-08-03 | 广东欧珀移动通信有限公司 | A text zooming method and device in a display interface
CN106648318A (en)* | 2016-12-19 | 2017-05-10 | 广州视源电子科技股份有限公司 | Word processing method and system
TW201907285A (en)* | 2017-04-28 | 2019-02-16 | 日商松下知識產權經營股份有限公司 | Display device
FR3091371B1 (en)* | 2018-12-27 | 2021-06-25 | Forecomm | Touch interface for displaying and handling a document and method of implementation.

Also Published As

Publication number | Publication date
CN112578971A (en) | 2021-03-30

Similar Documents

Publication | Title
CN112578971B (en) | Page content display method and device, computer equipment and storage medium
CN109308205B (en) | Display adaptation method, device, equipment and storage medium of application program
CN110321126B (en) | Method and device for generating page code
CN111432245B (en) | Multimedia information playing control method, device, equipment and storage medium
CN112749613A (en) | Video data processing method and device, computer equipment and storage medium
CN112667835B (en) | Works processing method, device, electronic device and storage medium
CN110570460A (en) | Target tracking method and device, computer equipment and computer readable storage medium
CN111541907A (en) | Item display method, device, equipment and storage medium
CN110933468A (en) | Playing method, playing device, electronic equipment and medium
CN110941375A (en) | Method and device for locally amplifying image and storage medium
CN111539795A (en) | Image processing method, image processing device, electronic equipment and computer readable storage medium
CN110543350A (en) | Method and device for generating page component
CN111062248A (en) | Image detection method, device, electronic equipment and medium
CN110647881A (en) | Method, device, equipment and storage medium for determining card type corresponding to image
CN112257006A (en) | Page information configuration method, device, equipment and computer readable storage medium
CN112489006A (en) | Image processing method, image processing device, storage medium and terminal
CN112565806A (en) | Virtual gift presenting method, device, computer equipment and medium
CN111327819A (en) | Method, device, electronic equipment and medium for selecting image
CN111370096A (en) | Interactive interface display method, device, equipment and storage medium
CN113051485A (en) | Group searching method, device, terminal and storage medium
CN113190302A (en) | Information display method and device, electronic equipment and storage medium
CN114596215B (en) | Method, device, electronic equipment and medium for processing image
CN111949341A (en) | Method, device and equipment for displaying information and storage medium
CN114860363A (en) | Content item display method and device and electronic equipment
HK40040441B (en) | Method and apparatus for displaying page content, computer device and storage medium

Legal Events

Code | Title | Description
PB01 | Publication |
SE01 | Entry into force of request for substantive examination |
REG | Reference to a national code | Ref country code: HK; Ref legal event code: DE; Ref document number: 40040441; Country of ref document: HK
GR01 | Patent grant |
