CN118363503A - Screen display method, electronic device and readable storage medium - Google Patents

Screen display method, electronic device and readable storage medium

Info

Publication number
CN118363503A
CN118363503A (application CN202310078366.XA)
Authority
CN
China
Prior art keywords
layer
user
interface
covering
screen
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310078366.XA
Other languages
Chinese (zh)
Inventor
范静雅
言银
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huawei Device Co Ltd
Original Assignee
Huawei Device Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Device Co Ltd
Priority to CN202310078366.XA
Priority to PCT/CN2023/134778 (WO2024152747A1)
Publication of CN118363503A
Status: Pending

Abstract

The application relates to the technical field of intelligent terminals, and in particular to a screen display method, an electronic device, and a readable storage medium. Applied to an electronic device, the method includes: displaying a first interface; detecting a first user operation of a user on the first interface; and, in response to the first user operation, displaying a second interface, where the second interface is a display interface in which at least part of the area of the first interface is covered using a first covering mode corresponding to the first user operation. The user can thus start at least one screen-covering mode to cover the first interface, so that private information is protected in more usage scenarios of the electronic device, with convenient operation.

Description

Screen display method, electronic device and readable storage medium
Technical Field
The application relates to the technical field of intelligent terminals, and in particular to a screen display method, an electronic device, and a readable storage medium.
Background
In recent years, mobile terminals have been used in more and more scenarios. For example, when learning a foreign language on a mobile terminal, a user may want to cover either the words or their translations to memorize them more effectively; when working through exercises, a user may want to cover the answers and solutions printed below the questions, so as to see clearly where their own reasoning differs from the answer. In such scenarios, the user usually covers the content manually with a piece of paper, which is inconvenient, and when no paper or other covering object is at hand, the screen is difficult to cover at all, so the approach offers poor real-time availability.
In addition, in public places such as subways, if the user does not want others to see part of the screen content, such as private information or a payment QR code, the user can only protect it by reducing the screen brightness, locking the screen, or applying an anti-peep screen film. However, an anti-peep film must be replaced regularly, incurs cost, and may reduce screen clarity and thus degrade the user experience. Reducing screen brightness strains the eyes, and in a payment scenario a code scanner has difficulty recognizing a dim screen, which is inconvenient for the user.
Therefore, how to conveniently cover a mobile phone screen in various usage scenarios has become an urgent problem to be solved.
Disclosure of Invention
Embodiments of the application provide a screen display method, an electronic device, and a readable storage medium. A first interface is displayed; a first user operation of a user on the first interface is detected; and in response to the first user operation, a second interface is displayed, where the second interface is a display interface in which at least part of the area of the first interface is covered using a first covering mode corresponding to the first user operation. The user can thus start at least one screen-covering mode to cover the first interface, protecting private information in more usage scenarios of the electronic device with convenient operation.
In a first aspect, an embodiment of the application provides a screen display method applied to an electronic device. The method includes: displaying a first interface; detecting a first user operation of a user on the first interface; and, in response to the first user operation, displaying a second interface, where the second interface is a display interface in which at least part of the area of the first interface is covered using a first covering mode corresponding to the first user operation.
That is, a shortcut control or shortcut gesture for triggering screen covering may be preset. When a user operation on the shortcut control or the shortcut gesture is detected for the first time, at least one corresponding covering mode is started, and the attribute parameters of a covering layer are determined based on that covering mode so as to cover at least part of the area of the first interface. The user can thus start at least one screen-covering mode through the shortcut control or shortcut gesture, protecting private information in more usage scenarios of the electronic device with convenient operation.
It will be appreciated that the at least one covering mode described above may be used to cover all or part of the display area of the screen.
In some embodiments, covering the whole display area of the screen may be implemented using a full-screen covering mode, and covering a local display area of the screen may be implemented using a custom covering mode or a picture covering mode. The electronic device may thus determine three different covering modes through the first user operation and generate different covering layers based on them. Because the electronic device responds differently to different covering modes, covering layers can be generated flexibly according to user needs, the operation flow is simplified, the user quickly becomes familiar with the screen-covering operation, and more screen-covering scenarios are conveniently supported.
In a possible implementation of the first aspect, after displaying the second interface in response to the first user operation, the method further includes: displaying the first interface again in response to the first user operation being detected again.
That is, when the electronic device detects the first user operation again, the covering mode may be exited and the covering layer is no longer displayed. For example, a two-finger tap on the screen may be preset as the first user operation; when the electronic device detects the user's two-finger tap for the first time, the corresponding screen-covering mode is started. When the electronic device detects the two-finger tap a second time, the covering mode is exited, for example by deleting the covering layer, so that the screen no longer displays it and the other controls on the screen can respond to user operations without interference. The covering layer therefore does not affect the user's subsequent use of the electronic device.
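The on/off toggle described above can be sketched minimally as follows. This is an illustrative Python sketch, not part of the claimed method; the class name, field, and return values are assumptions chosen for clarity.

```python
# Illustrative sketch (names are hypothetical): toggling a covering mode
# on repeated detections of the same "first user operation",
# e.g. a two-finger tap on the screen.

class CoverController:
    def __init__(self):
        self.cover_layer = None  # no covering layer initially

    def on_first_user_operation(self):
        if self.cover_layer is None:
            # first detection: create and show the covering layer
            self.cover_layer = {"visible": True}
            return "second interface"  # covered display
        else:
            # second detection: delete the layer and exit covering mode,
            # so other on-screen controls respond to the user again
            self.cover_layer = None
            return "first interface"   # original display
```

Each detection of the same operation flips the state, so a single gesture both enters and exits the covering mode.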
In a possible implementation of the first aspect, displaying the second interface in response to the first user operation includes: creating a first layer for covering at least part of the area of the first interface according to the first covering mode corresponding to the first user operation; and compositing the first layer with the first interface to display a second interface in which at least part of the area of the first interface is covered.
That is, a first layer corresponding to the first covering mode, which may be any one of a plurality of covering modes, may be created. The first layer may be a covering layer and the first interface may be the current view layer; the second interface, in which at least part of the area of the first interface is covered, is obtained by compositing the covering layer with the current view layer, so that the screen is covered according to different covering modes.
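As one way to picture the compositing step, a per-pixel source-over blend of the covering layer onto an opaque view layer can be sketched as follows. This is standard alpha blending shown for illustration; the patent does not specify the blend equation.

```python
# Hypothetical sketch: compositing one pixel of the covering layer
# ("first layer") over one pixel of the current view layer
# ("first interface") with source-over alpha blending.

def blend_pixel(cover_rgba, view_rgb):
    """Blend a cover pixel with alpha a over an opaque view pixel."""
    r, g, b, a = cover_rgba  # a in [0.0, 1.0]
    vr, vg, vb = view_rgb
    return (
        round(r * a + vr * (1 - a)),
        round(g * a + vg * (1 - a)),
        round(b * a + vb * (1 - a)),
    )
```

With alpha 1.0 the cover fully hides the view pixel; with alpha 0.0 the view pixel shows through unchanged, which is why adjusting the layer's transparency later reveals or hides the covered content.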
In a possible implementation of the first aspect, creating the first layer according to the first covering mode corresponding to the first user operation includes: creating a first layer that completely covers the display area of the first interface according to the first covering mode; or creating a first layer that partially covers the display area of the first interface according to the first covering mode.
That is, according to the first covering mode corresponding to the first user operation, the display area of the first interface can be covered completely or partially, which makes the covering operation convenient and simple for the user.
In a possible implementation of the first aspect, creating the first layer that completely covers the display area of the first interface according to the first covering mode includes: determining that the first covering mode is a full-screen covering mode and creating a first preset layer, where the first preset layer is a transparent layer placed on the top layer and sized to fit the screen of the electronic device; and obtaining a first color selected by the user and filling the first preset layer with the first color to obtain the first layer that completely covers the display area of the first interface.
It is understood that the full-screen covering mode covers the entire screen. The electronic device may generate a corresponding covering layer based on the determined covering mode. For example, after detecting the first user operation, the electronic device may construct a transparent covering layer and place it on the top layer (hereinafter, "top-placed"). The electronic device may then, based on the determined covering mode, generate the covering layer at a preset position on the transparent layer. To cover the full screen, a transparent, top-placed first preset layer matching the screen size can be created directly and filled with the color selected by the user, yielding a first layer that completely covers the display area of the first interface.
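A minimal sketch of the full-screen covering layer's attribute parameters, assuming a plain dictionary representation (all field names are hypothetical, not from the patent):

```python
# Illustrative sketch: the full-screen covering mode produces a
# top-most layer sized to the screen and filled with the user's color.

def create_full_screen_cover(screen_w, screen_h, fill_color):
    return {
        "x": 0, "y": 0,
        "width": screen_w, "height": screen_h,  # adapted to screen size
        "z_order": "top",                        # placed on the top layer
        "fill": fill_color,                      # first color chosen by user
        "alpha": 1.0,                            # fully opaque by default
    }
```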
In a possible implementation of the first aspect, creating the first layer that partially covers the display area of the first interface according to the first covering mode includes any one of the following: creating the first layer according to the first covering mode and a touch operation of the user on the screen; and creating the first layer according to the first covering mode and a picture selected by the user.
In some embodiments, the first covering mode includes a custom covering mode and a picture covering mode. In the custom covering mode, a custom covering layer that partially covers the first interface may be created in response to a touch operation of the user on the screen; in the picture covering mode, a picture covering layer that partially covers the first interface may be created using a picture selected by the user. Various screen-covering schemes are thus provided to the user to cover different kinds of private information.
In a possible implementation of the first aspect, creating the first layer according to the first covering mode and a touch operation of the user on the screen includes: determining that the first covering mode is a custom covering mode and obtaining the touch operation of the user on the screen; drawing a second preset layer according to the start point and end point of the touch operation, where the second preset layer is a transparent layer placed on the top layer; and obtaining a second color selected by the user and filling the second preset layer with the second color to obtain the first layer that partially covers the display area of the first interface.
It will be appreciated that the custom covering mode determines the screen covering layer based on user operation. After the custom covering mode is started, a sliding operation of the user on the screen is detected, and the custom covering layer can be generated from it. For example, the user may draw an arbitrary shape, with the position where the finger lands as the start point and the position where the finger lifts as the end point, and fill it with a preset color, thereby generating the custom covering layer.
It can be appreciated that the user operation of drawing the custom covering layer pattern in the custom covering mode may be a preset gesture operation. For example, if the user slides a knuckle across the screen, the electronic device can identify the positions where the knuckle lands and lifts. As another example, the user may define the start point and end point with a double-click operation and draw an arbitrary shape between them to generate the custom covering layer.
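As a simplified sketch of the custom covering mode, the layer below covers the bounding rectangle spanned by the touch-down (start) and lift (end) points; the patent allows arbitrary shapes, so the rectangle and the field names are illustrative assumptions only.

```python
# Hypothetical sketch: a rectangular custom cover spanned by the
# touch start point and end point, filled with the user's second color.

def create_custom_cover(start, end, fill_color):
    (x0, y0), (x1, y1) = start, end
    return {
        "x": min(x0, x1), "y": min(y0, y1),      # top-left corner
        "width": abs(x1 - x0), "height": abs(y1 - y0),
        "z_order": "top",                         # top-placed layer
        "fill": fill_color,
    }
```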
In a possible implementation of the first aspect, creating the first layer according to the first covering mode and a picture selected by the user includes: determining that the first covering mode is a picture covering mode and generating a third preset layer, where the third preset layer is a transparent layer placed on the top layer; and obtaining a picture selected by the user and filling the third preset layer with the picture to obtain the first layer that partially covers the display area of the first interface.
It will be appreciated that the picture covering mode retrieves a user-specified picture to cover a specified screen area. For example, after the picture covering mode is started, the picture can be filled into the transparent top-placed layer to generate the picture covering layer, achieving a picture-covering effect.
It will be appreciated that the picture may be one specified by the user, such as a photo from the album or a picture downloaded from the web. The picture may also be a preset picture; this is not limited here.
It is understood that the transparent designated layer may be a transparent layer at a predetermined position, for example a transparent layer preset on the top of the screen with the same size as the screen. When the picture specified by the user, or the preset picture, is larger than the screen, the electronic device may display part of it on the screen, or may shrink it proportionally to the screen size before filling, so as to obtain the picture covering layer.
It will be appreciated that filling the transparent layer with the picture may also be done according to the user's needs, for example filling the top-placed transparent layer after freely stretching the picture, so that the picture is displayed partially and/or completely on the picture covering layer; this is not limited here.
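The proportional shrink mentioned above (a picture larger than the screen is scaled down at a fixed aspect ratio to fit) can be sketched as follows; the function name is an assumption.

```python
# Illustrative sketch: fit a user-selected picture into the screen-sized
# transparent layer. A picture exceeding the screen is shrunk with its
# aspect ratio preserved; smaller pictures are left as-is.

def fit_picture(pic_w, pic_h, screen_w, screen_h):
    scale = min(screen_w / pic_w, screen_h / pic_h, 1.0)  # never upscale
    return round(pic_w * scale), round(pic_h * scale)
```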
In a possible implementation of the first aspect, after displaying the second interface in response to the first user operation, the method further includes: detecting a second user operation acting on the first layer; updating the attribute parameters of the first layer based on the second user operation to obtain a second layer, where the second user operation includes a long-press operation of the user on the covering layer; and compositing the updated second layer with the first interface to display a third interface in which at least part of the area of the first interface is covered.
It will be appreciated that the second user operation, which acts on the covering layer, may include an operation that initiates parameter adjustment and an operation that adjusts a parameter. For example, the second user operation may be a long press on the covering layer, which starts modification of the covering layer's attribute parameters so that new attribute parameters can be determined from subsequent user adjustment operations.
In a possible implementation of the first aspect, updating the attribute parameters of the first layer based on the second user operation to obtain the second layer includes: updating the filling color, size, position, layer overlap order and/or transparency of the first layer based on the second user operation to obtain the second layer.
That is, the second user operation may initiate modification of the attribute parameters of the preset covering layer. The electronic device may determine the new attribute parameters based on the subsequently detected modifications specified by the second user operation.
In some embodiments, the attribute parameters of the covering layer include, but are not limited to, filling color, size, position, layer overlap order, and transparency.
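Updating the first layer's attribute parameters to obtain the second layer can be sketched as follows; the allowed-attribute set mirrors the parameters listed above, and the names are assumptions.

```python
# Hypothetical sketch: a long press on the covering layer starts
# parameter editing; applying the adjusted parameters to a copy of the
# first layer yields the updated second layer.

def update_layer(first_layer, **changes):
    allowed = {"fill", "width", "height", "x", "y", "z_order", "alpha"}
    second_layer = dict(first_layer)  # first layer left untouched
    for key, value in changes.items():
        if key not in allowed:
            raise ValueError(f"unsupported attribute: {key}")
        second_layer[key] = value
    return second_layer
```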
In a possible implementation of the first aspect, updating the transparency of the first layer based on the second user operation includes: starting a fourth interface for updating the transparency of the first layer according to the second user operation; obtaining a user touch operation acting on a designated area of the fourth interface or on a transparency control in the fourth interface; and updating the transparency of the first layer according to the touch operation.
That is, the electronic device initiates modification of the transparency of the first layer according to the second user operation; a transparency control may be started to give the user an entry for modifying the attribute parameter, for example the transparency of the covering layer. The touch operation may be a sliding operation on the transparency control or on a designated area of the fourth interface, and the transparency of the covering layer is updated based on it. For example, an upward slide may reduce the transparency of the covering layer, while a downward slide may increase it. The user can thus update the transparency of the covering layer in real time, which supports more covering-layer scenarios: for example, the answers can be covered while working through questions, and after answering, the transparency can be adjusted to check the answers conveniently.
In still other embodiments, when the transparency of the covering layer is below a preset threshold (that is, the layer is largely opaque), the screen function of the covered portion is turned off, preventing the user from accidentally touching the covered screen content while in covering mode. When the transparency of the covering layer reaches or exceeds the preset threshold, the screen function of the covered portion is enabled; for example, a click on an application icon that was originally covered can be received to start the application. When the user wants to interact with the screen area covered by the layer, the interactive function of the covered controls can be kept while the covering layer is kept highly transparent, so that the user can use the electronic device without being hindered by the covering mode, making operation more convenient. Conversely, in usage scenarios with a largely opaque covering layer, the user is prevented from accidentally triggering the covered screen functions. The covering layer therefore does not interfere with the user's use of the electronic device, improving the user experience.
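One way to sketch the slide-to-transparency mapping and the touch pass-through threshold is shown below, following the reading that a highly transparent layer lets touches reach the covered controls. The threshold value and slide sensitivity are arbitrary illustrative choices, not from the patent.

```python
# Illustrative sketch (assumed semantics and constants): a vertical
# slide adjusts the covering layer's transparency, and a threshold
# decides whether touches pass through to covered controls.

PASS_THROUGH_THRESHOLD = 0.7  # hypothetical preset threshold

def slide_to_transparency(current, dy, sensitivity=0.005):
    """dy > 0 is a downward slide (raises transparency); dy < 0 lowers it."""
    return min(1.0, max(0.0, current + dy * sensitivity))

def touch_passes_through(transparency):
    """Covered controls respond only when the layer is transparent enough."""
    return transparency >= PASS_THROUGH_THRESHOLD
```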
In a possible implementation of the first aspect, detecting the first user operation of the user on the first interface includes any one of the following: detecting a first preset gesture operation corresponding to the first covering mode; detecting a click operation on a first preset control, where the first preset control is provided in a shortcut menu of the first interface and the shortcut menu includes a drop-down shortcut; and detecting a click operation on a second preset control, where the second preset control is provided in a system shortcut interface, and the system shortcut interface includes a shortcut interface for invoking the screen-capture function.
That is, the first user operation may be a preset shortcut gesture, for example a knuckle tap on the screen, or the user tapping the screen a preset number of times at a preset frequency. This avoids the case where the drop-down shortcut fails to exit the covering mode, and spares the user multiple consecutive operations to start or exit the corresponding covering mode: a single gesture suffices, effectively reducing the complexity of user operation.
In other embodiments, the first user operation may be a touch operation on a first preset control in the drop-down shortcut that starts the first covering mode.
In still other embodiments, the first user operation may be a touch operation on a second preset control in the system shortcut that starts the first covering mode.
It will be appreciated that providing the user with multiple entry points for starting the first covering mode simplifies user operation and makes it easy to start covering the screen.
In a possible implementation of the first aspect, detecting the click operation on the second preset control includes: detecting a second preset gesture operation that wakes up the system shortcut and starting the system shortcut interface, where the system shortcut includes a screen-capture shortcut; and detecting the user's click operation on the second preset control in the system shortcut interface.
That is, the electronic device can start the screen-capture shortcut after detecting the preset gesture operation. For example, when the electronic device detects that the user slides a knuckle across the screen along a track that is closed end to end, it starts the screen-capture shortcut and displays the screen-capture shortcut interface. Several controls are provided in this interface, including the second preset control, which starts a covering mode. If the electronic device detects a click on the second preset control, the custom covering mode is started to generate the covering layer.
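The end-to-end closed knuckle track that wakes the screen-capture shortcut can be recognized, in a simplified sketch, by checking whether the trajectory's start and end points nearly coincide. The tolerance value is an assumption; a real recognizer would also check enclosed area and input type.

```python
# Illustrative sketch: a slide trajectory counts as "closed end to end"
# when its first and last sample points are within a small tolerance.

import math

def is_closed_track(points, tolerance=40.0):
    """True if the slide trajectory forms an (approximately) closed loop."""
    if len(points) < 3:
        return False  # too few samples to form a loop
    (x0, y0), (x1, y1) = points[0], points[-1]
    return math.hypot(x1 - x0, y1 - y0) <= tolerance
```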
In a second aspect, an embodiment of the application further provides an electronic device, including: one or more processors; and one or more memories storing one or more programs that, when executed by the one or more processors, cause the electronic device to perform the screen display method provided in the first aspect and its various possible implementations.
In a third aspect, embodiments of the application further provide a computer-readable storage medium storing instructions that, when executed on a computer, cause the computer to perform the screen display method provided in the first aspect and its various possible implementations.
In a fourth aspect, embodiments of the application further provide a computer program product comprising a computer program/instructions that, when executed by a processor, implement the screen display method provided in the first aspect and its various possible implementations.
Drawings
FIG. 1 illustrates a schematic diagram of an existing screen partial display operating scenario;
FIG. 2a is a diagram showing the effect of an interface with a screen fully covered in the case where the mobile phone 00 initiates a partial display mode;
FIG. 2b is a schematic view showing the interface effect of partially displayed content after the mobile phone 00 detects the user's erasing operation and makes the erased region transparent;
FIG. 3a illustrates an interface diagram for invoking a screen mask shortcut control in accordance with some embodiments of the present application;
FIG. 3b illustrates another interface diagram for invoking a screen mask shortcut control in accordance with some embodiments of the present application;
Fig. 4a shows a schematic hardware structure of an electronic device 100 according to an embodiment of the application;
FIG. 4b is a diagram illustrating interactions between structures in a system architecture of the electronic device 100, according to an embodiment of the present application;
FIG. 5 is a flow chart of a screen display method according to an embodiment of the application;
FIG. 6a illustrates a schematic view of a scenario of a shortcut of an electronic device 100 in accordance with some embodiments of the present application;
FIG. 6b illustrates a schematic view of a scenario of a preset shortcut control for an electronic device 100 in accordance with some embodiments of the present application;
FIG. 7 illustrates a full-screen covered scene schematic in accordance with some embodiments of the application;
FIG. 8a illustrates a scene graph of updating a cover layer according to some embodiments of the application;
FIG. 8b illustrates a scene graph of another updated cover layer, according to some embodiments of the application;
FIG. 8c illustrates a scene graph of yet another updated cover layer, according to some embodiments of the application;
FIG. 9a illustrates a scene graph of adjusting mask layer attribute parameters according to some embodiments of the application;
FIG. 9b illustrates a scene graph with adjustment of mask layer attribute parameters based on preset parameter controls, according to some embodiments of the application;
FIG. 9c illustrates a scene graph with adjustment of mask layer attribute parameters based on screen specific regions in combination with preset parameter controls, according to some embodiments of the application;
FIG. 10 illustrates a layer structure diagram of a user interface, according to some embodiments of the application;
FIG. 11 is a schematic flow diagram illustrating an implementation of generating a masking layer according to a masking pattern in accordance with an embodiment of the present application;
FIG. 12 illustrates a schematic diagram of a custom mask layer, according to some embodiments of the application;
FIG. 13 illustrates a schematic diagram of a picture masking layer, according to some embodiments of the application;
FIG. 14 is a schematic flow chart showing an implementation of yet another screen display method according to some embodiments of the present application;
FIG. 15 is a flow chart of another screen display method according to an embodiment of the application;
FIG. 16 illustrates a schematic diagram of a screenshot shortcut according to some embodiments of the application.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions of the embodiments are described in detail below with reference to the accompanying drawings and specific embodiments of the present application.
Illustrative embodiments of the application include, but are not limited to, screen display methods, electronic devices, computer-readable storage media, and the like.
Fig. 1 shows a schematic view of an existing operation scene partially displayed on a screen.
Referring to fig. 1, in an existing scheme of partial screen display, protection of the user's private on-screen information is achieved by combining full-screen covering with partial display. A control 02 for the partial display mode is provided in the screen drop-down menu bar 01 of the mobile phone 00 to turn the partial display mode on and off. When the mobile phone 00 detects the user operation 10 of clicking the control 02, it confirms that the partial display mode is started.
After the partial display mode is turned on, the screen of the mobile phone 00 may exhibit a covering effect as shown in fig. 2a to 2b described below.
Fig. 2a shows a schematic view of the interface effect with the screen fully covered when the mobile phone 00 starts the partial display mode.
Referring to fig. 2a, after the partial display mode is started, when the mobile phone 00 detects the user's operation 11 activating the screen, a covering layer 03 is newly created on top of the screen of the mobile phone 00. "Top of the screen" refers to the level above the layer holding the interface content displayed on the screen of the mobile phone 00; in some embodiments this may also be described as the top layer, or uppermost layer, of the screen or of the displayed interface. As shown in fig. 2a, the newly created covering layer 03 covers everything beneath it, so the screen of the mobile phone 00 is completely covered.
Fig. 2b shows a schematic view of the interface effect of the content of the display portion after the mobile phone 00 detects that the transparency of the user's erasing operation is set to 0.
After the new covering layer 03 is formed on the top of the mobile phone 00 screen, the display area on the mobile phone 00 screen can be completely covered by the covering layer. At this time, if the user needs the mobile phone 00 to locally display the contents on a part of the screen, for example, the time on the display screen, the part of the screen to be displayed can be displayed by the erasing operation 12. Referring to fig. 2b, when the mobile phone 00 detects the erasing operation 12 of the user on the screen, the erasing track 04 of the erasing operation 12 of the user can be identified based on the start and end positions of the erasing operation 12, so as to determine the screen content corresponding area 04 designated to be displayed by the user. And further, the covering layer 03 in the area 04 is partially converted into transparent so as to display the screen area required to be displayed by the user. At the moment, the user can view the displayable screen content while covering the screen, so that the user can protect the privacy information contained in the screen.
It can be appreciated that the above local display mode for protecting privacy information provides only a single covering mode. Such a single covering mode cannot adapt to multiple scenes, and it is also inconvenient for the user to operate. For example, in some privacy protection scenarios, the user needs to operate to turn on the local display mode, operate a preset control to activate the mode, and then erase the covering layer of a specified area from memory. If the user does not remember the area clearly, the covered privacy information is revealed by the erasing operation, so that the privacy information the user wants to protect is leaked, and the user cannot select another, more suitable covering mode at that moment.
In order to avoid the problem that a single covering mode cannot meet the covering requirements of users, an embodiment of the application provides a screen display method applied to an electronic device with a screen. Specifically, according to the method, a shortcut control or a shortcut gesture is used to trigger screen covering: when a user operation on the shortcut control or the shortcut gesture is detected for the first time, at least one corresponding covering mode is started, and the attribute parameters of the covering layer are determined based on the corresponding covering mode, so as to obtain the covering layer. In this way, the user can start at least one screen covering mode through the shortcut control or the shortcut gesture, privacy information in more usage scenes of the electronic device can be protected, and the operation is convenient.
It will be appreciated that the at least one covering mode described above may be used to cover all or part of the display area of the screen, where covering the entire display area of the screen may be implemented using the full-screen covering mode, and covering part of the display area may be implemented using the custom covering mode or the picture covering mode.
In the case where the electronic device starts the corresponding covering mode, the user can also set the pattern of the covering layer by operating on the covering layer displayed by the electronic device. For example, the user can set attributes such as the position, size, color and transparency of the covered area of the screen as needed, so that the user can apply a variety of different covering treatments to any specified display area of the screen, which improves user experience. Further, when the transparency of the covering layer is lower than a threshold value, the user can freely operate the covered part of the mobile phone screen without affecting the covering effect of the covering layer, which improves the reliability of protecting the privacy information on the screen.
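The pass-through behavior at low transparency can be sketched as a simple dispatch decision. The threshold value, the event representation, and the function names below are assumptions for illustration only: when the covering layer's transparency is below the threshold, touch events on the covered area are forwarded to the underlying interface instead of modifying the covering layer, so the covering effect is unaffected.

```python
PASS_THROUGH_THRESHOLD = 0.5  # assumed value, for illustration only

def dispatch_touch(layer_transparency, event):
    """Route a touch either to the underlying view or to the covering layer."""
    if layer_transparency < PASS_THROUGH_THRESHOLD:
        # Covering layer is near-opaque: touches operate the covered
        # content freely; the covering effect itself is unchanged.
        return ("underlying_view", event)
    # Covering layer is mostly transparent: touches may adjust the layer
    # itself (e.g. erase or resize the covered area).
    return ("cover_layer", event)

print(dispatch_touch(0.2, "tap"))
print(dispatch_touch(0.9, "tap"))
```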
It will be appreciated that the above attribute parameters include, but are not limited to, size, layer overlap order, fill color, transparency and the like, and may be set by the user as needed, which is not limited herein.
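As a purely illustrative record of the attribute parameters named above (size, layer overlap order, fill color, transparency), one might collect them in a single structure. The field names here are assumptions, not the patent's actual interface.

```python
from dataclasses import dataclass

@dataclass
class CoverLayerAttrs:
    width: int
    height: int
    z_order: int          # layer overlap order; higher draws on top
    fill_color: str       # e.g. "#000000" for a black covering layer
    transparency: float   # 0.0 = fully opaque .. 1.0 = fully transparent

# Example: a fully opaque black covering layer on top of everything.
attrs = CoverLayerAttrs(width=1080, height=2340, z_order=99,
                        fill_color="#000000", transparency=0.0)
print(attrs.z_order, attrs.transparency)
```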
Fig. 3a-3b illustrate interface diagrams for invoking a screen covering shortcut control in accordance with some embodiments of the present application.
Referring to operation 30 shown in fig. 3a, the electronic device 100 may detect the user's slide-down operation 30 from the top of the screen, and in response to operation 30, may display a shortcut corresponding to the control center.
Referring to fig. 3b, a shortcut of the control center may include a plurality of regions. Region 301 may include a control 3011 and a control 3012. When the user needs to invoke the screen covering function, the control 3011 can be clicked. When the electronic device 100 detects the user's click operation on the control 3011, the corresponding screen covering function may be started, for example a full-screen covering mode. A corresponding covering layer can then be generated based on the full-screen covering mode, so as to complete the covering of the screen. When the electronic device 100 detects a click operation on the control 3011 again, the activated screen covering function may be turned off. A screen covering function with one-key start and one-key close is thus realized, which simplifies user operation and effectively improves user experience.
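The one-key start / one-key close behavior of the control 3011 reduces to a simple toggle. This is a hedged sketch assuming a single boolean state; the class and method names are invented for illustration.

```python
class ScreenCoverToggle:
    """Models the one-key toggle bound to control 3011."""

    def __init__(self):
        self.full_screen_cover_on = False

    def on_control_3011_clicked(self):
        # First click starts the full-screen covering mode;
        # the next click turns it off again.
        self.full_screen_cover_on = not self.full_screen_cover_on
        return self.full_screen_cover_on

t = ScreenCoverToggle()
print(t.on_control_3011_clicked())  # True  -> covering started
print(t.on_control_3011_clicked())  # False -> covering closed
```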
Further, the user operation may be a specific gesture operation, and the target covering mode can be rapidly determined based on the mapping relationship between the specific gesture operation and the target covering mode, which improves convenience of operation. For convenience of distinction, in describing the specific implementation of the screen display method provided by the embodiments of the present application, the above user operation may be described as a first user operation, and the user's setting operation on the covering layer displayed on the electronic device may be described as a second user operation. Reference may be made to the following detailed description, which is not repeated here.
It can be appreciated that the electronic devices to which the screen display method provided by the embodiments of the present application is applicable may include, but are not limited to, mobile phones, folding-screen mobile phones, tablet computers, desktop computers, laptop computers, handheld computers, netbooks, and other electronic devices with screens such as augmented reality (Augmented Reality, AR)/virtual reality (Virtual Reality, VR) devices, smart televisions and smart watches. The screen of the electronic device to which the present application applies may be either a capacitive touch screen (i.e., a capacitive screen) or a resistive touch screen, which is not limited herein.
Fig. 4a shows a schematic hardware structure of the electronic device 100 according to an embodiment of the application.
As shown in fig. 4a, the electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (Universal Serial Bus, USB) interface 130, a charge management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, a sensor module 180, keys 190, a motor 191, an indicator 192, a camera 193, a display 194, a subscriber identity module (Subscriber Identification Module, SIM) card interface 195, and the like. The sensor module 180 may include a pressure sensor 180A, a gyro sensor 180B, an acceleration sensor 180E, a distance sensor 180F, a touch sensor 180K, an ambient light sensor 180L, and the like.
It should be understood that the illustrated structure of the embodiment of the present application does not constitute a specific limitation on the electronic device 100. In other embodiments of the application, the electronic device 100 may include more or fewer components than illustrated, or certain components may be combined, or certain components may be split, or the components may be arranged differently, without limitation.
The processor 110 may include one or more processing units, such as: the processor 110 may include an application processor (Application Processor, AP), a modem processor, a graphics processor (Graphics Processing Unit, GPU), an image signal processor (Image Signal Processor, ISP), a controller, a video codec, and the like. The different processing units may be separate devices or may be integrated in one or more processors.
In some embodiments, the graphics processor may construct the first overlay layer based on a first user operation to enable free masking of the screen based on user demand.
In some embodiments, the controller generates operation control signals according to the instruction operation code and the timing signals of the processor 110, and completes the control of instruction fetching and instruction execution to execute the screen display method provided by the embodiment of the application.
In some embodiments, the processor 110 may include one or more interfaces. The interfaces may include an inter-integrated circuit (Inter-Integrated Circuit, I2C) interface, an inter-integrated circuit sound (Inter-Integrated Circuit Sound, I2S) interface, a pulse code modulation (Pulse Code Modulation, PCM) interface, a universal asynchronous receiver/transmitter (Universal Asynchronous Receiver/Transmitter, UART) interface, a mobile industry processor interface (Mobile Industry Processor Interface, MIPI), a general-purpose input/output (General-Purpose Input/Output, GPIO) interface, a subscriber identity module (Subscriber Identity Module, SIM) interface, and/or a universal serial bus (Universal Serial Bus, USB) interface, among others. The USB interface 130 may be used to connect a charger to charge the electronic device 100, or to transfer data between the electronic device 100 and a peripheral device.
A memory may also be provided in the processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache memory. The memory may hold instructions or data that the processor 110 has just used or recycled. If the processor 110 needs to reuse the instruction or data, it can be called directly from the memory. Repeated accesses are avoided and the latency of the processor 110 is reduced, thereby improving the efficiency of the system.
The charge management module 140 is configured to receive a charge input from a charger. The power management module 141 is used for connecting the battery 142, and the charge management module 140 and the processor 110. The power management module 141 receives input from the battery 142 and/or the charge management module 140 to power the processor 110, the internal memory 121, the display 194, the camera 193, the wireless communication module 160, and the like.
The wireless communication function of the electronic device 100 may be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, a modem processor, a baseband processor, and the like.
The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals. Each antenna in the electronic device 100 may be used to cover a single or multiple communication bands. Different antennas may also be multiplexed to improve the utilization of the antennas.
The mobile communication module 150 may provide a solution for wireless communication including 2G/3G/4G/5G, etc., applied to the electronic device 100. The modem processor may include a modulator and a demodulator.
The wireless communication module 160 may provide solutions for wireless communication applied to the electronic device 100, including wireless local area network (Wireless Local Area Networks, WLAN) (e.g., wireless fidelity (Wireless Fidelity, Wi-Fi) network), Bluetooth (BT), global navigation satellite system (Global Navigation Satellite System, GNSS), frequency modulation (Frequency Modulation, FM), near field communication (Near Field Communication, NFC), infrared (IR), and the like.
In some embodiments, antenna 1 and mobile communication module 150 of electronic device 100 are coupled, and antenna 2 and wireless communication module 160 are coupled, such that electronic device 100 may communicate with a network and other devices through wireless communication techniques.
The electronic device 100 implements display functions through a GPU, a display screen 194, an application processor, and the like. The GPU is a microprocessor for image processing, and is connected to the display 194 and the application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering, and in some embodiments may be used to render the first mask layer according to user-specified attribute parameters. Processor 110 may include one or more GPUs that execute program instructions to generate or change display information.
The display screen 194 is used to display images, videos, and the like. The display 194 includes a display panel. The display panel may employ a liquid crystal display (Liquid Crystal Display, LCD), an organic light-emitting diode (Organic Light-Emitting Diode, OLED), an active-matrix organic light-emitting diode (Active-Matrix Organic Light-Emitting Diode, AMOLED), a flexible light-emitting diode (Flex), a Mini-LED, a Micro-OLED, quantum dot light-emitting diodes (Quantum Dot Light Emitting Diodes, QLED), or the like. In some embodiments, the electronic device 100 may include 1 or N display screens 194, N being a positive integer greater than 1.
The electronic device 100 may implement photographing functions through an ISP, a camera 193, a video codec, a GPU, a display screen 194, an application processor, and the like.
The ISP is used to process data fed back by the camera 193. For example, when photographing or taking a video, the shutter is opened, light is transmitted to the camera photosensitive element through the lens, the optical signal is converted into an electrical signal, and the camera photosensitive element transmits the electrical signal to the ISP for processing, so that the electrical signal is converted into an image visible to the naked eye. ISP can also optimize the noise, brightness and skin color of the image. The ISP can also optimize parameters such as exposure, color temperature and the like of a shooting scene. In some embodiments, the ISP may be provided in the camera 193.
The camera 193 is used to capture still images or video. The object generates an optical image through the lens and projects the optical image onto the photosensitive element. The photosensitive element may be a charge coupled device (Charge Coupled Device, CCD) or a Complementary Metal Oxide Semiconductor (CMOS) phototransistor. The photosensitive element converts the optical signal into an electrical signal, which is then transferred to the ISP to be converted into a digital image signal. The ISP outputs the digital image signal to the DSP for processing. The DSP converts the digital image signal into an image signal in a standard RGB, YUV, or the like format. In some embodiments, electronic device 100 may include 1 or N cameras 193, N being a positive integer greater than 1.
Video codecs are used to compress or decompress digital video.
The external memory interface 120 may be used to connect an external memory card, such as a Micro SD card, to enable expansion of the memory capabilities of the electronic device 100.
The internal memory 121 may be used to store computer executable program code that includes instructions. The internal memory 121 may include a storage program area and a storage data area. The storage data area may store data created during use of the electronic device 100 (e.g., video data obtained by photographing, etc.), and the like. In addition, the internal memory 121 may include a high-speed random access memory, a nonvolatile memory, and the like. The processor 110 performs various functional applications of the electronic device 100 and data processing by executing instructions stored in the internal memory 121 and/or instructions stored in a memory provided in the processor.
The electronic device 100 may implement audio functions through an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, an application processor, and the like. Such as music playing, recording, etc.
The pressure sensor 180A is used to sense a pressure signal, and may convert the pressure signal into an electrical signal. In some embodiments, the pressure sensor 180A may be disposed on the display screen 194.
There are various types of pressure sensor 180A, such as resistive pressure sensors, inductive pressure sensors, capacitive pressure sensors, and the like. A capacitive pressure sensor may comprise at least two parallel plates with a conductive material. The capacitance between the electrodes changes when a force is applied to the pressure sensor 180A, and the electronic device 100 determines the strength of the pressure from the change in capacitance. When a touch operation is applied to the display 194, the electronic device 100 detects the intensity of the touch operation according to the pressure sensor 180A. The electronic device 100 may also calculate the location of the touch based on the detection signal of the pressure sensor 180A.
The gyro sensor 180B may be used to determine a motion gesture of the electronic device 100.
The acceleration sensor 180E may detect the magnitude of the acceleration of the electronic device 100 in various directions (typically three axes). The magnitude and direction of gravity may be detected when the electronic device 100 is stationary. It can also be used to identify the posture of the electronic device, and is applied in scenarios such as switching between landscape and portrait screens.
A distance sensor 180F for measuring a distance. The electronic device 100 may measure the distance by infrared or laser.
The ambient light sensor 180L is used to sense ambient light level.
The touch sensor 180K, also referred to as a "touch panel". The touch sensor 180K may be disposed on the display screen 194, and the touch sensor 180K and the display screen 194 form a touch screen, which is also called a "touch screen". The touch sensor 180K is for detecting a touch operation acting thereon or thereabout. The touch sensor may communicate the detected touch operation to the application processor to determine the touch event type. Visual output related to touch operations may be provided through the display 194.
Fig. 4b shows a schematic diagram of interactions between structures in a system architecture of the electronic device 100 according to an embodiment of the present application.
Referring to fig. 4b, the system architecture of the electronic device 100 may include, in order, an application layer 440, an application framework layer 430, a system service layer 420, a kernel layer 410, and a hardware layer 400.
The kernel layer 410 may mask differences between different kernels, and provide basic kernel capabilities for upper layers, including process/thread management, memory management, file system, network management, peripheral management, and the like.
The kernel layer 410 may include a kernel abstraction layer 411 and a hardware driver framework 412.
The kernel abstraction layer 411 (Kernel Abstract Layer, KAL) contains multiple kernels, such as the Linux kernel (Linux Kernel), LiteOS, and a graphics processing unit (Graphics Processing Unit, GPU) kernel, to support selecting an appropriate OS kernel for devices with different resource constraints. By masking multi-kernel differences, the kernel abstraction layer can provide basic kernel capabilities for upper layers, including process/thread management, memory management, the file system, network management, peripheral management, and the like.
It will be appreciated that the GPU described above is used to construct and render video information input by the system.
A hardware driver framework 412 (Hardware Driver Foundation, HDF) is used for providing unified peripheral access capabilities and a driver development and management framework.
The system services layer 420 includes a distributed task schedule 423, distributed data management 422, and a distributed soft bus 421. The system services layer 420 enables interaction between different hardware (sensors, screens, cameras, etc.) on different devices loaded with the same software system through the distributed task schedule 423, the distributed data management 422, and the distributed soft bus 421, so that all devices loaded with the same software system can find different hardware on the distributed soft bus 421.
It will be appreciated that the operating system may be the Harmony™ operating system (HarmonyOS™) or another operating system with a similar system architecture, which is not limited herein.
It can be appreciated that the distributed task schedule 423 can select the most suitable device to run the distributed task according to the capabilities, locations, service running states, and resource usage conditions of different devices and in combination with habits and intentions of users.
The distributed data management 422 may separately manage data storage and service logic on different devices. For example, if device A stores music data but is not equipped with a speaker, while device B is equipped with a speaker, the distributed data management 422 may send the music data stored by device A to device B for playing. It will be appreciated that the user's data is no longer bound to a single physical device, and the business logic is separated from the data storage.
The distributed soft bus 421 allows all devices loaded with the same software system to find different hardware on the distributed soft bus 421. For example, device A may determine via the distributed soft bus 421 that device B is equipped with a speaker.
The application framework layer 430 can mask the variability of the system service layer 420 and provide a unified interface for the application layer 440, facilitating the migration and deployment of application code.
The application framework layer 430 may include a User Interface (UI) framework 431.
It will be appreciated that the UI framework 431 is a UI programming framework, that is, the infrastructure provided to application developers for developing a UI. It mainly includes UI controls, such as buttons and lists; view layout, such as placing or arranging the corresponding UI controls; animation mechanisms, such as animation design and effect presentation; interaction event handling, such as clicking or sliding; and the corresponding programming languages and programming models. From the system-running dimension, the UI programming framework also includes a runtime responsible for the resource loading, UI rendering, event response and the like required when an application executes in the system.
The application layer 440 may include system applications and third-party non-system applications. A system application is a native application of the electronic device 100, and a third-party non-system application is an application other than the native applications of the device.
The application layer 440 includes a full screen covering application 441, a custom covering application 442, a picture covering application 443, and a system user interface 444.
The full screen covering application 441, the custom covering application 442 and the picture covering application 443 can generate different covering layers in response to user operations, thereby freely covering the screen 194 based on user demands.
The system user interface 444 (System User Interface, systemUI), which is a system core application, belongs to the application layer 440, and is used for presenting a corresponding interface to a user to feed back the state of the system and related applications, responding to the execution result of user operation, and the like, and the user can control the displayed corresponding interface to realize interaction with the system through SystemUI.
Graphics subsystem 450 includes an interface layer, a framework layer, and an engine layer.
The interface layer includes an interface service 451, which is an interface for providing graphics primitive development kit (Native Development Kit, NDK) capabilities, such as an interface for drawing tools like OpenGL ES.
The framework layer includes rendering services 452 and display and memory management 453.
The rendering service (Render Service) 452 may be used to provide rendering capabilities, including but not limited to synchronization of UI messages, management of rendering nodes and timing, and management of rendering mechanisms (such as unified rendering and split rendering). The rendering service 452 can convert the control description of the UI framework 431 into drawing-tree information, and then perform optimal-path rendering according to the corresponding rendering policy.
Display and memory management 453 may be used to provide the capability of composite display management and memory management.
The engine layer includes a graphics library 454. Graphics library 454 may include a two-dimensional (2D) graphics library. The 2D graphics library described above may be used to provide a 2D graphics rendering application programming interface (Application Program Interface, API).
In the process of implementing the screen display method provided by the present application, a screen 194 (not shown in the figure) in the hardware layer may detect a touch operation of a user and transmit the detected touch operation signal to the kernel layer 410. The touch operation may be processed as an Input (Input) event via the hardware driver framework 412 in the kernel layer 410 and sent to the event notification subsystem 460 located in the application framework layer 430 and the system services layer 420. The event notification subsystem 460 may distribute the received input event to the system user interface 444 in the application layer 440.
Next, the system user interface 444 may determine the invoked target application based on the input event. In some embodiments, the target applications may include the full screen covering application 441, the custom covering application 442 and the picture covering application 443. For example, when the user's input event is a click on a full-screen covering control, the system user interface 444 may obtain the input event and invoke the corresponding full screen covering application 441 based on it. The full screen covering application may send the region information corresponding to full-screen covering, e.g., screen size information, to the display and memory management 453 by invoking the interface service 451 of the graphics subsystem. The display and memory management 453 generates a buffer queue corresponding to the screen size based on the screen size information, and then sends the buffer queue to the rendering service 452 to generate a full-screen covering layer, which is returned to the display and memory management 453 for composition and sending for display. Here, the display and memory management 453 may composite the full-screen covering layer with the current view layer to obtain the covered user interface. The display and memory management 453 may send the covered user interface to the system user interface 444 by invoking the hardware driver framework 412, and the system user interface 444 may invoke the interface service provided by the hardware driver framework 412 to complete sending for display, so that the screen 194 displays the user interface obtained by compositing the full-screen covering layer with the current view.
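The composition step above can be illustrated with a toy model. This is a hedged sketch, not the graphics subsystem's actual code: a buffer the size of the screen is filled as the full-screen covering layer and composited over the current view layer using a per-pixel "source over" blend.

```python
def make_buffer(width, height, value):
    """A screen-sized buffer filled with a single channel value."""
    return [[value] * width for _ in range(height)]

def composite_over(cover, view, cover_alpha):
    """Per-pixel source-over: result = cover*a + view*(1-a)."""
    h, w = len(view), len(view[0])
    return [[round(cover[y][x] * cover_alpha + view[y][x] * (1 - cover_alpha))
             for x in range(w)] for y in range(h)]

view = make_buffer(4, 4, 200)     # current view layer (gray content)
cover = make_buffer(4, 4, 0)      # black full-screen covering layer
shown = composite_over(cover, view, cover_alpha=1.0)
print(shown[0][0])                # 0: the view is fully covered
```

With `cover_alpha` below 1.0 the same blend yields a semi-transparent covering effect, which corresponds to the transparency attribute parameter of the covering layer.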
In other embodiments, the input event may be distributed to the rendering service 452 in the graphics subsystem 450. The rendering service 452 may determine the attribute parameters of a full-screen covering layer based on the input event and update the attribute parameters of the covering layer accordingly, so as to obtain the target covering layer. The display and memory management 453 may then composite the target covering layer onto the current view layer to obtain the covered user interface, and send the composited user interface to the graphics library 454 to complete the rendering process. Next, the display and memory management 453 may send the covered user interface to the hardware chip 401 of the hardware layer by calling an interface (not shown) provided by the hardware driver framework 412, so that the system user interface 444 can display the covered user interface on the screen 194 by calling an interface (not shown) provided by the hardware driver framework 412.
Based on the structure of the electronic device 100 shown in fig. 4a and the interaction process in the system architecture shown in fig. 4b, the following detailed description of a specific implementation process of a screen display method according to the present application is provided with reference to specific embodiments and related drawings.
The following describes a specific implementation procedure of a screen display method according to an embodiment of the present application through embodiment 1.
Example 1
Fig. 5 shows a flowchart of a screen display method according to an embodiment of the present application. It can be understood that the execution body of each step of the flowchart shown in fig. 5 may be the electronic device 100 described above, or other electronic devices, and the description of the execution body of a single step will not be repeated.
As shown in fig. 5, the interaction flow includes the following steps:
501: A first user operation for waking up the covering mode is detected.
The first user operation is, for example, a preset user operation for waking up the covering mode, for example, a click operation of a preset shortcut control by a user, or a preset shortcut gesture performed by the user. In some embodiments, the electronic device 100 may determine the corresponding coverage mode based on the correspondence of the preset user operation and the coverage mode.
502: It is determined that the first user operation indicates waking up the covering mode.
Illustratively, the covering modes include, but are not limited to, a full-screen covering mode, a custom covering mode and a picture covering mode. Different covering modes can be determined by presetting a plurality of different shortcut controls on the electronic device 100, and different covering modes can also correspond to different preset shortcut gestures. For example, when the first user operation is a double tap on the screen with a knuckle, the full-screen covering mode is woken up; when the first user operation is a triple tap on the screen with a knuckle, the custom covering mode is woken up. Different covering modes can cover more covering scenes, which effectively reduces redundant user operations and makes it convenient for the user to use and master the different covering modes.
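The knuckle-tap mapping described above can be sketched as a lookup table. The gesture encoding and mode names below are assumptions chosen for illustration; only the double-tap/triple-tap correspondence comes from the text.

```python
# Mapping from (gesture type, tap count) to covering mode.
GESTURE_TO_MODE = {
    ("knuckle_tap", 2): "full_screen_cover",  # double knuckle tap
    ("knuckle_tap", 3): "custom_cover",       # triple knuckle tap
}

def resolve_cover_mode(gesture, tap_count):
    """Return the covering mode woken by the gesture, or None."""
    return GESTURE_TO_MODE.get((gesture, tap_count))

print(resolve_cover_mode("knuckle_tap", 2))  # full_screen_cover
print(resolve_cover_mode("knuckle_tap", 3))  # custom_cover
print(resolve_cover_mode("knuckle_tap", 1))  # None -> no covering mode
```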
It is understood that the full-screen covering mode may cover the screen in full screen, the custom covering mode may determine the screen covering layer based on a user operation, and the picture covering mode may call a picture specified by the user to complete the covering of a specified screen area.
It is to be understood that the above three screen covering modes are only examples, and other ways of covering the screen may be included herein, which are not described herein.
It will be appreciated that, referring to fig. 1-2b, in the case where the user desires to turn off the local display mode without revealing other information on the mobile phone screen, the user may click the control 02 through the drop-down menu bar to turn off the local display mode. In this process, the user may well fail to complete the pull-down in one attempt due to a misoperation, resulting in a failure to exit the local display mode. Worse, the position where the finger slides on the screen becomes transparent, so that the screen privacy information the user originally hoped to hide is no longer protected, which deviates from the user's intention.
In order to avoid the problem of failing to exit the covering mode, in some embodiments, the first user operation may be a preset shortcut gesture. For example, the preset shortcut gesture may be the user tapping the screen 194 with a finger joint, or the user clicking the screen 194 a preset number of times at a preset frequency. This avoids the situation in which the pull-down shortcut causes a failure to exit the covering mode, and reduces the situation in which the user needs multiple consecutive operations to start or exit the corresponding covering mode: a single gesture suffices to easily start or exit the corresponding covering mode, effectively simplifying the user operation.
It is to be understood that the preset shortcut gesture may be any manner that can correspondingly determine the covering mode based on the preset shortcut gesture, which is not limited herein.
In some embodiments, the preset shortcut control described above may be provided in a shortcut of the system user interface 444.
Fig. 6 a-6 b are schematic views of a shortcut scenario of the electronic device 100 according to some embodiments of the present application, so as to further describe the preset shortcut control in detail.
Referring to fig. 6a, a plurality of areas may be included in a shortcut of a control center. Multiple controls may be included within region 601. In some embodiments, the preset controls may include a control 6011 and a control 6012. When a user's click operation 60 on control 6012 is detected, referring to FIG. 6b, a screen-covered shortcut menu 602 may be expanded based on the click operation 60.
With continued reference to fig. 6b, if the electronic device 100 detects a click operation of the control 6012 by the user, the shortcut menu 602 of the screen covering function may be expanded based on the click operation. A plurality of preset shortcut controls are provided in the shortcut menu 602. As shown in the figure, the preset shortcut controls may include a control 6021, a control 6022 and a control 6023, wherein the control 6021 may be used for determining the full-screen covering mode, the control 6022 may be used for determining the custom covering mode, and the control 6023 may be used for determining the picture covering mode. If the electronic device 100 detects a click operation of any control in the shortcut menu 602 by the user, the corresponding covering mode may be determined based on the clicked control.
For example, referring to fig. 6a and 6b, if the electronic device 100 detects a click operation of the control 6021 by the user, or detects a click operation of the control 6011 by the user, it may determine that the user starts the full-screen covering mode. A corresponding covering layer can then be generated based on the full-screen covering mode, so as to complete the covering processing of the screen. By providing a plurality of controls for the user to realize a plurality of different covering modes, user operation can be simplified, and more screen covering scenes can be effectively covered, such as protecting privacy information in public places or covering a specified position of the screen during learning, thereby effectively improving the user experience.
503: And creating a covering layer in the covering mode at a preset position according to the determined covering mode.
For example, the electronic device 100 may generate a corresponding covering layer based on the determined covering mode. For example, the electronic device 100 may construct a transparent covering layer after detecting the first user operation, and place the transparent covering layer as the topmost layer, referred to hereinafter as "setting to top". The electronic device 100 may then determine a preset position on the transparent covering layer based on the determined covering mode, so as to generate the covering layer.
FIG. 7 illustrates a schematic view of a full-screen covering scene, according to some embodiments of the application, to describe the full-screen covering mode in further detail.
If the electronic device 100 detects the click operation of the control 6011 by the user, it may determine that the user selects the full-screen covering mode. Referring to fig. 7, a full-screen covering layer 701 is generated based on the full-screen covering mode. Here, upon detecting the click operation of the control 6011 by the user, the full-screen covering layer 701 may be set to black and set as the top layer. Based on the control 6011, the user can complete full-screen covering with one tap, which is convenient to operate and improves the user experience.
504: A second user operation is detected that acts on the masking layer.
It will be appreciated that the second user operation may include an operation to initiate parameter adjustment and an operation to adjust a parameter, and the second user operation acts on the covering layer. For example, the second user operation may be a long-press operation on the covering layer by the user, which starts the modification process of the attribute parameters of the covering layer, so that new attribute parameters can be determined based on subsequent parameter adjustment operations by the user.
505: And determining new attribute parameters according to the detected second user operation.
For example, the second user operation may initiate modification of the attribute parameters of the preset covering layer. The electronic device 100 may then determine the new attribute parameters based on the subsequently detected modification content specified by the second user operation.
In some embodiments, the attribute parameters of the covering layer include, but are not limited to, fill color, size, position, layer overlap order, and transparency.
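The attribute parameters listed above can be grouped into a simple record. A minimal sketch, assuming illustrative field names and default values (none of which are specified by the patent):

```python
# Illustrative covering-layer record holding the attribute parameters listed
# above: fill color, size, position, layer overlap order, and transparency.
from dataclasses import dataclass

@dataclass
class CoveringLayer:
    fill_color: str = "#000000"
    width: int = 0
    height: int = 0
    x: int = 0
    y: int = 0
    z_order: int = 0           # layer overlap order; higher draws on top
    transparency: float = 0.0  # 0.0 = opaque, 1.0 = fully transparent
```

Updating any of these fields in response to a second user operation yields the target covering layer of step 506.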
506: And adjusting the style of the covering layer based on the new attribute parameters to obtain the target covering layer.
For example, the electronic device 100 may determine the style of the updated covering layer based on the new attribute parameters. For example, the fill color, size, position, layer overlap order, transparency, and the like may be freely set based on the user's requirements, so as to obtain the target covering layer. In this way, the user may apply multiple different covering processes to any designated display area of the screen, improving the user experience.
The following describes in further detail the implementation process of updating the cover layer according to the embodiment of the present application with reference to fig. 8a to 8 c.
In some embodiments, it is assumed that the first user operation determines the custom covering mode, and a covering layer A1 is generated based on the custom covering mode. At this time, the electronic device 100 may detect a plurality of second user operations by the user, for example, operation 801 in fig. 8a and operation 802 in fig. 8b, and update the attribute parameters of the covering layer to obtain the target covering layer.
Referring to fig. 8a, the electronic device 100 detects a sliding operation 801 of the cover layer A1 by a user, and displaces the cover layer A1 on the screen. The above operation 801 updates the position parameter of the cover layer A1, and drags the cover layer A1 to the upper part of the screen shown in fig. 8b, so as to obtain the cover layer A2 after updating the position parameter in fig. 8 b.
Referring to fig. 8b, the electronic device 100 detects a user sliding operation 802 on the cover layer A2. The above operation 802 updates the size parameter of the masking layer A2, and reduces the size of the masking layer A2 to obtain the masking layer A3 after updating the size parameter in fig. 8 c.
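The two updates in figs. 8a-8c can be sketched as pure functions over a covering-layer record. This is a sketch under assumed field names, not the patent's implementation:

```python
# Hypothetical helpers: a drag gesture updates the position parameter and a
# shrink gesture updates the size parameter of the covering layer, producing
# a new layer record each time (A1 -> A2 -> A3 in figs. 8a-8c).
def apply_drag(layer, dx, dy):
    return {**layer, "x": layer["x"] + dx, "y": layer["y"] + dy}

def apply_resize(layer, scale):
    return {**layer,
            "width": int(layer["width"] * scale),
            "height": int(layer["height"] * scale)}
```

For instance, dragging a layer 300 pixels upward then halving its size mirrors the sequence of operations 801 and 802.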
In other embodiments, the electronic device 100 may activate a preset parameter control after detecting the second user operation, and provide an entry for the user to adjust the attribute parameter.
The following describes in further detail the implementation process of adjusting the attribute parameters by the user based on the preset parameter control according to the embodiment of the present application with reference to fig. 9a to 9 c.
Referring to fig. 9a, assume that the electronic device 100 detects a first user operation, initiates a custom coverage mode, and generates a coverage layer B based on the custom coverage mode. At this time, the electronic device 100 detects a long press operation 901 of the user on the cover layer B, and starts modification of the attribute parameters of the cover layer B.
Referring to fig. 9b, the electronic device 100 initiates modification of the attribute parameters of the covering layer B, and the control 903 may be activated to provide the user with an entry for modifying the attribute parameters, for example, modifying the transparency of the covering layer B. The electronic device 100 may detect a sliding operation 901 of the control 903 by the user, and may update the transparency of the covering layer B based on the sliding operation 901. For example, when the operation 901 is determined to slide upward, the transparency of the covering layer B may be decreased; when the operation 901 is determined to slide downward, the transparency of the covering layer B may be increased. In this way, the user can update the transparency of the covering layer in real time, covering more application scenes of the covering layer. For example, the answers to exercises can be covered while working through them, and the transparency of the covering layer can be adjusted after solving in order to check the answers.
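The slider behaviour above can be sketched as follows, assuming a normalized transparency in [0, 1] (the normalization and clamping are assumptions for illustration):

```python
# Sketch: an upward slide (positive delta) decreases transparency, a downward
# slide (negative delta) increases it, with the result clamped to [0, 1].
def adjust_transparency(current, slide_delta):
    return min(1.0, max(0.0, current - slide_delta))
```

Clamping keeps repeated slides from pushing the value outside the displayable range.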
Referring to fig. 9c, after the modification of the attribute parameter of the covering layer B is started, if the electronic device 100 detects the touch operation 904 of the user on the specified area 905 on the right side of the screen, corresponding adjustment controls, for example, the transparency adjustment control 906 and the color adjustment control 907 shown in fig. 9c, may be started, so that an entry for modifying the attribute parameter may be provided for the user, and the operation convenience is improved.
Referring to fig. 1 to 2b above, it will be appreciated that, in order to protect privacy information, the existing local display scheme requires that the screen first be covered and then a user operation be detected to display the content of a partial screen area. This requires the user to erase, from memory, the area corresponding to the content to be displayed, which is cumbersome to operate and imposes a memory burden on the user.
In addition, since the control 02 of the local display mode provided by the above screen covering scheme is set in the drop-down menu bar 01, if the user needs to exit the local display mode, the user must operate the drop-down menu bar 01 from the memorized pull-down position while the screen is fully covered by the covering layer 03. Moreover, in the local display mode, the user cannot perform other operations. For example, when the user wants to perform another interactive operation, such as clicking an application to view a message or to reply after viewing it, the user cannot view the message directly by clicking it; the local display mode must first be turned off.
Thus, in still other embodiments, when the transparency of the covering layer is greater than or equal to a preset threshold, the screen function of the portion covered by the covering layer is turned off, preventing the user from mistakenly touching the covered screen content in the covering mode. When the transparency of the covering layer is smaller than the preset threshold, the screen function of the portion covered by the covering layer is turned on; for example, a click operation by the user on an originally covered application icon can be acquired to start the application. When the user wants to interact with the screen area covered by the covering layer, the interactive function of the controls in the covered screen area can be retained while keeping a highly transparent covering layer, so that the user can use the electronic device without being affected by the covering mode, making user operation more convenient. This prevents the user from mistakenly touching the screen function of the covered portion in usage scenes where a high-transparency target covering layer is applied, and ensures that a low-transparency target covering layer does not affect the user's use of the electronic device 100, improving the user experience.
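The threshold rule stated in this paragraph can be sketched as follows. The threshold value 0.5 is an assumption; the text only specifies "a preset threshold", and the rule is encoded exactly as literally stated:

```python
TRANSPARENCY_THRESHOLD = 0.5  # assumed value; the text only says "preset threshold"

def covered_area_touch_enabled(transparency):
    # As literally stated above: at or above the threshold, the covered
    # portion's screen function is turned off; below it, it is turned on.
    return transparency < TRANSPARENCY_THRESHOLD
```

A real implementation would route touch events past the covering layer (or swallow them) based on this predicate.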
507: And synthesizing the target covering layer and the layer of the current view to obtain the covered user interface.
For example, the target covering layer and the layer of the current view may be composited into the covered user interface using the rendering service 452.
It can be appreciated that the compositing of the target covering layer and the layer of the current view may be performed according to the attribute parameters of the target covering layer. For example, the target covering layer and the layer of the current view may be composited based on the layer overlap order.
Referring to fig. 10, a covered user interface is obtained by setting the target covering layer on top and overlaying it on the current view layer.
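The compositing order described above can be sketched as a sort by the layer overlap order (field and layer names are assumed for illustration):

```python
# Hypothetical compositor step: layers are drawn bottom-to-top in ascending
# z-order, so the top-set target covering layer is drawn last, over the view.
def draw_order(layers):
    return [layer["name"] for layer in sorted(layers, key=lambda l: l["z"])]
```

With the target covering layer assigned the highest z value, it always ends up on top of the current view layer.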
508: And displaying the covered user interface.
Illustratively, the electronic device 100 may send the covered user interface to the system user interface 444 through the rendering service 452, so that the covered user interface is displayed on the screen 194, so as to achieve free coverage of the screen by the user, and improve the user experience.
509: The first user operation is again detected and the covering mode is exited.
For example, when the electronic device 100 detects the first user operation again, the covering mode may be exited and the covering layer is no longer displayed. The first user operation here may be the same as the first user operation detected in step 501. For example, a two-finger tap on the screen may be preset as the first user operation: when the electronic device 100 detects the user tapping the screen with two fingers for the first time, the corresponding screen covering mode may be started, and after performing steps 501 to 506, the electronic device 100 may obtain the target covering layer and display it on the screen 194. When the electronic device 100 detects the user tapping the screen with two fingers a second time, the covering mode may be exited and the covering layer is no longer displayed, so that the other controls on the screen can respond to user operations without interference. In this way, the target covering layer does not affect the user's subsequent use of the electronic device 100.
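The toggle behaviour of step 509 can be sketched with a small controller. The class and attribute names are illustrative, not the patent's:

```python
# Sketch of step 509's toggle: the same preset first user operation that wakes
# a covering mode exits it when detected again.
class CoveringModeController:
    def __init__(self):
        self.active_mode = None  # None means no covering layer is displayed

    def on_first_user_operation(self, mode):
        # first detection starts the mode; a repeat of the same gesture exits it
        self.active_mode = None if self.active_mode == mode else mode
        return self.active_mode
```

When `active_mode` returns to `None`, the covering layer is removed and ordinary controls receive touch events again.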
It can be understood that, based on the implementation flow of steps 501 to 509, the screen display method provided by the embodiment of the present application determines the corresponding covering mode by detecting the first user operation that wakes up the covering mode, and creates the covering layer of that covering mode at a preset position; when a second user operation acting on the covering layer is detected, new attribute parameters are determined according to the second user operation, and the style of the covering layer is adjusted based on the new attribute parameters to obtain the target covering layer; the target covering layer and the layer of the current view are composited into the covered user interface; and when the first user operation is detected again, the covering mode is exited. On the one hand, providing the user with a plurality of selectable screen covering modes reduces redundant user operations for generating the covering layer. On the other hand, the user can set the position, size and other parameters of the covering layer as required, so that privacy information can be protected without locking the screen and without affecting normal use of the electronic device 100, covering more screen covering scenes.
The specific implementation of the steps 501 to 503 is described in detail below with reference to fig. 11.
FIG. 11 is a flow chart illustrating an implementation of generating a masking layer according to a masking pattern, according to an embodiment of the present application.
As shown in fig. 11, the process may include the steps of:
1101: the system user interface 444 listens for a click event corresponding to the first user operation.
It is appreciated that the system user interface 444 may monitor for a click event corresponding to the first user operation. In some embodiments, referring to fig. 6b above, when a user's click operation on control 6021, control 6022, or control 6023 is detected, system user interface 444 may monitor in real time for a click event corresponding to the click operation and may respond in real time to the click event. For example, when a click event corresponding to a click operation of the control 6021 by the user is monitored, a first call instruction may be sent to the full-screen covering application to start a full-screen covering mode, and a covering layer corresponding to the full-screen covering mode is generated.
1102: The system user interface 444 determines that the click event is to initiate a full screen coverage mode.
Illustratively, referring to FIG. 6b above, when the user clicks on control 6021, system user interface 444 may determine that the click event is to initiate full screen coverage mode.
1103: The system user interface 444 sends a first call instruction to the full screen hiding application 441.
Illustratively, the system user interface 444 sends a first call instruction to the full-screen covering application 441 to call the full-screen covering application 441 such that the full-screen covering application 441 can generate a covering layer in full-screen covering mode based on the call instruction.
1104: The full-screen covering application 441 sends area information corresponding to the screen size to the display and memory management 453.
Illustratively, the full-screen covering application 441 sends area information corresponding to the screen size to the display and memory management 453, so as to generate a full-screen covering layer consistent with the screen size.
1105: The display and memory management 453 generates a black covering layer of the same size as the screen based on the full-screen covering instruction.
Illustratively, the display and memory management 453 may generate a Buffer Queue (Buffer Queue) corresponding to the screen size based on the full-screen-cover instruction sent by the full-screen-cover application 441, so that the rendering service 452 may further process the Buffer Queue to obtain the full-screen-cover layer.
It will be appreciated that a buffer queue is a buffer used in accordance with a queue structure that may be used to store data information, such as for constructing a mask layer.
It will be appreciated that the buffer queue indicates a cover layer size that is consistent with the screen size in order to achieve a full screen cover.
1106: The display and memory management 453 sends a buffer queue corresponding to the screen size to the rendering service 452.
Illustratively, the rendering service 452 is configured to generate a covering layer corresponding to the covering mode based on the buffer queue, so the display and memory manager 453 sends the buffer queue corresponding to the screen size to the rendering service 452 for rendering processing, so as to obtain a full-screen covering layer.
1107: Rendering service 452 fills the buffer queue corresponding to the screen size with black to obtain a full-screen covering layer.
Illustratively, the rendering service 452 may construct a top full-screen transparent layer according to a buffer queue corresponding to a screen size, and fill black into the top full-screen transparent layer to implement a process of covering the full screen.
In some embodiments, referring to fig. 7 above, the full-screen covering application 441 may generate a black covering layer 701 of the same size as the screen, and the covering layer is set as the top layer, so as to cover the entire screen.
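Steps 1104-1107 can be sketched with a plain RGBA pixel grid standing in for the buffer queue. This is a simplification; the real path allocates a Buffer Queue and fills it through the rendering service 452:

```python
# Simplified stand-in for steps 1104-1107: allocate a buffer matching the
# screen size and fill every pixel with opaque black to form the full-screen
# covering layer.
def make_full_screen_cover(width, height, rgba=(0, 0, 0, 255)):
    return [[rgba] * width for _ in range(height)]  # row-major pixel grid
```

The resulting grid has one row per screen row and one RGBA tuple per pixel, matching the requirement that the covering layer size be consistent with the screen size.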
1108: Rendering service 452 sends a full screen overlay layer to display and memory management 453.
It will be appreciated that rendering service 452 sends a full-screen overlay to display and memory management 453 so that the full-screen overlay can be composited with the current view and displayed on screen 194.
1109: The display and memory management 453 composites and displays the full-screen covering layer.
Illustratively, the display and memory management 453 acquires the full-screen covering layer, so that the full-screen covering layer can be composited with the layer of the current view and displayed on the screen, achieving the covering processing of the entire screen.
1110: The system user interface 444 determines that the click event is the initiation of a custom cover mode and obtains the user-determined region information.
Illustratively, referring to FIG. 6b above, when the user clicks on control 6022, system user interface 444 may determine that the click event is the initiation of the custom cover mode.
It can be appreciated that, in the custom covering mode, the user can freely set parameters such as the shape and size of the covering layer. For example, a transparent top layer may be created after the custom covering mode is started; a screen area designated by the user is then determined and used as the area of the covering layer, and the preset transparent layer is filled with color within that designated screen area.
It will be appreciated that the parameters of the masking layer in the custom masking mode described above may be freely set based on user operation, such as size, overlay order, fill color, shape, transparency, etc.
1111: The system user interface 444 sends a second call instruction and region information to the custom cover application 442.
Illustratively, the system user interface 444 sends a second call instruction and region information to the custom cover application 442 to call the custom cover application 442 to generate a custom cover layer according to the region information.
1112: Custom cover application 442 sends the region information to display and memory management 453.
Illustratively, the custom cover application 442 sends the region information to the display and memory manager 453 to generate a buffer queue corresponding to the region information, so as to generate a custom cover layer, and implement local cover to the screen.
1113: The display and memory management 453 generates a buffer queue corresponding to the region information.
Illustratively, the display and memory management 453 generates a Buffer Queue corresponding to the region information, for sending to the rendering service 452 to generate the custom covering layer.
It will be appreciated that a buffer queue is a buffer used in accordance with a queue structure that may be used to store data information, such as for constructing a mask layer.
1114: The display and memory management 453 sends the buffer queue corresponding to the region information to the rendering service 452.
Illustratively, the display and memory management 453 sends the buffer queue corresponding to the region information to the rendering service 452, so that the rendering service 452 can determine, on a transparent designated layer constructed according to the buffer queue, the specified screen region corresponding to the region information, and can construct the corresponding covering layer for the specified screen region.
1115: The rendering service 452 generates the corresponding custom covering layer based on the buffer queue corresponding to the region information.
Illustratively, the rendering service 452 determines the specified screen region corresponding to the region information based on the buffer queue, so as to generate a custom covering layer of corresponding size and shape for the specified screen region.
FIG. 12 illustrates a schematic diagram of a custom mask layer, according to some embodiments of the application. The implementation of generating custom mask layers is further described below in conjunction with FIG. 12.
Referring to fig. 12, after the custom cover mode is initiated, a user sliding operation 1201 on the screen is detected, and a custom cover layer may be generated based on the sliding operation 1201. For example, the shape may be arbitrarily drawn with the position where the user's finger falls as a start point and the position where the finger lifts as an end point, and the preset color is filled, thereby generating the cover layer C.
It may be appreciated that the user operation of drawing the covering layer pattern in the custom covering mode described above may be a preset gesture operation. For example, the user slides a finger joint across the screen, and the electronic device 100 may identify the positions where the user's finger joint lands and lifts. As another example, the user may define a start point and an end point using a double-click operation, and arbitrarily draw a shape between the start point and the end point to generate the covering layer.
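Deriving a covering region from the gesture's landing and lifting points can be sketched as follows. A bounding rectangle is assumed here for simplicity; the text allows arbitrarily drawn shapes:

```python
# Sketch: build a covering region from the finger-down (start) and finger-lift
# (end) positions of the drawing gesture, as a bounding rectangle.
def region_from_gesture(start, end):
    (x0, y0), (x1, y1) = start, end
    return {"x": min(x0, x1), "y": min(y0, y1),
            "width": abs(x1 - x0), "height": abs(y1 - y0)}
```

The region works regardless of drag direction, since the minimum corner and absolute extents are taken.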
1116: The rendering service 452 sends the custom covering layer to the display and memory management 453.
It will be appreciated that the rendering service 452 sends the custom covering layer to the display and memory management 453, so that the custom covering layer can be composited with the layer of the current view and displayed on the screen 194.
1117: The display and memory management 453 composites and displays the custom covering layer.
Illustratively, the display and memory management 453 acquires the custom covering layer, so that the custom covering layer can be composited with the layer of the current view and displayed on the screen, achieving the custom local covering processing of the screen.
1118: The system user interface 444 determines that the click event is to initiate a picture mask mode.
Illustratively, referring to FIG. 6b above, when the user clicks on control 6023, system user interface 444 may determine that the click event is to initiate a picture mask mode.
1119: The system user interface 444 takes a picture of the user's selection.
For example, the system user interface 444 may take a user-selected picture for populating the overlay layer.
It is to be understood that the above pictures may be local pictures of the electronic device 100 or network pictures, which are not limited herein.
1120: The system user interface 444 sends a third call instruction and a picture to the picture covering application 443.
Illustratively, the system user interface 444 sends a third call instruction to the picture mask application 443 to call the picture mask application 443 and sends the user-selected picture to the picture mask application 443 so that the picture mask application 443 can generate a corresponding buffer queue based on the picture to facilitate the rendering service 452 to populate the mask layer with the user-selected picture.
1121: The picture covering application 443 sends the picture to the display and memory management 453.
Illustratively, the picture covering application 443 sends the picture to the display and memory management 453, so that the display and memory management 453 can generate a corresponding buffer queue based on the picture, in order to generate the picture covering layer.
1122: The display and memory management 453 generates a buffer queue corresponding to the picture.
It can be understood that the buffer queue includes picture data, and the display and memory management 453 can fill a preset transparent top layer based on the picture data to obtain a picture covering layer.
1123: The display and memory management 453 sends the buffer queue corresponding to the picture to the rendering service 452.
Illustratively, the display and memory management 453 sends the buffer queue corresponding to the picture to the rendering service 452, in order to generate the corresponding picture covering layer.
1124: The rendering service 452 generates the corresponding picture covering layer based on the buffer queue corresponding to the picture.
Illustratively, the rendering service 452 may fill a transparent top-set layer based on the picture data contained in the buffer queue, to generate the corresponding picture covering layer.
Fig. 13 illustrates a schematic diagram of a picture masking layer, according to some embodiments of the application.
Referring to fig. 13, after the picture covering mode is started, a picture may be filled into the transparent top layer to generate a picture covering layer D, so as to achieve a picture covering effect.
It will be appreciated that the pictures described above may be user-specified pictures, such as photos within an album, web-downloaded pictures, and the like. The pictures may also be preset pictures, which are not limited herein.
It is understood that the transparent designated layer may be a transparent layer at a preset position, for example, a transparent layer preset at the top of the screen with the same size as the screen. When the picture specified by the user or the preset picture is larger than the screen size, the filled picture may be partially displayed on the screen, or the filled picture may be scaled down in equal proportion to the screen size to complete the filling process, thereby obtaining the picture covering layer.
It will be appreciated that the filling of the pictures described above may enable free stretch filling based on the needs of the user, without limitation.
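The equal-proportion downscaling mentioned above can be sketched as follows (a sketch only; the free stretch-fill variant would instead use independent per-axis scales):

```python
# Sketch of equal-proportion scaling: a picture larger than the screen is
# shrunk by the smaller axis ratio so the whole picture fits on screen;
# pictures already within the screen size are left unchanged.
def fit_to_screen(pic_w, pic_h, screen_w, screen_h):
    scale = min(screen_w / pic_w, screen_h / pic_h, 1.0)
    return int(pic_w * scale), int(pic_h * scale)
```

Using the minimum of the two axis ratios preserves the picture's aspect ratio while guaranteeing neither dimension exceeds the screen.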
1125: The rendering service 452 sends the picture covering layer to the display and memory management 453.
It will be appreciated that the rendering service 452 sends the picture covering layer to the display and memory management 453, so that the picture covering layer can be composited with the layer of the current view and displayed on the screen 194.
1126: The display and memory management 453 composites and displays the picture covering layer.
Illustratively, the display and memory management 453 acquires the picture covering layer, so that the picture covering layer can be composited with the layer of the current view and displayed on the screen, achieving the picture covering processing of the screen.
It can be understood that, based on the implementation flow of steps 1101 to 1126, the screen display method provided by the embodiment of the present application monitors, through the system user interface 444, the click event corresponding to the first user operation, and determines the corresponding covering mode according to the click event; different covering applications are called based on the different covering modes, so that the display and memory management 453 generates the corresponding buffer queues; the rendering service 452 can then generate the corresponding covering layers according to the data in the different buffer queues; and finally, the display and memory management 453 composites and displays the covering layers of the different covering modes. On the one hand, a plurality of selectable screen covering modes can be provided for the user to cover more screen covering scenes; on the other hand, different responses can be made based on the different covering modes, simplifying the operation flow so that the user can quickly become familiar with the screen covering operation.
It can be appreciated that the covering layer described above may also be a default picture or a user-defined picture. The specific implementation process of generating the corresponding covering layer in the different covering modes is described in detail below with reference to the flowchart in fig. 14.
FIG. 14 is a flow chart illustrating an implementation of yet another screen display method according to some embodiments of the application. It can be appreciated that the execution body of each step in the flowchart shown in fig. 14 may be the electronic device 100 described above, or other electronic devices, and the description of the execution body of a single step will not be repeated.
As shown in fig. 14, the interaction flow includes the steps of:
1401: a first user operation to wake up the cover mode is detected.
1402: A determination is made that the first user operation indicates a wake-up cover mode.
It is to be understood that the steps 1401 to 1402 are identical to the specific implementation procedures of the steps 501 to 502, and will not be described herein.
1403: The full screen cover mode is awakened.
Illustratively, referring to FIG. 6b above, when the user clicks on control 6021, system user interface 444 may determine that the click event is to initiate full screen coverage mode.
1404: A black screen covering layer is created based on the full-screen covering mode.
For example, to achieve full-screen coverage, the electronic device 100 may create a black screen covering layer whose dimensions are consistent with the screen dimensions.
It will be appreciated that the color black above is merely exemplary; the fill color of the full-screen cover may be freely set by the user, which is not limited herein.
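Step 1404 can be sketched as below. The function name, dictionary structure, and `TOP_Z` value are assumptions made for illustration; the point is that the layer matches the screen size and defaults to black while remaining user-configurable.

```python
# Hypothetical sketch of step 1404: a covering layer sized to the screen,
# filled black by default but with a user-settable fill color.
TOP_Z = 2**31 - 1  # assumed "always on top" z-order value

def create_full_screen_cover(screen_width, screen_height, color=(0, 0, 0)):
    return {
        "region": (0, 0, screen_width, screen_height),  # matches the screen size
        "color": color,                                  # black unless the user chooses otherwise
        "z": TOP_Z,                                      # placed above the current-view layer
    }

layer = create_full_screen_cover(1080, 2340)
assert layer["region"] == (0, 0, 1080, 2340)
assert layer["color"] == (0, 0, 0)  # black by default
```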
1405: And carrying out full-screen coverage on the current screen.
Referring to fig. 7 above, the electronic device 100 generates a full-screen covering layer 701 based on the full-screen covering mode. Here, upon detecting the user's click operation on the control 6011, the electronic device 100 may set the full-screen covering layer 701 to black and place it as the top layer. Based on the control 6011, the user can complete full-screen coverage with a single tap, which is convenient to operate and improves the user experience.
1406: The custom cover mode is awakened.
Illustratively, referring to FIG. 6b above, when the user clicks on control 6022, system user interface 444 may determine that the click event is the initiation of the custom cover mode.
It can be appreciated that in the custom covering mode, the user can freely set parameters such as the shape, the size, etc. of the covering layer.
1407: A transparent layer is created based on the custom covering mode.
Illustratively, the electronic device 100 starts the custom covering mode and creates a transparent top-level layer, so as to facilitate generating a custom covering layer on the transparent designated layer.
1408: The trajectory touched by the user's finger is identified, and a covering layer is generated.
Illustratively, the electronic device 100 identifies the trajectory touched by the user's finger to determine the screen area designated by the user, takes that designated area as the region of the covering layer, and fills the preset transparent layer with color within it.
Referring to fig. 12, after the custom covering mode is started, the electronic device 100 detects a sliding operation 1201 of the user on the screen, and may generate a custom covering layer based on the sliding operation 1201. For example, a shape may be drawn arbitrarily with the position where the user's finger lands as the start point and the position where the finger lifts as the end point, and filled with the preset color, thereby generating the covering layer C.
It may be appreciated that the user operation of drawing the covering layer pattern in the above custom covering mode may be a preset gesture operation; for example, the user slides on the screen with a knuckle, and the electronic device 100 may identify the position where the knuckle lands and the position where it lifts. For another example, the user may define the start point and the end point using a double-click operation, and draw an arbitrary shape between them to generate the covering layer.
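Deriving a covering region from a finger trajectory can be sketched as follows. The patent allows an arbitrarily drawn shape; this example simplifies to the bounding rectangle of the sampled touch points between finger-down and finger-up, and the function name is an invention of the example.

```python
# Simplified sketch of step 1408: derive a rectangular covering region from
# the touch samples recorded between finger-down and finger-up.
def cover_region_from_trajectory(points):
    # points: list of (x, y) touch samples along the finger's path
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return (min(xs), min(ys), max(xs) - min(xs), max(ys) - min(ys))

# A roughly rectangular stroke drawn by the user's finger.
trajectory = [(120, 300), (400, 320), (380, 600), (140, 580)]
assert cover_region_from_trajectory(trajectory) == (120, 300, 280, 300)
```

A production implementation would instead rasterize the closed trajectory itself (e.g. a filled path) so the covering layer matches the drawn shape exactly, not just its bounding box.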
1409: Partial screen coverage is performed on the current screen.
Illustratively, the electronic device 100 generates a custom covering layer in the custom covering mode, which can achieve partial screen coverage of the current screen.
1410: The picture covering mode is awakened.
For example, referring to fig. 6b above, when the user clicks on control 6023, the electronic device 100 can determine that the click event is to initiate the picture covering mode.
1411: Based on the picture covering mode, a covering layer is generated using the picture as the top layer.
For example, the electronic device 100 may obtain a user-selected picture for filling the top-level covering layer. It is to be understood that the above picture may be a local picture of the electronic device 100 or a network picture, which is not limited herein.
It is understood that the picture may be a picture specified by the user, such as a picture in an album in the electronic device 100, or a picture downloaded locally by the electronic device 100 via a network connection, and so on. The picture may also be a preset picture, for example, a picture in a preset picture set, which is not limited herein.
It is understood that the transparent designated layer may be a transparent layer at a predetermined position, for example, a transparent layer preset at the top of the screen with the same size as the screen. When the picture specified by the user or the preset picture is larger than the screen size, the electronic device 100 may display part of the filled picture on the screen, or may proportionally scale the filled picture down to the screen size to complete the filling process, so as to obtain the picture covering layer.
It will be appreciated that the above manner of filling the transparent layer with the picture may be implemented based on the needs of the user; for example, the picture may be freely stretched before filling the top transparent layer, so that the picture covering layer displays part and/or all of the picture, which is not limited herein.
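The proportional scale-down described above can be sketched as below; the function name is hypothetical, and the rule is simply to pick the largest uniform scale that fits the screen without ever enlarging the picture.

```python
# Sketch of the picture-fitting rule in step 1411: pictures larger than the
# screen are scaled down proportionally; smaller pictures are left as-is.
def fit_picture_to_screen(pic_w, pic_h, screen_w, screen_h):
    scale = min(screen_w / pic_w, screen_h / pic_h, 1.0)  # never upscale
    return (round(pic_w * scale), round(pic_h * scale))

# A 4000x3000 photo shrinks to fit a 1080x2340 screen, keeping its ratio.
assert fit_picture_to_screen(4000, 3000, 1080, 2340) == (1080, 810)
# A small picture keeps its original size.
assert fit_picture_to_screen(200, 200, 1080, 2340) == (200, 200)
```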
1412: Partial screen coverage is performed on the current screen.
Illustratively, the electronic device 100 generates a picture covering layer in the picture covering mode, which can achieve partial screen coverage of the current screen.
1413: A second user operation acting on the covering layer is detected.
1414: Attribute parameters of the covering layer are adjusted based on the second user operation to obtain a target covering layer.
1415: The target covering layer and the layer of the current view are composited to obtain the covered user interface.
1416: The covered user interface is displayed.
1417: The first user operation is again detected and the covering mode is exited.
It is to be understood that the steps 1413 to 1417 are identical to the specific implementation of the steps 504 to 509, and are not described herein.
It can be appreciated that, based on the implementation flow of steps 1401 to 1417, in the screen display method provided by the embodiment of the present application, the electronic device 100 determines three different covering modes through the first user operation and generates different covering layers based on the different covering modes. The electronic device 100 can respond differently based on the different covering modes, thereby realizing a covering function that flexibly generates covering layers based on user requirements; this simplifies the operation flow, allows the user to quickly become familiar with the screen covering operation, and conveniently covers more screen covering scenarios.
Specific embodiments of the screen display method according to other embodiments of the present application will be described in detail below with reference to Example 2.
Example 2
Fig. 15 shows a flowchart of another screen display method according to an embodiment of the present application. It can be understood that the execution body of each step in the flowchart shown in fig. 15 may be the electronic device 100 described above, or other electronic devices, and the description of the execution body of a single step will not be repeated.
As shown in fig. 15, the interaction flow includes the steps of:
1501: A preset gesture operation for waking up a system shortcut is detected.
Illustratively, the preset gesture operation described above is used to wake up a system shortcut, such as a screenshot shortcut. The preset gesture operation may be, for example, tapping the screen, long-pressing the screen, or a similar gesture. When the electronic device 100 detects the preset gesture operation, the electronic device 100 may determine a corresponding click event based on the preset gesture operation. In this manner, the electronic device 100 invokes the shortcut interface of the corresponding system shortcut in response to the click event, thereby combining the covering function with other preset system shortcuts so that the user can invoke a multi-function shortcut with a single gesture. It is to be appreciated that the above multi-function shortcut includes, but is not limited to, a shortcut for the screen covering function.
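The gesture-to-shortcut correspondence can be sketched as a simple lookup. The gesture and shortcut names below are invented for illustration, following the example that tapping the screen twice with a knuckle evokes the screenshot shortcut.

```python
# Hypothetical mapping from preset gesture operations to system shortcuts.
GESTURE_SHORTCUTS = {
    "knuckle_double_tap": "screenshot",
    "knuckle_closed_loop": "screenshot_region",
    "long_press": "share",
}

def dispatch(gesture):
    # Resolve a detected gesture to its system shortcut, or None if unmapped.
    return GESTURE_SHORTCUTS.get(gesture)

assert dispatch("knuckle_double_tap") == "screenshot"
assert dispatch("three_finger_swipe") is None  # unmapped gestures do nothing
```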
1502: In response to the preset gesture operation, the corresponding system shortcut interface is invoked.
It is appreciated that the invoked system shortcut may be a preset system shortcut within the electronic device 100, such as a screenshot shortcut, a sharing shortcut, or a shooting shortcut. By way of example, the electronic device 100 may further simplify operations by coupling the screen covering control into a preset system shortcut interface, thereby enhancing the user's interaction experience with the electronic device 100.
1503: A confirmation operation by the user on the covering control in the system shortcut interface is detected, and the custom covering mode is started.
Illustratively, the electronic device 100 sets, in the system shortcut interface, a control for starting the covering mode, which provides an entry for the user to start the screen covering function within the system shortcut interface, so that the electronic device 100 may invoke the corresponding covering application, such as the custom covering application 442.
FIG. 16 illustrates a schematic diagram of a screenshot shortcut according to some embodiments of the application. The initiation of the cover mode within the system shortcut interface is described in detail below in connection with FIG. 16.
Referring to fig. 16, the screenshot shortcut may be initiated after a preset gesture operation is detected; for example, when the user is detected sliding on the screen with a knuckle and the sliding track is closed end to end, the electronic device 100 may initiate the screenshot shortcut and display the screenshot shortcut interface 1600. Within this interface 1600, a plurality of controls are provided, including control 1601. The control 1601 is a covering mode start control; if the electronic device 100 detects a click operation of the control 1601 by the user, the custom covering mode is started, so as to generate a covering layer.
1504: A region to be covered, corresponding to the region indicated by the preset gesture operation, is determined.
For example, the electronic device 100 may determine the region to be covered according to an operation track corresponding to a preset gesture operation. In some embodiments, with continued reference to fig. 16 above, when the detected preset gesture operation circles out the region 1602 to be covered, the electronic device 100 may send the region information of the region 1602 to be covered to the rendering service 452 after detecting the confirmation operation of the control 1601 by the user, so as to generate a target covering layer with the same size and the same shape as the region 1602 to be covered.
1505: A target covering layer is created at the region to be covered.
For example, the electronic device 100 may take the boundary of the area to be covered as the boundary of the target cover layer to create the target cover layer. In some embodiments, the electronic device 100 may send the data information corresponding to the to-be-covered region to the custom coverage application 442, and generate, by the custom coverage application 442, a buffer queue corresponding to the to-be-covered region, so that the rendering service 452 may generate the target coverage layer based on the buffer queue corresponding to the to-be-covered region.
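Step 1505 can be sketched as below. The dictionary structure and the single-entry `buffer_queue` are simplified stand-ins invented for the example; in the described flow the buffer queue would be produced by the custom covering application 442 and consumed by the rendering service 452.

```python
# Hypothetical sketch of step 1505: the boundary of the region to be covered
# becomes the boundary of the target covering layer, and a buffer-queue entry
# describing that layer is produced for the rendering service.
def create_target_cover_layer(region_to_cover, color=(0, 0, 0)):
    x, y, w, h = region_to_cover
    layer = {"boundary": (x, y, w, h), "color": color}
    buffer_queue = [layer]  # one pending buffer describing the layer
    return layer, buffer_queue

layer, queue = create_target_cover_layer((50, 80, 200, 120))
assert layer["boundary"] == (50, 80, 200, 120)  # same size and shape as the region
assert queue[0] is layer
```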
1506: The target covering layer and the layer of the current view are composited to obtain the covered user interface.
It is to be understood that the above step 1506 is consistent with the specific implementation of the step 507, which is not described herein.
1507: The covered user interface is displayed.
It is to be understood that the above step 1507 is consistent with the implementation of the step 508, and will not be described herein.
1508: The preset gesture operation is detected again, and the custom covering mode is exited accordingly.
For example, when the electronic device 100 detects the preset gesture operation a second time, the target covering layer is deleted and no longer displayed, and the custom covering mode is exited, so that other controls on the screen can respond to user operations again, avoiding operational interference.
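The toggle behavior of steps 1503 to 1508 can be sketched with a small state machine; the class and method names are invented for illustration.

```python
# Sketch of the enter/exit toggle: the first preset gesture creates the
# target covering layer, the same gesture again deletes it and exits the
# custom covering mode so underlying controls respond to touch again.
class CoverModeController:
    def __init__(self):
        self.active = False
        self.cover_layer = None

    def on_preset_gesture(self, region):
        if not self.active:
            self.active = True
            self.cover_layer = {"region": region}  # layer created on entry
        else:
            self.active = False
            self.cover_layer = None                # layer deleted on exit

ctl = CoverModeController()
ctl.on_preset_gesture((0, 0, 300, 200))
assert ctl.active and ctl.cover_layer is not None   # mode entered, layer shown
ctl.on_preset_gesture((0, 0, 300, 200))
assert not ctl.active and ctl.cover_layer is None   # mode exited, layer removed
```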
It can be appreciated that, based on the implementation flow of steps 1501 to 1508, in the screen display method provided by the embodiment of the present application, the electronic device 100 invokes the corresponding shortcut interface in response to the detected preset gesture operation; detects the user's confirmation operation on the covering control in the shortcut interface and starts the custom covering mode; determines the region to be covered based on the preset gesture operation and generates a target covering layer from it; composites the target covering layer with the current view layer to form the covered interface; and, after the preset gesture operation is detected again, exits the custom covering mode accordingly. In this way, the screen covering function is coupled with other shortcuts in the same shortcut interface, a more convenient entry to the screen covering mode is provided for the user, the user can grasp the screen covering operation more easily, and the interactive experience of the user is improved.
Reference in the specification to "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one example implementation or technique according to the disclosure. The appearances of the phrase "in one embodiment" in various places in the specification are not necessarily all referring to the same embodiment.
The present disclosure also relates to an operating device for performing the operations herein. The apparatus may be specially constructed for the required purposes, or it may comprise a general-purpose computer selectively activated or reconfigured by a computer program stored in the computer. Such a computer program may be stored in a computer-readable medium, such as, but not limited to, any type of disk including floppy disks, optical disks, CD-ROMs, and magneto-optical disks, read-only memories (ROMs), random access memories (RAMs), EPROMs, EEPROMs, magnetic or optical cards, application-specific integrated circuits (ASICs), or any type of media suitable for storing electronic instructions, each of which may be coupled to a computer system bus. Furthermore, the computers referred to in the specification may include a single processor or may be architectures employing multiple processors for increased computing power.
The processes and displays presented herein are not inherently related to any particular computer or other apparatus. Various general-purpose systems may also be used with programs in accordance with the teachings herein, or it may prove convenient to construct more specialized apparatus to perform one or more method steps. The structure for a variety of these systems is discussed in the following description. In addition, any particular programming language sufficient to practice the techniques and embodiments of the present disclosure may be used. Various programming languages may be used to implement the screen display methods of the present disclosure, as discussed herein.
Additionally, the language used in the specification has been principally selected for readability and instructional purposes, and may not have been selected to delineate or circumscribe the disclosed subject matter. Accordingly, the present disclosure is intended to be illustrative, but not limiting, of the scope of the concepts discussed herein.
