Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention. Obviously, the described embodiments are only a part, rather than all, of the embodiments of the present invention. All other embodiments derived by a person skilled in the art from the embodiments given herein without creative effort shall fall within the protection scope of the present invention.
Referring to fig. 1, fig. 1 is a flowchart of a display control method according to an embodiment of the present invention. The display control method may be applied to a terminal device including a first screen and a second screen. As shown in fig. 1, the method includes the following steps:
Step 101: acquiring a first image displayed on the first screen.
In this embodiment, the terminal device may be a flexible-screen terminal, and the flexible screen may be bent by a bending operation to form the first screen and the second screen. The first image may be an image generated by a photographing operation, an image browsed after an album function is started, or a preview image captured while the camera runs a shooting preview; that is, the first image includes a preview image collected by a camera of the first screen or a picture shot by the camera of the first screen.
For example, referring to fig. 4, the terminal device 400 includes a first screen 401, a camera 403, and a shooting control area 404, and a first image 4011 is displayed on the first screen 401. Specifically, after the camera 403 is aimed at an object, the preview image displayed on the shooting preview interface of the first screen 401 may be taken as the first image 4011. It should be noted that, after the object at which the camera is aimed appears on the preview interface, no photographing operation needs to be performed and no photo needs to be generated, so no storage space is occupied; only the preview image is displayed on the preview interface.
It is to be understood that, when a photographing operation is performed on the preview image displayed on the preview interface, the generated image may be the first image 4011. After the first image 4011 shot by the camera is displayed on the first screen, retake prompt information can be displayed on the shooting button 4041 of the shooting control area 404, and the picture can be re-shot through a touch input on the shooting button 4041 to generate a new shot image.
Preferably, before step 101, the method further includes:
Step 104: receiving a second input of the user on a memo information editing interface displayed on the second screen.
In this embodiment, the second input includes any one of the following inputs: a click input, a long press input, and a slide input. It is to be understood that the second input may also be referred to as a second operation, and the second operation may include any one of the following operations: a click operation, a long press operation, and a slide operation. The second input may also implement functions such as text copying and pasting. In this embodiment, the memo information editing interface may be a text memo information editing interface, a graffiti memo information editing interface, a mixed memo information editing interface, or the like.
Referring again to fig. 4, the second screen 402 of the terminal device 400 displays a text memo information editing interface, which can receive text information.
Referring to fig. 5, the terminal device 500 includes a first screen 501 and a second screen 502, and the second screen 502 displays a text memo information editing interface in an editable state. Specifically, a second input is received on the text memo information editing interface of the second screen 502, and the text memo information input by the second input is "Blooming apart from the hundred-flower throng, alone by the sparse hedge its charm never wanes". It is to be understood that the second screen 502 may also display other forms of memo information editing interfaces, and the memo information input by the second input may also be graffiti information, voice information, video information, or the like.
Step 105: in response to the second input, displaying the first memo information input by the second input on the memo information editing interface.
In this embodiment, the first memo information includes at least one of the following: text information, graffiti information, voice information, and video information.
For example, referring again to fig. 5, the text memo information "Blooming apart from the hundred-flower throng, alone by the sparse hedge its charm never wanes" input by the second input is displayed on the text memo information editing interface. It should be noted that the displayed first memo information may also be graffiti information, voice information, video information, or the like.
Step 106: obtaining a first object feature of a target object displayed on the first screen.
For example, referring again to fig. 5, if the target object displayed on the first screen 501 is a flower image, and the first object feature of the flower image is a flower contour feature or a flower color feature, the flower contour feature and/or the flower color feature of the flower image displayed on the first screen 501 is obtained.
Step 107: storing the first memo information and the first object feature in an associated manner.
For example, in fig. 5, the text memo information "Blooming apart from the hundred-flower throng, alone by the sparse hedge its charm never wanes" is stored in association with the flower contour feature and/or the flower color feature.
Therefore, the target object is displayed on the first screen while the first memo information input by the user is received on the second screen, so the user can input memo information aimed at the target object, which improves the accuracy of the memo information. In addition, the first memo information is associated with the first object feature of the target object displayed on the first screen, so the associated first memo information can subsequently be displayed through the first object feature of the target object, simplifying the steps of displaying the first memo information.
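To make the association concrete, the following minimal Python sketch shows one way steps 104 through 107 could be realized in software. The class name `MemoStore`, the method name `associate`, and the feature values are illustrative assumptions, not part of the claimed method.

```python
from dataclasses import dataclass, field

@dataclass
class MemoStore:
    """Hypothetical associative store linking object features to memo information."""
    entries: list = field(default_factory=list)  # list of (feature_vector, memo) pairs

    def associate(self, feature_vector, memo):
        # Step 107: store the first memo information and the first object
        # feature in an associated manner.
        self.entries.append((list(feature_vector), memo))


store = MemoStore()
flower_feature = [0.82, 0.11, 0.93]  # illustrative contour/color descriptor values
store.associate(flower_feature,
                "Blooming apart from the hundred-flower throng, "
                "alone by the sparse hedge its charm never wanes")
```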
Preferably, before step 104, the method further includes:
Step 108: receiving a third input of the user.
In this embodiment, the third input includes any one of the following inputs: a click input, a long press input, and a slide input. It is to be understood that the third input may also be referred to as a third operation, and the third operation may include any one of the following operations: a click operation, a long press operation, and a slide operation. The third input may be a click input, a long press input, or a slide input received on the first screen.
Referring to fig. 2, fig. 2 is a first schematic view of a display interface of a terminal device according to an embodiment of the present invention. As shown in fig. 2, the terminal device 200 includes a first screen 201, a second screen 202, a camera 203, and a shooting control area 204. A shooting preview interface is displayed on the first screen 201, and a text memo information editing interface is displayed on the second screen 202. The shooting control area 204 displays a plurality of shooting modes and a shooting button 2041, and the plurality of shooting modes may include a panorama mode, a beauty mode, a photographing mode, a video recording mode, and the like. In a case where the third input is received on the shooting button 2041, an image may be shot by the camera 203, and the text memo information editing interface may be displayed on the second screen 202.
Referring to fig. 3, fig. 3 is a second schematic view of a display interface of a terminal device according to an embodiment of the present invention. As shown in fig. 3, the terminal device 300 includes a first screen 301, a second screen 302, a camera 303, and a shooting control area 304. A shooting preview interface is displayed on the first screen 301, and a graffiti memo information editing interface is displayed on the second screen 302. The shooting control area 304 displays a plurality of shooting modes and a shooting button 3041, and the plurality of shooting modes may include a panorama mode, a beauty mode, a photographing mode, a video recording mode, and the like. In a case where the third input is received on the shooting button 3041, an image may be shot by the camera 303, and the graffiti memo information editing interface may be displayed on the second screen 302.
It is to be understood that other forms of memo information editing interfaces may be displayed on the second screen 202 or the second screen 302, and the memo information editing interface may receive at least one of the following information: text information, graffiti information, voice information, and video information.
Step 109: in response to the third input, displaying the first image shot by the camera on the first screen.
For example, referring to fig. 6, the terminal device 600 includes a first screen 601, a second screen 602, a camera 603, and a shooting control area 604. After the camera 603 is aimed at an object, in a case where the shooting button 6041 receives the third input of the user, the first image 6011 is shot by the camera 603 and displayed on the first screen 601. The picture can be re-shot through a touch input on the shooting button 6041. The second screen 602 may display a graffiti memo information editing interface.
Step 1010: determining a target object in the first image.
In step 1010, the target object may be determined from the first image through an image recognition technique, or may be determined based on a user input.
Therefore, after the first image is shot by the camera, the first image is displayed on the first screen and the target object in the first image is determined; when the first memo information is received on the second screen, the first memo information and the first object feature of the target object are stored in an associated manner, which simplifies the steps of associating an image with memo information and saves operation time.
Preferably, after step 108, the method further includes:
Step 1011: displaying a memo information editing interface on the second screen in a case where a fourth input of the user is received;
or, displaying the memo information editing interface on the second screen in a case where shooting of the first image is completed.
In an embodiment of the present invention, the fourth input includes any one of the following inputs: a click input, a long press input, and a slide input. It is to be understood that the fourth input may also be referred to as a fourth operation, and the fourth operation may include any one of the following operations: a click operation, a long press operation, and a slide operation. The fourth input may be a user input received on the second screen.
For example, referring again to fig. 2, if the fourth input is received on the second screen 202, the text memo information editing interface is displayed on the second screen 202 and may receive text memo information. Alternatively, if shooting of the first image by the camera 203 is completed, the text memo information editing interface is displayed on the second screen 202. It is to be understood that the second screen 202 may also display other forms of memo information editing interfaces, on which text information, graffiti information, voice information, and video information may be received as memo information.
For another example, referring again to fig. 3, if the fourth input is received on the second screen 302, the graffiti memo information editing interface displayed on the second screen 302 may receive graffiti memo information. Alternatively, if shooting of the first image by the camera 303 is completed, the graffiti memo information editing interface is displayed on the second screen 302. It is to be understood that other forms of memo information editing interfaces may be displayed on the second screen 302, on which text information, graffiti information, voice information, and video information may be received as memo information.
Therefore, the memo information editing interface is displayed on the second screen when the fourth input of the user is received or when shooting of the first image is completed. In this way, the moment at which the user needs to edit memo information can be accurately judged, the memo information editing interface is displayed at a proper time, and power consumption and resource loss can be reduced. A sketch of this trigger logic is given below.
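As a hedged illustration only, the two trigger conditions above (a fourth input on the second screen, or completion of shooting) can be dispatched through a single check; the event names below are assumptions introduced for the sketch.

```python
def should_show_memo_editor(event: str) -> bool:
    """Return True when the memo information editing interface should be
    displayed on the second screen (step 1011). Event names are illustrative."""
    return event in ("fourth_input_on_second_screen", "first_image_shooting_finished")


for event in ("fourth_input_on_second_screen", "first_image_shooting_finished", "idle"):
    print(event, "->", "show editor" if should_show_memo_editor(event) else "do nothing")
```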
Preferably, step 1010 specifically includes:
Step 10101: determining at least one object in the first image as the target object;
or, in a case where a fifth input of the user selecting a local feature region is received, determining an image of the local feature region selected by the fifth input as the target object.
In an embodiment of the present invention, the fifth input includes any one of the following inputs: a click input, a long press input, and a slide input. It is to be understood that the fifth input may also be referred to as a fifth operation, and the fifth operation may include any one of the following operations: a click operation, a long press operation, and a slide operation. The fifth input may be a user input received on the first image.
For example, referring to fig. 4, the first image may contain a plurality of objects, such as a flower image, a leaf image, and a trunk image, and the flower image may be determined as the target object.
For another example, in fig. 4, the first image includes a plurality of local feature regions, such as a flower feature region 4012 and a trunk feature region 4013. In a case where a fifth input of the user selecting the flower feature region 4012 is received, the flower image of the flower feature region 4012 is determined as the target object.
Therefore, determining at least one object in the first image as the target object allows the target object to be determined from the first image quickly and without user operation, which improves the efficiency of determining the target object. In addition, in a case where a fifth input of the user selecting a local feature region is received, the image of the local feature region selected by the fifth input is determined as the target object, so the target object can be determined according to the user's input, which improves the flexibility of selecting the target object.
Step 102: acquiring a first object feature of a target object in the first image.
In this embodiment, the target object includes at least one object in the first image, or an image of a local feature region of one object in the first image. The first object feature may be a contour feature, a color feature, or the like.
For example, referring to fig. 4, the first image in fig. 4 includes three objects: a flower object, a leaf object, and a trunk object. The target object may be determined as the flower image, and a flower object feature of the flower object, such as the number of petals, the color of the petals, or the shape of the petals, may be obtained. The target object may also be determined as the leaf image, and a leaf feature of the leaf image, such as the color or the shape of the leaves, may be obtained. The target object may also be determined as the trunk image, and a trunk feature of the trunk image, such as a trunk texture feature, may be obtained. The target object may also be a part of the flower image in the flower object, a part of the leaf image in the leaf object, or the like.
Different object features may be associated with different memo information. For example, a flower object feature may be associated with text memo information describing the flower, a leaf object feature may be associated with video memo information related to leaves, and a trunk object feature may be associated with graffiti memo information of a trunk shape. When the user clicks or selects the flower object in the first image, the text memo information describing the flower that is associated with the flower object feature is displayed on the second screen; when the user clicks or selects the leaf object, the video memo information related to leaves that is associated with the leaf object feature is displayed on the second screen; when the user clicks or selects the trunk object, the trunk-shaped graffiti memo information associated with the trunk object feature is displayed on the second screen. In this way, the corresponding memo information can be displayed on the second screen quickly, as pictured in the sketch below.
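The behavior described in the preceding paragraph amounts to a lookup from a recognized object to its associated memo information. The following sketch assumes hypothetical labels and memo payloads; it is an illustration, not the claimed implementation.

```python
# Illustrative mapping from a recognized object label to its associated memo.
memo_by_object = {
    "flower": ("text", "A note describing the flower"),
    "leaf": ("video", "leaf_memo.mp4"),
    "trunk": ("graffiti", "trunk_sketch.png"),
}

def on_object_selected(label: str) -> None:
    # When the user clicks or selects an object in the first image,
    # look up its associated memo and show it on the second screen.
    kind, payload = memo_by_object.get(label, ("none", None))
    print(f"second screen shows {kind} memo: {payload}")

on_object_selected("flower")  # -> second screen shows text memo: ...
```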
Preferably, step 102 specifically includes:
Step 1021: performing object recognition on the first image to recognize the target object.
In this embodiment, the target object may be a subject image of a real object; for example, the target object may be a flower image. For example, when the first image is the first image 4011, object recognition is performed on the first image 4011, and the recognized target object is a flower object.
Step 1022: performing feature recognition on the target object to obtain the first object feature.
In this embodiment, the first object feature may be a contour feature and/or a color feature. For example, if the target object is the flower object recognized in the first image 4011, the first object feature may be a flower contour feature and/or a flower color feature.
Therefore, object recognition and feature recognition are performed on the first image automatically through an image recognition technique, so the first object feature of the first image can be obtained without manual operation by the user, which improves the degree of intelligence in the process of obtaining the first object feature.
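One plausible realization of steps 1021 and 1022, assuming OpenCV and NumPy are available, is to segment the most prominent object by Otsu thresholding, then derive a contour feature (Hu moments) and a color feature (mean color inside the object mask). This is only a sketch of the kind of pipeline the steps describe, not the patented implementation.

```python
import cv2
import numpy as np

def extract_first_object_feature(image_bgr: np.ndarray):
    """Step 1021: recognize the most prominent object; step 1022: compute
    contour and color features for it. A sketch assuming OpenCV."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    target = max(contours, key=cv2.contourArea)  # largest region stands in for the target object
    contour_feature = cv2.HuMoments(cv2.moments(target)).flatten()  # shape descriptor
    object_mask = np.zeros(gray.shape, np.uint8)
    cv2.drawContours(object_mask, [target], -1, 255, cv2.FILLED)
    color_feature = cv2.mean(image_bgr, mask=object_mask)[:3]  # mean B, G, R inside the object
    return np.concatenate([contour_feature, color_feature])
```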
Preferably, step 102 specifically includes:
Step 1023: receiving a first input of the user in a target region of the first image.
In this embodiment, the first input may include any one of the following inputs: a click input, a long press input, and a slide input. It is to be understood that the first input may also be referred to as a first operation, and the first operation may include any one of the following operations: a click operation, a long press operation, and a slide operation. The first image may be divided into a plurality of regions in advance for the user to select one target region.
For example, referring again to fig. 4, the plurality of regions into which the first image 4011 is divided include a flower feature region 4012 and a trunk feature region 4013, and the user may perform a touch operation on the flower feature region 4012, so that the terminal device receives the first input of the user from the flower feature region 4012.
Step 1024: determining an image of an input region of the first input as the target object.
For example, in fig. 4, if the input region of the first input is the flower feature region 4012, the flower image is determined as the target object.
Step 1025: performing feature recognition on the target object to obtain the first object feature.
For example, in fig. 4, in a case where the target object is the flower image in the flower feature region 4012, feature recognition is performed on the flower image to obtain the first object feature of the flower image, which may include a flower contour feature, a flower color feature, and the like.
In this way, the user can determine the target object from the first image through the first input, so that the first object feature of the target object can be obtained. The region hit test this implies is sketched below.
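Steps 1023 through 1025 can be sketched as a hit test of the first input's coordinates against the pre-divided regions of the first image; the region table and coordinates below are illustrative assumptions.

```python
# Illustrative pre-divided regions of the first image: name -> (x, y, w, h).
regions = {
    "flower_feature_region": (40, 30, 120, 140),
    "trunk_feature_region": (70, 180, 60, 200),
}

def hit_test(x: int, y: int):
    """Step 1024: map the first input's touch point to the region it falls in,
    whose image then becomes the target object."""
    for name, (rx, ry, rw, rh) in regions.items():
        if rx <= x < rx + rw and ry <= y < ry + rh:
            return name
    return None

print(hit_test(90, 80))  # -> 'flower_feature_region'
```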
Step 103: displaying first memo information associated with the first object feature on the second screen.
In this embodiment, the first memo information includes at least one of the following: text information, graffiti information, voice information, and video information.
For example, referring to fig. 8, the terminal device 800 includes a first screen 801 and a second screen 802. When the first memo information associated with the first object feature of the first image 8011 is the text memo information "Blooming apart from the hundred-flower throng, alone by the sparse hedge its charm never wanes", that text memo information is displayed on the memo information display interface of the second screen 802, and the first image 8011 is displayed on the first screen 801.
Referring to fig. 9, the terminal device 900 includes a first screen 901 and a second screen 902. In a case where the first memo information associated with the first object feature of the first image 9011 is graffiti memo information, the first image 9011 is displayed on the first screen 901, and the associated first graffiti image 9021 is displayed on the graffiti memo information display interface of the second screen 902.
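Step 103 implies a nearest-neighbor lookup: the feature obtained in step 102 is compared against the stored features, and the memo information of the closest match within a tolerance is displayed. The distance metric and threshold below are assumptions made for the sketch.

```python
import numpy as np

def find_memo(query_feature, entries, max_distance=0.5):
    """Return the memo whose stored feature is closest to the query feature,
    or None if nothing is similar enough. Entries are (feature, memo) pairs."""
    best_memo, best_dist = None, max_distance
    for feature, memo in entries:
        dist = float(np.linalg.norm(np.asarray(feature) - np.asarray(query_feature)))
        if dist < best_dist:
            best_memo, best_dist = memo, dist
    return best_memo

entries = [([0.82, 0.11, 0.93], "flower memo"), ([0.10, 0.75, 0.20], "leaf memo")]
print(find_memo([0.80, 0.12, 0.90], entries))  # -> 'flower memo'
```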
Preferably, after step 103, the method may further include the following steps:
Step 104: in a case where the second screen is in a memo information editing state, receiving editing information for the first memo information on the second screen to obtain edited memo information;
and storing the first object feature and the edited memo information in an associated manner.
In this embodiment, a trigger condition for the second screen to enter the memo information editing state may be preset, where the trigger condition may be that a touch input is received on the second screen, or that a duration for which the memo information has been displayed on the second screen exceeds a preset duration. The touch input may be, for example, a slide input, a long press input, a drag input, or a combination of several of these.
Referring to fig. 10, the terminal device 1000 includes a first screen 1001 and a second screen 1002. The first image 10011 is displayed on the first screen 1001, and the text memo information corresponding to the first object feature of the first image 10011, specifically "Blooming apart from the hundred-flower throng, alone by the sparse hedge its charm never wanes", is displayed on the memo information display interface of the second screen 1002. In a case where the second screen 1002 receives a long press input 10021 and a drag input 10022 toward the first screen 1001, the second screen 1002 enters the memo information editing state. After the memo information editing state is entered, the displayed first memo information can be edited on the second screen 1002 to obtain the edited first memo information. A first determination button may further be displayed on the second screen 1002; in a case where a touch input is received on the first determination button, the first object feature of the first image 10011 is stored in association with the edited first memo information, and the first image 10011 is saved.
Referring to fig. 11, the terminal device 1100 includes a first screen 1101 and a second screen 1102. The first image 11011 is displayed on the first screen 1101, and the graffiti memo information corresponding to the first object feature of the first image 11011, specifically a second graffiti image 11023, is displayed on the memo information display interface of the second screen 1102. In a case where the second screen 1102 receives a long press input 11021 and a drag input 11022 toward the first screen 1101, the second screen 1102 enters the memo information editing state. The displayed first memo information may be edited on the second screen 1102 to obtain the edited first memo information. A second determination button may further be displayed on the second screen 1102; in a case where a touch input is received on the second determination button, the first object feature of the first image 11011 is stored in association with the edited first memo information, and the first image 11011 is saved.
Therefore, the first memo information can be quickly edited on the second screen, which improves the efficiency of editing the first memo information. The trigger conditions described above are sketched below.
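Under the stated assumptions (a touch input on the second screen, or a display duration exceeding a preset duration), the trigger check could look like the following; the threshold value is illustrative.

```python
import time

PRESET_DURATION_S = 5.0  # illustrative preset duration

def should_enter_edit_state(touch_received: bool, displayed_since: float) -> bool:
    """Enter the memo information editing state when a touch input is received
    on the second screen, or when the memo has been displayed longer than the
    preset duration."""
    return touch_received or (time.monotonic() - displayed_since) > PRESET_DURATION_S

shown_at = time.monotonic()
print(should_enter_edit_state(touch_received=True, displayed_since=shown_at))  # True
```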
In this embodiment of the present invention, the terminal device may be any terminal device including two screens, for example: a mobile phone, a tablet personal computer, a laptop computer, a personal digital assistant (PDA), a mobile Internet device (MID), a wearable device, or the like.
According to the display control method, the first image displayed on the first screen is acquired; a first object feature of a target object in the first image is acquired; and first memo information associated with the first object feature is displayed on the second screen, where the target object includes at least one object in the first image or an image of a local feature region of an object in the first image. In this way, when the terminal device displays the first image on the first screen, the first memo information associated with the first object feature of the first image can be displayed on the second screen, which simplifies the operation steps of displaying memo information and improves the efficiency of displaying memo information.
Referring to fig. 12, fig. 12 is a structural diagram of a terminal device according to an embodiment of the present invention, where the terminal device includes a first screen and a second screen. As shown in fig. 12, the terminal device 1200 further includes a first obtaining module 1201, a second obtaining module 1202, and a first display module 1203, where the second obtaining module 1202 is connected to the first display module 1203, and where:
a first obtaining module 1201, configured to obtain a first image displayed on the first screen;
a second obtaining module 1202, configured to obtain a first object feature of a target object in the first image;
a first display module 1203, configured to display first memo information associated with the first object feature on the second screen;
where the target object includes at least one object in the first image or an image of a local feature region of an object in the first image.
Optionally, the second obtaining module 1202 includes:
a first identification submodule, configured to perform object recognition on the first image to recognize the target object;
and a second identification submodule, configured to perform feature recognition on the target object to obtain the first object feature.
Optionally, the second obtaining module 1202 includes:
a receiving submodule, configured to receive a first input of the user in a target region of the first image;
a determination submodule, configured to determine an image of an input region of the first input as the target object;
and a third identification submodule, configured to perform feature recognition on the target object to obtain the first object feature.
Optionally, the terminal device 1200 further includes:
a first receiving module, configured to receive a second input of the user on a memo information editing interface displayed on the second screen;
a second display module, configured to display, in response to the second input, the first memo information input by the second input on the memo information editing interface;
a third obtaining module, configured to obtain a first object feature of the target object displayed on the first screen;
and an association module, configured to store the first memo information in association with the first object feature.
Optionally, the terminal device 1200 further includes:
a second receiving module, configured to receive a third input of the user;
a third display module, configured to display, in response to the third input, a first image shot by the camera on the first screen;
and a determination module, configured to determine a target object in the first image.
Optionally, the terminal device 1200 further includes:
a fourth display module, configured to display a memo information editing interface on the second screen in a case where a fourth input of the user is received, or display the memo information editing interface on the second screen in a case where shooting of the first image is completed.
Optionally, the determination module is configured to determine at least one object in the first image as the target object; or, in a case where a fifth input of the user selecting a local feature region is received, determine an image of the local feature region selected by the fifth input as the target object.
Optionally, the first memo information includes at least one of the following: text information, graffiti information, voice information, and video information;
the first image comprises a preview image collected by a camera of the first screen or a picture taken by the camera of the first screen.
The terminal device 1200 can implement each process implemented by the terminal device in the method embodiment of fig. 1; to avoid repetition, details are not described here again.
The terminal device 1200 provided by this embodiment of the present invention can display the first memo information associated with the first object feature of the first image on the second screen when the first image is displayed on the first screen, which simplifies the operation steps of displaying memo information and thereby improves the efficiency of displaying memo information.
Fig. 13 is a schematic diagram of a hardware structure of a terminal device for implementing various embodiments of the present invention. The terminal device 1300 includes, but is not limited to: a radio frequency unit 1301, a network module 1302, an audio output unit 1303, an input unit 1304, a sensor 1305, a display unit 1306, a user input unit 1307, an interface unit 1308, a memory 1309, a processor 1310, a power supply 1311, and the like. The display unit 1306 includes at least a first screen and a second screen. Those skilled in the art will appreciate that the terminal device structure shown in fig. 13 does not constitute a limitation of the terminal device, and the terminal device may include more or fewer components than shown, combine certain components, or arrange the components differently. In this embodiment of the present invention, the terminal device includes, but is not limited to, a mobile phone, a tablet computer, a notebook computer, a palm computer, a vehicle-mounted terminal, a wearable device, a pedometer, and the like.
The processor 1310 is configured to: obtain a first image displayed on the first screen; obtain a first object feature of a target object in the first image; and display first memo information associated with the first object feature on the second screen, where the target object includes at least one object in the first image or an image of a local feature region of an object in the first image.
Optionally, when the processor 1310 obtains the first object feature of the target object in the first image, the processor 1310 is configured to: perform object recognition on the first image to recognize the target object; and perform feature recognition on the target object to obtain the first object feature.
Optionally, when the processor 1310 obtains the first object feature of the target object in the first image, the processor 1310 is configured to: control the user input unit 1307 to receive a first input of the user in a target region of the first image; determine an image of an input region of the first input as the target object; and perform feature recognition on the target object to obtain the first object feature.
Optionally, the processor 1310 is further configured to: control the user input unit 1307 to receive a second input of the user on a memo information editing interface displayed on the second screen; display, in response to the second input, the first memo information input by the second input on the memo information editing interface; obtain a first object feature of the target object displayed on the first screen; and store the first memo information in association with the first object feature.
Optionally, the processor 1310 is further configured to: control the user input unit 1307 to receive a third input of the user; display, in response to the third input, a first image shot by the camera on the first screen; and determine a target object in the first image.
Optionally, the processor 1310 is further configured to: display a memo information editing interface on the second screen in a case where a fourth input of the user is received; or display the memo information editing interface on the second screen in a case where shooting of the first image is completed.
Optionally, when the processor 1310 determines the target object in the first image, the processor 1310 is configured to: determine at least one object in the first image as the target object; or, in a case where a fifth input of the user selecting a local feature region is received, determine an image of the local feature region selected by the fifth input as the target object.
Optionally, the first memo information includes at least one of the following: text information, graffiti information, voice information, and video information;
the first image comprises a preview image collected by a camera of the first screen or a picture taken by the camera of the first screen.
The terminal device 1300 can implement each process implemented by the terminal device in the foregoing embodiments; to avoid repetition, details are not described here again.
The terminal device 1300 of this embodiment of the present invention can display the first memo information associated with the first object feature of the first image on the second screen when the first image is displayed on the first screen, which simplifies the operation steps of displaying memo information and thereby improves the efficiency of displaying memo information.
It should be understood that, in this embodiment of the present invention, the radio frequency unit 1301 may be configured to receive and send signals during a message transmission or call process. Specifically, the radio frequency unit 1301 receives downlink data from a base station and sends the downlink data to the processor 1310 for processing, and sends uplink data to the base station. In general, the radio frequency unit 1301 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 1301 may also communicate with a network and other devices through a wireless communication system.
The terminal device provides wireless broadband internet access to the user through the network module 1302, such as helping the user send and receive e-mails, browse web pages, and access streaming media.
The audio output unit 1303 may convert audio data received by the radio frequency unit 1301 or the network module 1302, or stored in the memory 1309, into an audio signal and output it as sound. Moreover, the audio output unit 1303 may also provide audio output related to a specific function performed by the terminal device 1300 (for example, a call signal reception sound or a message reception sound). The audio output unit 1303 includes a speaker, a buzzer, a receiver, and the like.
The input unit 1304 is configured to receive an audio or video signal. The input unit 1304 may include a graphics processing unit (GPU) 13041 and a microphone 13042. The graphics processor 13041 processes image data of still pictures or video obtained by an image capturing apparatus (such as a camera) in a video capture mode or an image capture mode. The processed image frames may be displayed on the display unit 1306. The image frames processed by the graphics processor 13041 may be stored in the memory 1309 (or another storage medium) or sent via the radio frequency unit 1301 or the network module 1302. The microphone 13042 can receive sound and process the sound into audio data. In a phone call mode, the processed audio data may be converted into a format transmittable to a mobile communication base station via the radio frequency unit 1301 and output.
The terminal device 1300 further includes at least one sensor 1305, such as a light sensor, a motion sensor, and other sensors. Specifically, the light sensor includes an ambient light sensor and a proximity sensor, where the ambient light sensor adjusts the brightness of the display panel 13061 according to the brightness of ambient light, and the proximity sensor turns off the display panel 13061 and/or the backlight when the terminal device 1300 is moved to the ear. As one type of motion sensor, an accelerometer sensor can detect the magnitude of acceleration in each direction (generally three axes), can detect the magnitude and direction of gravity when stationary, and can be used to recognize the posture of the terminal device (such as landscape/portrait switching, related games, and magnetometer posture calibration) and for vibration-recognition-related functions (such as a pedometer and tapping). The sensor 1305 may further include a fingerprint sensor, a pressure sensor, an iris sensor, a molecular sensor, a gyroscope, a barometer, a hygrometer, a thermometer, an infrared sensor, and the like, which are not described in detail here.
The display unit 1306 is configured to display information input by the user or information provided to the user. The display unit 1306 may include a display panel 13061, and the display panel 13061 may be configured in the form of a liquid crystal display (LCD), an organic light-emitting diode (OLED), or the like.
The user input unit 1307 may be configured to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the terminal device. Specifically, the user input unit 1307 includes a touch panel 13071 and other input devices 13072. The touch panel 13071, also referred to as a touch screen, may collect touch operations of the user on or near it (for example, operations of the user on or near the touch panel 13071 using a finger, a stylus, or any other suitable object or attachment). The touch panel 13071 may include two parts: a touch detection device and a touch controller. The touch detection device detects the touch position of the user, detects a signal generated by the touch operation, and transmits the signal to the touch controller; the touch controller receives touch information from the touch detection device, converts it into touch point coordinates, sends the coordinates to the processor 1310, and receives and executes commands sent by the processor 1310. In addition, the touch panel 13071 may be implemented in various types, such as resistive, capacitive, infrared, and surface acoustic wave types. In addition to the touch panel 13071, the user input unit 1307 may include other input devices 13072. Specifically, the other input devices 13072 may include, but are not limited to, a physical keyboard, function keys (such as volume control keys and a switch key), a trackball, a mouse, and a joystick, which are not described here again.
Further, the touch panel 13071 may be overlaid on the display panel 13061. When the touch panel 13071 detects a touch operation on or near it, the touch operation is transmitted to the processor 1310 to determine the type of the touch event, and then the processor 1310 provides a corresponding visual output on the display panel 13061 according to the type of the touch event. Although in fig. 13 the touch panel 13071 and the display panel 13061 are two independent components implementing the input and output functions of the terminal device, in some embodiments the touch panel 13071 and the display panel 13061 may be integrated to implement the input and output functions of the terminal device, which is not limited here.
The interface unit 1308 is an interface for connecting an external device to the terminal device 1300. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 1308 may be configured to receive input (for example, data information or power) from an external device and transmit the received input to one or more elements within the terminal device 1300, or may be configured to transmit data between the terminal device 1300 and an external device.
The memory 1309 may be configured to store software programs and various data. The memory 1309 may mainly include a program storage area and a data storage area, where the program storage area may store an operating system, an application program required by at least one function (such as a sound playing function or an image playing function), and the like, and the data storage area may store data created according to the use of the mobile phone (such as audio data and a phonebook), and the like. Further, the memory 1309 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.
The processor 1310 is the control center of the terminal device. It connects the various parts of the entire terminal device by using various interfaces and lines, and performs the various functions of the terminal device and processes data by running or executing software programs and/or modules stored in the memory 1309 and calling data stored in the memory 1309, thereby monitoring the terminal device as a whole. The processor 1310 may include one or more processing units; preferably, the processor 1310 may integrate an application processor and a modem processor, where the application processor mainly handles the operating system, user interfaces, application programs, and the like, and the modem processor mainly handles wireless communication. It can be understood that the modem processor may alternatively not be integrated into the processor 1310.
The terminal device 1300 may further include a power supply 1311 (such as a battery) for supplying power to the various components. Preferably, the power supply 1311 may be logically connected to the processor 1310 via a power management system, so that functions such as managing charging, discharging, and power consumption are implemented via the power management system.
In addition, the terminal device 1300 includes some functional modules that are not shown, which are not described here again.
Preferably, an embodiment of the present invention further provides a terminal device, including a processor 1310, a memory 1309, and a computer program stored in the memory 1309 and capable of running on the processor 1310. The computer program, when executed by the processor 1310, implements each process of the foregoing display control method embodiment and can achieve the same technical effect; to avoid repetition, details are not described here again.
An embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the computer program implements each process of the display control method embodiment, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here. The computer-readable storage medium may be a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk.
It should be noted that, in this document, the terms "comprises", "comprising", or any other variation thereof are intended to cover a non-exclusive inclusion, so that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such a process, method, article, or apparatus. Without further limitation, an element preceded by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises that element.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.
While the present invention has been described with reference to the embodiments shown in the drawings, the present invention is not limited to the embodiments, which are illustrative and not restrictive, and it will be apparent to those skilled in the art that various changes and modifications can be made therein without departing from the spirit and scope of the invention as defined in the appended claims.