Detailed Description
In order that the above objects, features and advantages of the present application can be more clearly understood, the present application will be described in further detail with reference to the accompanying drawings and detailed description. It should be noted that the embodiments and features of the embodiments of the present application may be combined with each other without conflict.
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present application; however, the present application may be practiced in ways other than those described herein, and therefore the scope of the present application is not limited to the specific embodiments disclosed below.
An image processing method, an image processing apparatus, an electronic device, and a readable storage medium according to some embodiments of the present application are described below with reference to fig. 1 to 14.
In an embodiment of the present application, fig. 1 shows a first flowchart of an image processing method according to an embodiment of the present application, the method including:
Step 102, acquiring first depth-of-field information of a target image;
For example, the mobile phone enters an album editing interface, the user clicks an image in the album to select it as the target image, and the first depth-of-field information recorded when the image was shot is retrieved.
Step 104, projecting a three-dimensional model of the target image in a space where the electronic device is located according to the first depth-of-field information;
In this embodiment, the first depth-of-field information of each pixel point of the target image is read, and the three-dimensional size of an object or a person in the target image, that is, the coordinates of the pixel points (X, Y, and Z axis coordinates), is obtained according to the first depth-of-field information. A three-dimensional model corresponding to the target image is then projected in the space where the electronic device is located by a plurality of projection devices in different directions according to the coordinates of the pixel points. In this way, a user can view the three-dimensional form of the object or person in the target image through the three-dimensional model, which makes it convenient for the user to select a target position to be edited and modified.
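As an illustrative sketch only (the embodiment does not prescribe a concrete formula), the step of obtaining per-pixel (X, Y, Z) coordinates from depth information can be modeled as a pinhole back-projection; the focal lengths `fx`, `fy` and principal point `cx`, `cy` are assumed camera parameters, not values taken from the embodiment:

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a per-pixel depth map into (X, Y, Z) coordinates
    using a pinhole camera model -- one plausible realization of
    deriving pixel-point coordinates from depth-of-field information."""
    h, w = depth.shape
    # u runs along columns, v along rows (pixel grid of the image)
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth.astype(np.float64)
    x = (u - cx) * z / fx   # X axis coordinate of each pixel point
    y = (v - cy) * z / fy   # Y axis coordinate of each pixel point
    return np.stack([x, y, z], axis=-1)  # shape (h, w, 3)
```

The resulting point array could then be handed to the projection devices to render the model in space.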
It is understood that after the three-dimensional model is projected in the space, the three-dimensional model can be synchronously displayed on the screen of the electronic device, as shown in fig. 9, where the depth information of the target image is reflected by a Gaussian curve.
Step 106, receiving a first input of a target position of the three-dimensional model;
the image processing method is suitable for electronic equipment, and the electronic equipment comprises but is not limited to a mobile terminal, a tablet computer, a notebook computer, a wearable device, a vehicle-mounted terminal and the like. The first input may be an operation of the electronic device by the user, or an operation of the three-dimensional model recognized by the electronic device by the user. Wherein the first input includes, but is not limited to, a click input, a key input, a fingerprint input, a swipe input, and a press input. The key input includes, but is not limited to, a power key, a volume key, a single-click input of a main menu key, a double-click input, a long-press input, a combination key input, etc. to the electronic device. The operation mode in the embodiments of the present application is not particularly limited, and may be any realizable mode.
It should be noted that a photosensitive element array composed of a plurality of photosensitive elements is arranged in the space where the electronic device is located. Luminance data at different positions of the three-dimensional model can be collected through the photosensitive element array; when the user's operation blocks the projection beams of the three-dimensional model, the luminance data collected by the plurality of photosensitive elements can be used to determine the position of the user operation, and thus to identify the target position on the three-dimensional model.
For example, a finger of the user is placed at the position of the three-dimensional model to be edited, and the projection position of the finger, namely a subset of pixel points in the three-dimensional model, is sensed by the photosensitive elements. A certain region of the image can thus be locally modified, realizing three-dimensional stereo editing. The user can edit any position in the image, which satisfies the user's local retouching requirements and greatly improves retouching accuracy.
Step 108, in response to the first input, adjusting the three-dimensional size of the target position of the three-dimensional model according to the input parameters of the first input.
In this embodiment, the three-dimensional size of the target position of the three-dimensional model is replaced or modified according to the input parameters of the first input on the target position, namely the user's correction value for the image, and the modified image is stored. Two-dimensional editing of the image is thereby changed into three-dimensional editing, so that the electronic device can carry out three-dimensional editing operations; people or objects in the image become more vivid and stereoscopic, a more exquisite image is obtained, and the user's satisfaction with the processed image is effectively improved.
It is worth mentioning that after the three-dimensional size of the target position of the three-dimensional model is adjusted according to the input parameters, the projected three-dimensional model is changed accordingly to obtain the modified three-dimensional model, so that the user can check the modification effect of the target image in time.
In an embodiment of the present application, fig. 2 shows a second flowchart of an image processing method according to an embodiment of the present application, and step 108, adjusting the three-dimensional size of the target position of the three-dimensional model according to the input parameters of the first input, includes:
step 202, identifying a motion starting point and a motion ending point of a first input;
step 204, determining the displacement between the motion starting point and the motion end point;
in this embodiment, the first input may be a slide input to the three-dimensional model, a motion start point and a motion end point of the slide input are identified, and a displacement between the motion start point and the motion end point is calculated. Wherein the displacement comprises a direction and a distance.
Step 206, determining the size variation corresponding to the displacement according to the corresponding relation between the preset displacement interval and the size variation under the condition that the displacement belongs to the preset displacement interval;
and step 208, adjusting the three-dimensional size according to the size change amount.
In this embodiment, the corresponding relationship between preset displacement intervals and size variations is set in advance; that is, different displacement intervals correspond to different size variations. The displacement between the motion start point and the motion end point is compared with the preset displacement intervals, and when the displacement falls within a preset displacement interval, the size variation corresponding to that interval is taken as the target image correction value specified by the first input. The three-dimensional size can thus be modified in real time according to the size variation, and the user can dynamically adjust the three-dimensional size through a sliding operation on the three-dimensional model, zooming the image in three dimensions and perceiving the change of the three-dimensional model while editing. This avoids impairing the appearance of the image through excessive modification, reduces the difficulty of retouching, and effectively improves the overall or local stereoscopic and exquisite feeling of the image.
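A minimal sketch of the interval lookup described above; the interval boundaries and size variations in `SIZE_TABLE` are hypothetical illustration values, not values prescribed by the embodiment:

```python
# Hypothetical preset correspondence between displacement intervals
# (e.g. centimeters of hand movement through the projection) and
# size variation factors applied to the target position.
SIZE_TABLE = [
    ((0.0, 1.0), 0.0),    # negligible slide: no change
    ((1.0, 3.0), 0.05),   # small slide: 5% size variation
    ((3.0, 6.0), 0.10),
    ((6.0, 10.0), 0.20),
]

def size_variation(displacement):
    """Return the size variation whose preset interval contains the
    displacement magnitude; None when the displacement belongs to no
    preset interval."""
    for (lo, hi), delta in SIZE_TABLE:
        if lo <= displacement < hi:
            return delta
    return None
```

The returned variation would then scale the three-dimensional size at the target position.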
For example, in the case of a portrait image in which the user needs to process a flat nose or flattened hair, fingers are placed on the nose or hair of the three-dimensional model to determine the target position. The nose can then be heightened by a stretching operation, the collapsed hair on top can be pulled up, or the wings of the nose can be narrowed by a shortening operation. During the stretching/shortening operation, the target position on the image changes with the change of the slide input.
In an embodiment of the present application, fig. 3 shows a third flowchart of an image processing method according to an embodiment of the present application, and step 202, identifying a motion start point and a motion end point of a first input includes:
step 302, capturing a projection of a first input on a three-dimensional model;
step 304, generating a motion track of a first input according to the projection;
and step 306, determining a motion starting point and a motion ending point according to the motion track.
In this embodiment, the positions of the projection pixel points of the user's first input on the three-dimensional model are captured by the photosensitive elements, and the plurality of projection pixel point positions are connected to generate the motion trail of the first input. The motion start point and motion end point of the first input can be identified from the motion trail to determine the displacement between the two points, so the size variation required by the user is accurately identified. The three-dimensional size of the three-dimensional model is then modified by the size variation, realizing three-dimensional editing and helping to improve the overall or local stereoscopic impression and delicate feeling of the image.
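The trail-to-displacement computation can be sketched as follows; representing the captured projection positions as ordered 2-D `(x, y)` pairs is an assumption made for illustration:

```python
import math

def trajectory_endpoints(points):
    """Given an ordered list of captured projection positions
    ((x, y) pairs connected into a motion trail), return the motion
    start point, the motion end point, and the displacement between
    them expressed as a distance plus a direction angle."""
    start, end = points[0], points[-1]
    dx, dy = end[0] - start[0], end[1] - start[1]
    distance = math.hypot(dx, dy)       # magnitude of the displacement
    direction = math.atan2(dy, dx)      # direction, in radians
    return start, end, distance, direction
```

The distance would then be compared against the preset displacement intervals, and the direction would decide stretching versus shortening.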
In one embodiment of the present application, fig. 4 shows a fourth flowchart of an image processing method of an embodiment of the present application, and step 302, capturing a projection of the first input on the three-dimensional model, includes:
step 402, collecting brightness data of the three-dimensional model;
and step 404, determining projection according to the position corresponding to the brightness data smaller than or equal to the preset threshold value.
In this embodiment, the photosensitive element array acquires luminance data at different positions of the three-dimensional model. When the luminance data is less than or equal to a preset threshold, the position is blocked and may be the position of the user's sliding operation; the position corresponding to such luminance data is recorded as a projection position. The motion trail of the user input can then be determined from the set of projections, so that the size variation required by the user is accurately identified and the three-dimensional size of the three-dimensional model is modified accordingly, realizing three-dimensional editing.
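A minimal sketch of the luminance-threshold test, assuming for illustration that the photosensitive element array is represented as a 2-D luminance matrix:

```python
import numpy as np

def occluded_positions(luminance, threshold):
    """Return the (row, col) indices of photosensitive elements whose
    luminance is at or below the threshold -- i.e. positions where the
    projection beam is blocked, e.g. by the user's finger."""
    rows, cols = np.nonzero(luminance <= threshold)
    return list(zip(rows.tolist(), cols.tolist()))
```

The returned positions form the set of projections from which the motion trail is reconstructed.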
In an embodiment of the present application, fig. 5 shows a fifth flowchart of an image processing method according to an embodiment of the present application, and step 108, adjusting the three-dimensional size of the target position of the three-dimensional model according to the input parameters of the first input, includes:
step 502, displaying the numerical value of the three-dimensional size and the size threshold;
in this embodiment, after the three-dimensional size of the three-dimensional model is identified, the numerical value of the three-dimensional size and the corresponding size threshold are displayed on the electronic device so that the user knows the current size parameters and the modifiable size range of the object or person in the target image. Therefore, users can reasonably repair the image according to the numerical value of the three-dimensional size and the size threshold, the image repairing quality is improved, and the image repairing difficulty is reduced.
It should be noted that the size threshold may be the maximum and minimum values of the three-dimensional size of the three-dimensional model, or may be an equal-proportion adjustable range in the image that is reasonably set according to requirements. Taking figure retouching as an example, for a face-slimming requirement, the size threshold may be the three-dimensional size of the pixel point plus or minus a preset value. Displaying the size threshold thus prompts the user with a reasonable retouching range, avoids impairing the appearance of the image through excessive modification, and helps reduce the difficulty of retouching.
And step 504, adjusting the three-dimensional size according to the target three-dimensional size value corresponding to the first input.
Wherein the first input is for inputting a target three-dimensional dimension value.
In this embodiment, the first input may be a key input to the electronic device, and the specific value of the target three-dimensional size input by the user, that is, the distance information along the X, Y, and Z axes of the three-dimensional model coordinate system, is obtained through the first input. The value of the three-dimensional size at the target position of the three-dimensional model can then be replaced by the target three-dimensional size value, realizing the three-dimensional editing function of the electronic device on the target image and helping to improve the overall or local three-dimensional effect and delicate feeling of the target image.
In an embodiment of the present application, fig. 6 shows a sixth flowchart of an image processing method according to an embodiment of the present application, including:
step 602, receiving a second input to the three-dimensional model;
In this embodiment, the second input may be an operation performed by the user on the electronic device, or an operation on the projected three-dimensional model that is recognized by the electronic device. The second input includes, but is not limited to, a click input, a key input, a fingerprint input, a swipe input, and a press input. The key input includes, but is not limited to, a single-click input, a double-click input, a long-press input, or a combination key input to a power key, a volume key, or a main menu key of the electronic device. The operation mode in the embodiments of the present application is not particularly limited and may be any realizable mode.
And step 604, responding to the second input, and projecting the three-dimensional model according to the rotation angle corresponding to the second input.
In this embodiment, after projecting the three-dimensional model of the target image in the space where the electronic device is located according to the first depth information, the user may control the three-dimensional model to rotate through a second input to the three-dimensional model. Therefore, a user can check the three-dimensional model in an all-around manner, the target position needing to be edited can be selected, the three-dimensional editing processing is further realized, people or objects in the image are more vivid and stereoscopic, a delicate and perfect image is obtained, and the satisfaction degree of the user on the processed image is effectively improved.
For example, the second input may be a key input by the user to the electronic device, where the key input indicates a specific numerical value of the rotation angle, and the three-dimensional model is projected at that rotation angle to realize the rotation of the three-dimensional model. Alternatively, a rotation angle control may be set on the screen of the electronic device, and the user adjusts the projection angle of the three-dimensional model by clicking the control. In addition, the second input may also be a slide input on the three-dimensional model: the motion trail of the second input is identified through the photosensitive element array, the rotation angle corresponding to the second input is matched from the motion trail, and the three-dimensional model is projected at that rotation angle.
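The rotation of the projected model by the matched angle can be sketched as a standard rotation about a vertical axis; representing the model as an (N, 3) point array, and choosing the Y axis as the rotation axis, are assumptions made for illustration:

```python
import numpy as np

def rotate_y(points, angle_deg):
    """Rotate an (N, 3) point cloud about the vertical (Y) axis by the
    rotation angle matched from the second input, so the projected
    three-dimensional model can be viewed from another side."""
    t = np.deg2rad(angle_deg)
    r = np.array([[np.cos(t), 0.0, np.sin(t)],
                  [0.0,       1.0, 0.0      ],
                  [-np.sin(t), 0.0, np.cos(t)]])
    return points @ r.T
```

The rotated points would then be re-projected by the projection devices.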
In an embodiment of the present application, fig. 7 shows a seventh flowchart of an image processing method according to an embodiment of the present application, where before step 102, acquiring the first depth-of-field information of the target image, the method further includes:
step 702, displaying at least one depth image;
step 704, receiving a third input for at least one depth image;
in this embodiment, the third input of the user to the at least one depth image may be an input of a finger of the user on the depth image, an input of a touch device such as a stylus on the depth image, or the like.
Step 706, in response to a third input, determining a target image from the at least one depth image.
In this embodiment, at least one depth image is displayed on the screen of the electronic device, and the user may select a target image to be modified through a third input to the at least one depth image.
It is understood that a response function for triggering selection by the third input is defined for the electronic device in advance, the response function indicating that there is at least one rule for triggering selection of the target image. When a third input of the user to the electronic device is received, the third input is matched against the rule for selecting the target image, and when the third input meets the rule, the operation of determining the target image from the at least one depth image is triggered in response to the third input. For example, if the rule is defined as double-clicking the depth image, the depth image is taken as the target image when the user double-clicks it. Of course, the rule may also be clicking the depth image and a confirm control, pressing the depth image for a specified time, and the like; the embodiment of the present application is not particularly limited.
Specifically, taking selecting pictures from an album as an example, an album interface, that is, a thumbnail display interface of at least one depth picture, is displayed on the desktop of the electronic device. The user clicks a thumbnail to select the depth picture, and after selection a selected identifier, that is, a check mark ("√"), is displayed on the thumbnail. In addition, the user may tap a thumbnail to enter a large-picture browsing mode so that the picture can be viewed clearly.
In an embodiment of the present application, fig. 8 shows an eighth flowchart of an image processing method according to an embodiment of the present application, where before step 702, displaying the at least one depth image, the method further includes:
step 802, receiving a fourth input to the electronic device;
step 804, responding to the fourth input, and starting a depth camera of the electronic equipment;
The depth camera includes a structured light camera and a general camera.
Step 806, collecting structured light coding information of a depth camera of the electronic device;
in this embodiment, when the electronic device receives the fourth input, the depth camera is turned on to perform shooting by the depth camera. Wherein, the degree of depth camera includes structure light camera and general camera. The structured light camera may include a structured light projector and a structured light sensor. The structured light camera can project light spots, light slits, gratings, grids or stripes to an object to be detected by using a structured light projector, that is, the structured light can also be generated by using coherent light, grating light, diffraction light and the like. And then, the structured light sensor is adopted to acquire the structured light coding information of the measured object, for example, the coded pattern is modulated by the surface of the measured object.
Specifically, the structured light may be Infrared (IR) light.
By way of specific example, the projector includes a flash lamp or a continuous light source.
Step 808, determining second depth-of-field information according to the structured light encoded information;
In this embodiment, since the projected light has a known structure, when it falls on regions of the object at different depths, the structure observed in the acquired image deviates from the original light structure. The change in structure is then converted into the second depth-of-field information by performing a ranging operation on the structured light encoded information.
For example, the structured light may be encoded by spatial encoding, such as De Bruijn sequence encoding; the structured light may also be encoded by time encoding, such as binary coding, Gray coding, and the like. The spatial encoding scheme may project only a single piece of preset structured light encoded information, for example a single frame of a structured light encoded pattern, while the time encoding scheme may project a plurality of different pieces of preset structured light encoded information, for example a plurality of frames of different structured light encoded patterns.
Specifically, in the spatial encoding mode, after the collected structured light encoded information is decoded, it is compared with the preset structured light encoded information to obtain a matching relationship between the two, and the second depth-of-field information is calculated in combination with the triangulation ranging principle. In the time encoding mode, the structured light sensor can collect a plurality of pieces of structured light encoded information modulated by the surface of the moving object; the collected pieces are decoded, and the second depth-of-field information is likewise calculated in combination with the triangulation ranging principle.
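The triangulation ranging principle invoked above reduces, in the simplest rectified case, to the classic disparity relation; treating the projector-sensor baseline, sensor focal length, and observed disparity as given scalars is a simplification for illustration:

```python
def structured_light_depth(baseline, focal_length, disparity):
    """Classic triangulation: given the projector-sensor baseline and
    the sensor focal length (in consistent units), plus the disparity
    between where a code word was projected and where it was observed,
    the depth of the modulating surface is baseline * focal / disparity."""
    if disparity <= 0:
        raise ValueError("disparity must be positive")
    return baseline * focal_length / disparity
```

Applying this per matched code word yields the per-pixel second depth-of-field information.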
And step 810, performing three-dimensional reconstruction by using the second depth information to obtain at least one depth image.
In this embodiment, after the second depth-of-field information is obtained, the three-dimensional size (X, Y, and Z axis coordinates) of each pixel point is generated according to the second depth-of-field information of that pixel point, and three-dimensional reconstruction is then performed according to the three-dimensional sizes to obtain a depth image.
In one embodiment of the present application, as shown in fig. 10, an image processing apparatus 900 includes: an acquiring module 902, configured to acquire first depth-of-field information of a target image; a projection module 904, configured to project a three-dimensional model of the target image in a space where the electronic device is located according to the first depth-of-field information; a receiving module 906, configured to receive a first input of a target position of the three-dimensional model; and a processing module 908, configured to adjust, in response to the first input, a three-dimensional size of the target position of the three-dimensional model according to input parameters of the first input.
In the embodiment, the size of an object in the image is accurately identified through the depth information of the image, the three-dimensional model is projected in the space, and a user can modify the numerical value of the three-dimensional size of the three-dimensional model through the operation on the target position of the three-dimensional model, so that the three-dimensional editing processing is realized, people or objects in the image are more vivid and stereoscopic, the delicate and perfect image is obtained, and the satisfaction degree of the user on the processed image is effectively improved.
Optionally, as shown in fig. 11, the image processing apparatus 900 further includes: an identification module 910, configured to identify a motion start point and a motion end point of the first input; and a determining module 912, configured to determine the displacement between the motion start point and the motion end point and, under the condition that the displacement belongs to a preset displacement interval, determine the size variation corresponding to the displacement according to the corresponding relation between the preset displacement interval and the size variation. The processing module 908 is further configured to adjust the three-dimensional size according to the size variation.
Optionally, the identification module 910 is specifically configured to: capture a projection of the first input on the three-dimensional model; generate a motion trail of the first input according to the projection; and determine the motion start point and the motion end point according to the motion trail.
Optionally, the identification module 910 is specifically configured to: acquire luminance data of the three-dimensional model; and determine the projection according to the positions corresponding to luminance data less than or equal to the preset threshold.
Optionally, as shown in fig. 12, the image processing apparatus 900 further includes: a display module 916, configured to display the numerical value of the three-dimensional size and the size threshold. The processing module 908 is further configured to adjust the three-dimensional size according to a target three-dimensional size value corresponding to the first input, wherein the first input is for inputting the target three-dimensional size value.
Optionally, the receiving module 906 is further configured to receive a second input to the three-dimensional model; the projection module 904 is further configured to project, in response to the second input, the three-dimensional model according to the rotation angle corresponding to the second input.
Optionally, the display module 916 is further configured to display at least one depth image; the receiving module 906 is further configured to receive a third input for the at least one depth image; the acquiring module 902 is further configured to determine the target image from the at least one depth image in response to the third input.
Optionally, the receiving module 906 is further configured to receive a fourth input to the electronic device. The image processing apparatus 900 further includes: a starting module (not shown in the figure), configured to turn on a depth camera of the electronic device in response to the fourth input; and a collection module (not shown in the figure), configured to collect structured light encoded information of the depth camera. The acquiring module 902 is further configured to determine second depth-of-field information according to the structured light encoded information, and to perform three-dimensional reconstruction by using the second depth-of-field information to obtain at least one depth image. The depth camera includes a structured light camera and a general camera.
In this embodiment, when each module of the image processing apparatus 900 performs its respective function, the steps of the image processing method in any of the above embodiments are implemented; the image processing apparatus therefore also provides all the beneficial effects of the image processing method in any of the above embodiments, which are not described herein again.
The image processing apparatus in the embodiment of the present application may be an apparatus, or may be a component, an integrated circuit, or a chip in a terminal. The apparatus may be a mobile electronic device or a non-mobile electronic device. By way of example, the mobile electronic device may be a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a vehicle-mounted electronic device, a wearable device, an ultra-mobile personal computer (UMPC), a netbook, or a personal digital assistant (PDA), and the like, and the non-mobile electronic device may be a server, a network attached storage (NAS), a personal computer (PC), a television (TV), a teller machine, a self-service machine, or the like; the embodiments of the present application are not particularly limited.
The image processing apparatus in the embodiment of the present application may be an apparatus having an operating system. The operating system may be an Android operating system, an iOS operating system, or another possible operating system; the embodiments of the present application are not specifically limited.
In one embodiment of the present application, as shown in fig. 13, there is provided an electronic device 1000 including: a processor 1004, a memory 1002, and a program or instructions stored in the memory 1002 and executable on the processor 1004. When the program or instructions are executed by the processor 1004, the steps of the image processing method provided in any of the above embodiments are implemented; the electronic device 1000 therefore provides all the advantages of the image processing method provided in any of the above embodiments, which are not described herein again.
Fig. 14 is a block diagram of a hardware structure of an electronic device 1200 implementing an embodiment of the present application. The electronic device 1200 includes, but is not limited to: a radio frequency unit 1202, a network module 1204, an audio output unit 1206, an input unit 1208, sensors 1210, a display unit 1212, a user input unit 1214, an interface unit 1216, a memory 1218, a processor 1220, and the like.
Those skilled in the art will appreciate that the electronic device 1200 may further comprise a power supply (e.g., a battery) for supplying power to the various components; the power supply may be logically connected to the processor 1220 via a power management system, so as to manage charging, discharging, and power consumption via the power management system. The electronic device structure shown in fig. 14 does not constitute a limitation of the electronic device; the electronic device may include more or fewer components than those shown, combine some components, or use a different arrangement of components. In the embodiment of the present application, the electronic device includes, but is not limited to, a mobile terminal, a tablet computer, a notebook computer, a palmtop computer, a vehicle-mounted electronic device, a wearable device, a pedometer, and the like.
Theprocessor 1220 is configured to obtain first depth information of the target image; thedisplay unit 1212 is configured to project a three-dimensional model of the target image in a space where the electronic device is located according to the first depth information; theuser input unit 1214 is used for receiving a first input of a target position of the three-dimensional model;processor 1220 is configured to, in response to the first input, adjust a three-dimensional size of a three-dimensional model target location based on input parameters of the first input.
Processor 1220 is further configured to identify a start point and an end point of the movement for the first input; determining a displacement between a motion starting point and a motion ending point; under the condition that the displacement belongs to the preset displacement interval, determining the size variation corresponding to the displacement according to the corresponding relation between the preset displacement interval and the size variation; and adjusting the three-dimensional size according to the size variation.
Further, the processor 1220 is also configured to capture a projection of the first input on the three-dimensional model; generate a motion track of the first input according to the projection; and determine the motion start point and the motion end point according to the motion track.
Further, the processor 1220 is further configured to acquire brightness data of the three-dimensional model, and to determine the projection according to the position corresponding to brightness data less than or equal to a preset threshold.
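The intuition behind the brightness threshold is that a finger or hand placed on the projected model occludes the projector light, darkening the touched points. A minimal sketch of this thresholding step follows; the brightness scale, the cutoff value, and the data layout are assumptions made here for illustration:

```python
# Hypothetical brightness cutoff on a 0-255 scale: points whose
# sampled brightness falls at or below it are treated as occluded
# by the user's input (finger, hand) and form the projection.
BRIGHTNESS_THRESHOLD = 80


def find_projection_points(brightness_map, threshold=BRIGHTNESS_THRESHOLD):
    """Given a mapping of model-point coordinates to sampled brightness,
    return the coordinates whose brightness is <= threshold."""
    return [pos for pos, value in brightness_map.items() if value <= threshold]
```

The returned points can then be ordered over time to form the motion track from which the start and end points are taken.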
Further, the display unit 1212 is also configured to display the numerical value of the three-dimensional size and a size threshold; the processor 1220 is further configured to adjust the three-dimensional size to the target three-dimensional size value corresponding to the first input, wherein the first input is used for inputting the target three-dimensional size value.
Further, the user input unit 1214 is further configured to receive a second input to the three-dimensional model; the display unit 1212 is further configured to, in response to the second input, project the three-dimensional model according to a rotation angle corresponding to the second input.
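Re-projecting the model at the rotation angle of the second input amounts to applying a rotation to the model's point coordinates before projection. A sketch of a rotation about the vertical axis is shown below; the axis choice and function name are illustrative assumptions:

```python
import math


def rotate_y(points, angle_deg):
    """Rotate model points (x, y, z) about the vertical Y axis by
    angle_deg degrees, as a sketch of re-projecting the model at the
    rotation angle corresponding to the second input."""
    a = math.radians(angle_deg)
    cos_a, sin_a = math.cos(a), math.sin(a)
    return [(x * cos_a + z * sin_a, y, -x * sin_a + z * cos_a)
            for x, y, z in points]
```

A 90-degree second input would, for instance, bring the side of the projected object into view, letting the user inspect and edit positions hidden in the original orientation.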
Further, the display unit 1212 is further configured to display at least one depth image; the user input unit 1214 is further configured to receive a third input for the at least one depth image; and the processor 1220 is further configured to determine the target image from the at least one depth image in response to the third input.
Further, the user input unit 1214 is also configured to receive a fourth input to the electronic device; the processor 1220 is further configured to, in response to the fourth input, turn on a depth camera of the electronic device; collect structured-light coding information via the depth camera; determine second depth-of-field information according to the structured-light coding information; and perform three-dimensional reconstruction using the second depth-of-field information to obtain the at least one depth image.
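As general background on the structured-light step (this application does not specify the decoding method): with a calibrated projector-camera pair, the disparity between a projected code's expected and observed positions yields depth by triangulation, roughly depth = baseline x focal / disparity. The constants below are illustrative assumptions, not calibration values of any actual device:

```python
# Hypothetical calibration constants for a projector-camera pair.
BASELINE_MM = 75.0   # assumed projector-to-camera baseline, in millimetres
FOCAL_PX = 580.0     # assumed camera focal length, in pixels


def depth_from_disparity(disparity_px):
    """Triangulate depth (mm) from the disparity (px) between the
    projected structured-light code and its observed position."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return BASELINE_MM * FOCAL_PX / disparity_px


def depth_map(disparities):
    """Convert a mapping of pixel -> disparity into pixel -> depth,
    i.e. the second depth-of-field information per pixel."""
    return {pos: depth_from_disparity(d) for pos, d in disparities.items()}
```

The resulting per-pixel depth values are what the three-dimensional reconstruction step would consume to produce the depth image.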
It should be understood that, in the embodiment of the present application, the radio frequency unit 1202 may be used for transmitting and receiving information, or for transmitting and receiving signals during a call; in particular, it receives downlink data from a base station or sends uplink data to the base station. The radio frequency unit 1202 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low-noise amplifier, a duplexer, and the like.
The network module 1204 provides wireless broadband internet access to the user, such as helping the user send and receive e-mail, browse web pages, and access streaming media.
The audio output unit 1206 may convert audio data received by the radio frequency unit 1202 or the network module 1204, or stored in the memory 1218, into an audio signal and output it as sound. The audio output unit 1206 may also provide audio output related to a specific function performed by the electronic device 1200 (e.g., a call signal reception sound, a message reception sound, and the like). The audio output unit 1206 includes a speaker, a buzzer, a receiver, and the like.
The input unit 1208 is used to receive audio or video signals. The input unit 1208 may include a graphics processing unit (GPU) 5082 and a microphone 5084; the graphics processor 5082 processes image data of still pictures or video obtained by an image capture device (e.g., a camera) in a video capture mode or an image capture mode. The processed image frames may be displayed on the display unit 1212, stored in the memory 1218 (or other storage medium), or transmitted via the radio frequency unit 1202 or the network module 1204. The microphone 5084 may receive sound and process it into audio data; in the phone call mode, the processed audio data may be converted into a format transmittable to a mobile communication base station via the radio frequency unit 1202 for output.
The electronic device 1200 also includes at least one sensor 1210, such as a fingerprint sensor, a pressure sensor, an iris sensor, a molecular sensor, a gyroscope, a barometer, a hygrometer, a thermometer, an infrared sensor, a light sensor, a motion sensor, and the like.
The display unit 1212 is used to display information input by the user or information provided to the user. The display unit 1212 may include a display panel 5122, and the display panel 5122 may be configured in the form of a liquid crystal display, an organic light-emitting diode, or the like.
The user input unit 1214 may be used to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the electronic device. Specifically, the user input unit 1214 includes a touch panel 5142 and other input devices 5144. The touch panel 5142, also referred to as a touch screen, can collect touch operations by a user on or near it. The touch panel 5142 may include two parts: a touch detection device and a touch controller. The touch detection device detects the touch orientation of the user, detects the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into touch point coordinates, sends the coordinates to the processor 1220, and receives and executes commands sent by the processor 1220. The other input devices 5144 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, which are not described in detail herein.
Further, the touch panel 5142 can be overlaid on the display panel 5122; when the touch panel 5142 detects a touch operation on or near it, the touch operation is transmitted to the processor 1220 to determine the type of touch event, and the processor 1220 then provides a corresponding visual output on the display panel 5122 according to the type of touch event. The touch panel 5142 and the display panel 5122 can be provided as two separate components or integrated into one component.
The interface unit 1216 is an interface for connecting an external device to the electronic device 1200. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 1216 may be used to receive input (e.g., data information, power, etc.) from an external device and transmit the received input to one or more elements within the electronic device 1200, or may be used to transmit data between the electronic device 1200 and the external device.
The memory 1218 may be used to store application programs as well as various data. The memory 1218 may mainly include a program storage area and a data storage area: the program storage area may store an operating system, an application program required by at least one function (such as a sound playing function or an image playing function), and the like; the data storage area may store data created according to the use of the mobile terminal (such as audio data or a phonebook). In addition, the memory 1218 may include high-speed random access memory and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device.
The processor 1220 performs the various functions of the electronic device 1200 and processes data by running or executing the applications and/or modules stored in the memory 1218 and by invoking data stored in the memory 1218, thereby monitoring the electronic device 1200 as a whole. The processor 1220 may include one or more processing units; the processor 1220 may integrate an application processor, which mainly handles the operating system, user interface, application programs, and the like, and a modem processor, which mainly handles wireless communication.
In an embodiment of the present application, a readable storage medium is provided, on which a program or instructions are stored; when the program or instructions are executed by a processor, the steps of the image processing method provided in any of the above embodiments are implemented.
In this embodiment, the readable storage medium can implement each process of the image processing method provided in the embodiments of the present application and can achieve the same technical effect; to avoid repetition, details are not described herein again.
The processor is the processor in the communication device of the above embodiment. The readable storage medium includes a computer-readable storage medium, such as a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
The embodiment of the present application further provides a chip, where the chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to run a program or instructions to implement each process of the above image processing method embodiment, and the same technical effect can be achieved.
It should be understood that the chip mentioned in the embodiments of the present application may also be referred to as a system-level chip, a system chip, a chip system, a system-on-chip, or the like.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element. Further, it should be noted that the scope of the methods and apparatus of the embodiments of the present application is not limited to performing the functions in the order illustrated or discussed; the functions may also be performed in a substantially simultaneous manner or in the reverse order, depending on the functions involved. For example, the described methods may be performed in an order different from that described, and various steps may be added, omitted, or combined. In addition, features described with reference to certain examples may be combined in other examples.
Through the above description of the embodiments, those skilled in the art will clearly understand that the methods of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly also by hardware, but in many cases the former is the better implementation. Based on such an understanding, the technical solutions of the present application may be embodied in the form of a software product, which is stored in a storage medium (such as a ROM/RAM, a magnetic disk, or an optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the methods of the embodiments of the present application.
While the embodiments of the present application have been described with reference to the accompanying drawings, it is to be understood that the application is not limited to the precise embodiments described above, which are meant to be illustrative and not restrictive; various changes may be made by those skilled in the art without departing from the spirit and scope of the application as defined by the appended claims.
The above description is only a preferred embodiment of the present application and is not intended to limit the present application, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, improvement and the like made within the spirit and principle of the present application shall be included in the protection scope of the present application.