Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
As noted in the background art, when a user uses the photographing function of an electronic device, the user may need to zoom the shooting preview picture by adjusting the zoom factor. However, if the zoom operation is needed during shooting, the user must retract the electronic device, adjust the zoom, and then shoot, which is particularly inconvenient when shooting with a selfie stick or the like. This series of actions is complicated to operate, takes a long time, and easily causes the user to miss the scene desired to be photographed.
The following describes in detail a shooting method provided by an embodiment of the present invention with reference to the accompanying drawings. An embodiment of the present invention provides a shooting method; referring to fig. 1, fig. 1 shows a schematic flow diagram of a shooting method provided by an embodiment of the present invention. The method is applied to an electronic device and includes the following steps:
S101, acquiring a target area, focused on by a user, in a shooting preview picture through an eye tracking technology;
After the electronic device is held steady, the user gazes at the screen of the electronic device. Eyeball information of the user is acquired through a front camera of the electronic device, and eye tracking is performed according to the eyeball information, so that the user's gaze area is determined. The gaze area is the target area, and the focus center for subsequent shooting can be determined from the target area. In order to improve recognition accuracy, while gazing at the screen the user's line of sight needs to stay on the area to be selected for a certain time, for example 3 s.
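The dwell-based selection described above can be sketched as follows. This is a minimal illustration, not the actual implementation: the 3 s threshold follows the example in the text, while the sample format (region identifiers instead of raw gaze coordinates) is an assumption.

```python
def select_target_area(gaze_samples, dwell_threshold_s=3.0):
    """Return the region the user's gaze has stayed on for at least
    dwell_threshold_s seconds, or None if no region qualifies.

    gaze_samples: list of (timestamp_seconds, region_id) tuples
    (a hypothetical format; real eye trackers report raw coordinates).
    """
    current_region = None
    dwell_start = None
    for timestamp, region in gaze_samples:
        if region != current_region:
            # Gaze moved to a new region: restart the dwell timer.
            current_region, dwell_start = region, timestamp
        elif timestamp - dwell_start >= dwell_threshold_s:
            return region
    return None
```

A region is only returned once the gaze has remained on it continuously for the threshold duration, which mirrors the "stay for a certain time, for example, 3s" requirement.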
In some embodiments, in order to help the user visually confirm the position of the target area, marking information for the target area may be displayed in the shooting preview picture. For example, the marking information may be a translucent overlay displayed on the target area. As shown in fig. 2, fig. 2 is a schematic diagram of marking information for a target area according to an embodiment of the present invention.
It should be noted that eye tracking is dynamic: when the user's eyeball rotates, the target area moves accordingly. This is reflected in the marking information as follows: the marking information displayed in the shooting preview picture moves as the user's gaze position moves.
S102, adjusting the zoom factor according to the position of a first target object in the target area in the shooting preview picture and a preset correspondence between positions and zoom factors;
the correspondence relationship between the positions and the zoom magnifications here means that different positions are respectively provided with different zoom magnifications, for example, the zoom magnification set at position 1 is × 2. The specific corresponding relationship between the position and the zoom factor may be preset according to the size of the screen of the electronic device, and is not limited herein.
The zoom factor adjustment here refers to an operation of scaling the shooting preview screen.
S103, receiving a first input of a user;
the first input here refers to an input for controlling the start of shooting.
In one embodiment, the first input may be a target behavior action of the user, such as a head action (a nod or a shake) or a blink. The target behavior action of the user acquired by the front camera is taken as the first input. This simplifies the user's operation for the first input.
S104, in response to the first input, shooting the shooting preview picture after the zoom factor adjustment to obtain a first shot image.
In the embodiment of the present invention, after the target area focused on by the user in the shooting preview picture is obtained through eye tracking, the zoom factor is adjusted according to the position of the first target object in the target area, and the zoomed shooting preview picture is then shot. Thus, in this embodiment, zoom factor adjustment of the shooting preview picture can be achieved directly through the user's eye movement, without the user manually adjusting the zoom factor of the electronic device, which simplifies the zoom operation and improves the convenience and speed of shooting.
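One pass of the S101-S104 flow can be sketched as below. Every callable stands in for a hypothetical device interface; none of these function names come from the source.

```python
def shoot_with_eye_tracking(track_gaze, locate_target, zoom_for_position,
                            set_zoom, wait_first_input, capture):
    """Sketch of the S101-S104 flow under assumed device interfaces."""
    target_area = track_gaze()              # S101: eye-tracking target area
    position = locate_target(target_area)   # position of the first target object
    set_zoom(zoom_for_position(position))   # S102: zoom factor adjustment
    if wait_first_input():                  # S103: first input (e.g. a nod or blink)
        return capture()                    # S104: shoot the zoomed preview picture
    return None
```

Separating the steps into callables keeps the sketch testable; a real implementation would of course drive the camera pipeline directly.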
In some embodiments of the present invention, before S101, the method may further include: receiving a trigger input of a user to a first identifier in a shooting preview picture;
At this time, S101 may include: in response to the trigger input, acquiring, through an eye tracking technology, the target area focused on by the user in the shooting preview picture.
In this embodiment, the first identifier is a control identifier for controlling the electronic device to enter a specific shooting mode, in which the electronic device performs image shooting through the flow in fig. 1 described above. The shooting mode may be set as a "snapshot" mode, as shown in fig. 3; fig. 3 is a schematic diagram of the first identifier provided in the embodiment of the present invention. The trigger input here may be an input of tapping the first identifier. In this way, the user can choose whether to perform zoom shooting through the flow in fig. 1, ensuring that the user can autonomously select a desired shooting mode.
Further, after the user enters the "snapshot" mode, if the user enters the mode for the first time, that is, if the trigger input on the first identifier is received for the first time, the following setting operations may be performed, and after they are completed, the process proceeds to S101. If the user does not enter the mode for the first time, the process proceeds directly to S101. The setting operations may include:
the method for setting the preset scene comprises the following steps:
the first method is as follows:
receiving a second input of a user to a first scene in preset default scenes;
and in response to a second input, taking the first scene as a preset scene. Here, the first scenario may include one or more scenarios.
That is, the electronic device is provided with some default scenes in advance, such as classroom, beach, etc., as shown in fig. 4, fig. 4 is a schematic view of a setting interface of a preset scene provided by an embodiment of the present invention. The user can directly select from the default scenes, and the mode is convenient and quick.
The second method is as follows:
receiving a fourth input of the user on a first image saved in advance;
and responding to the fourth input, identifying scene information in the first image, and determining a preset scene according to the scene information.
Namely, the user can select a first image from the album of the electronic device, and the electronic device recognizes the scene in the first image as the preset scene. As shown in fig. 5, fig. 5 is a schematic view of a setting interface of another preset scenario provided in the embodiment of the present invention. This way, the selection range of the preset scene is large.
After the setting of the preset scene is completed, at least one preset scene list can be stored.
Setting a preset object contained in a preset scene, comprising:
the first method is as follows: when the preset scene is a first scene selected from default scenes:
receiving a third input of the user on a first object among the default objects associated with the first scene;
and responding to a third input, and taking the first object as a preset object contained in the preset scene.
That is, the electronic device may set at least one default object it contains for each default scene in advance. The subsequent user may select at least one first object from at least one default object corresponding to each preset scene as a preset object included in the preset scene. As shown in fig. 6, fig. 6 is a schematic view of a setting interface of a preset object according to an embodiment of the present invention. If the preset scene is a classroom, the default objects may include a blackboard, a curtain, a projector, a desk, a door, and the like. The arrangement is simple and quick.
The second method is as follows: when the preset scene is the scene identified from the first image:
identifying object information in the first image;
receiving a fifth input of the user to a second object in the object information;
and responding to a fifth input, and taking the second object as a preset object contained in the preset scene.
By identifying the object information in the first image, at least one selectable object can be provided to the user, and the user can select at least one second object from the identified objects as a preset object included in the preset scene corresponding to the first image. As shown in fig. 7, fig. 7 is a schematic view of another setting interface of preset objects according to an embodiment of the present invention; for example, the first image is a classroom, and a curtain, a projector, a desk, and a bookcase are identified from it, from which the user can select at least one as a preset object included in the classroom scene. In this way, the user can also select objects that are not very common in the preset scene as preset objects, giving the user a wider selection range.
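The preset-scene configuration built up by the two setting methods can be sketched as a simple mapping from scenes to object lists. The classroom objects follow the text's example; the beach entries and the helper name are invented placeholders.

```python
# Illustrative preset-scene configuration. Only the classroom objects
# come from the text; the beach entries are invented placeholders.
PRESET_SCENES = {
    "classroom": ["blackboard", "curtain", "projector", "desk", "door"],
    "beach": ["parasol", "deck chair"],
}

def add_preset_object(scene, obj, scenes=PRESET_SCENES):
    """Second method: add an object identified from a user-selected image
    as a preset object of the given scene (duplicates are ignored)."""
    scenes.setdefault(scene, [])
    if obj not in scenes[scene]:
        scenes[scene].append(obj)
```

The first setting method corresponds to picking entries that already exist in the mapping; the second method appends newly identified objects, which is how less common objects can enter a scene's preset list.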
In some embodiments of the present invention, based on the preset scene and the preset object set above, as shown in fig. 8, fig. 8 is a schematic flowchart of another shooting method provided in an embodiment of the present invention.
The method may further comprise:
S201 is similar to S101 in fig. 1 and is not described herein again.
S202, matching the shooting preview picture with at least one preset scene respectively, and taking the successfully matched target preset scene as a target scene corresponding to the shooting preview picture;
S203, determining a target preset object in the target area in the target scene according to at least one preset object contained in the target scene, and taking the target preset object as the first target object. S204-S206 are similar to S102-S104 in fig. 1 and are not described herein again.
According to the preset scene and the preset object, the first target object in the user target area can be quickly identified and obtained, and therefore the zooming speed of the image is improved.
In addition, since there may be a case where a plurality of preset scenes are matched in the scene matching process, to solve the above problem, the setting process may further include:
setting the priority of the preset scene, including:
receiving a sixth input of the user on an upward moving control or a downward moving control corresponding to the target preset scene under the condition that a scene list interface is displayed;
wherein the scene list interface includes: an identifier of each preset scene and the upward moving control and downward moving control corresponding to each preset scene, and the preset scenes are displayed in the scene list interface in order of the priorities of the corresponding preset scenes;
and responding to a sixth input, and moving the position of the target preset scene in the scene list interface up or down.
That is, as shown in fig. 9 (fig. 9 is a schematic view of a scene priority setting interface according to an embodiment of the present invention), the user taps the upward moving control or downward moving control corresponding to a preset scene to adjust the order of the preset scenes; in the scene list interface, the uppermost preset scene may be set to have the highest priority and the lowermost preset scene the lowest priority.
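The move-up/move-down reordering used by both the scene list and the later object list can be sketched as below; the function name and list representation are assumptions.

```python
def move_item(items, name, direction):
    """Move `name` one step within the priority list: direction -1 moves it
    up (higher priority), +1 moves it down. The topmost item has the
    highest priority; moves past either end are ignored."""
    i = items.index(name)
    j = i + direction
    if 0 <= j < len(items):
        items[i], items[j] = items[j], items[i]
    return items
```

Each tap on an upward or downward moving control corresponds to one call with direction -1 or +1 respectively.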
Based on the foregoing embodiment, in a case where at least two target preset scenes are matched, in an embodiment, the foregoing S202 may further include: taking, among the at least two target preset scenes, the one with the highest preset scene priority as the target scene.
In this embodiment, the target scene is screened according to the preset scene priority corresponding to the preset scene, so that the finally obtained target scene meets the user requirement as much as possible.
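The priority-based screening of matched scenes can be sketched as follows, assuming the priority list is ordered as in the scene list interface (topmost first):

```python
def pick_target_scene(matched_scenes, priority_list):
    """Given the set of preset scenes that matched the preview picture and
    the priority-ordered scene list (index 0 = highest priority), return
    the matched scene with the highest priority, or None."""
    for scene in priority_list:
        if scene in matched_scenes:
            return scene
    return None
```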
In addition, since the user's target area may contain a plurality of objects, in order to facilitate the user selecting the first target object of most interest from among them, based on the foregoing embodiment, after the operation of setting the preset objects is completed, the method may further include: setting the preset object priority of each preset object contained in the preset scene according to a received priority setting input of the user. That is, the aforementioned setting process may further include:
fourthly, presetting the priority setting of the objects, comprising:
receiving a seventh input of a user to an upward moving control or a downward moving control corresponding to a target preset object in the object list interface under the condition that the object list interface corresponding to the target preset scene is displayed;
the object list interface comprises an identifier of each preset object contained in a target preset scene and an upward moving control or a downward moving control corresponding to each preset object, and the preset objects are displayed in the object list interface according to the priority sequence of the corresponding preset objects;
and responding to a seventh input, and moving the position of the target preset object in the object list interface up or down.
As shown in fig. 10, fig. 10 is a schematic view of an object priority setting interface according to an embodiment of the present invention. The user taps the upward moving control or downward moving control corresponding to a preset object to adjust the order of the preset objects; in the object list interface, the uppermost preset object may be set to have the highest priority and the lowermost preset object the lowest priority.
Based on the preset object priority, in an embodiment of the present invention, as shown in fig. 11, fig. 11 is a schematic flow chart of another shooting method provided in the embodiment of the present invention. The method may further comprise:
S301 is similar to S101 in fig. 1 and is not described herein again.
S302, matching the shooting preview picture with at least one preset scene respectively, and taking the successfully matched target preset scene as a target scene corresponding to the shooting preview picture;
S303, identifying that the target area in the target scene includes at least two preset objects;
s304, displaying preselection marks corresponding to at least two preset objects respectively according to the preset object priorities of the at least two preset objects; the higher the priority of the preset object is, the larger the preselection mark corresponding to the preset object is;
The shape of the preselection mark may be a circle, a rectangle, or another shape.
S305, taking the preset object corresponding to the target preselection mark gazed at by the user as the first target object. S306-S308 are similar to S102-S104 in fig. 1 and are not described herein again.
As shown in fig. 12, fig. 12 is a schematic diagram of preselection marks according to an embodiment of the present invention. In this embodiment, preselection marks are set on the preset objects identified in the target area, so that the user can easily see which preset objects are currently identified and select, from among them, the object the user actually wants to shoot. For example, if a blackboard and a curtain are identified within the target area, the user may select the blackboard as the first target object. This improves the accuracy of first-target-object identification. In addition, in this embodiment, the higher the priority of a preset object, the larger its preselection mark, and a larger preselection mark enables the user to select the desired object more quickly and accurately.
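The "higher priority, larger mark" rule can be sketched as a size assignment over the priority-ordered object list. Only that rule comes from the text; the pixel values and function name are assumptions.

```python
def preselection_mark_sizes(objects_by_priority, base_px=40, step_px=10):
    """Assign a preselection-mark size to each identified preset object so
    that a higher-priority object (earlier in the list) gets a larger mark.
    The base and step pixel values are assumed, not from the source."""
    n = len(objects_by_priority)
    return {obj: base_px + step_px * (n - 1 - i)
            for i, obj in enumerate(objects_by_priority)}
```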
In still other embodiments of the present invention, after the above S102, the minimum distance from the first target object to the boundary of the shooting preview picture reaches the preset pixel threshold.
In the embodiment of the present invention, the purpose of associating the zoom factor with the position of the first target object is to prevent the first target object from exceeding the display range of the shooting preview picture after zooming. The correspondence between positions and zoom factors should therefore ensure that, after zooming, the first target object is located as close to the middle of the picture as possible, and that its distance from any boundary of the shooting preview picture can reach a preset pixel threshold, for example 20 pixels, as shown in fig. 13 (fig. 13 is a position schematic diagram of the first target object provided in the embodiment of the present invention). For example, if the minimum distance between the first target object and the boundary of the shooting preview picture is less than the preset pixel threshold, the shooting preview picture is reduced, that is, the zoom factor is less than 1, as shown in fig. 14; if the distance between the first target object and any boundary of the shooting preview picture is greater than the preset pixel threshold, the shooting preview picture is enlarged, that is, the zoom factor is greater than 1, as shown in fig. 15 (figs. 14 and 15 are further position schematic diagrams of the first target object provided in the embodiment of the present invention). In this way, in the finally captured image, the first target object is prevented from being too close to, or beyond, the picture boundary, and the image capturing effect can be improved.
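The boundary-distance rule above can be sketched as a small decision function. The 20-pixel threshold follows the example in the text; the function name and return labels are assumptions.

```python
def zoom_direction(min_boundary_distance_px, pixel_threshold=20):
    """Decide the zoom adjustment from the first target object's minimum
    distance to the preview-picture boundary (20 px threshold follows the
    example in the text)."""
    if min_boundary_distance_px < pixel_threshold:
        return "zoom_out"  # zoom factor < 1: shrink the preview picture
    if min_boundary_distance_px > pixel_threshold:
        return "zoom_in"   # zoom factor > 1: enlarge the preview picture
    return "keep"          # distance already equals the threshold
```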
To avoid affecting the sharpness of the captured image when the picture is enlarged purely by zooming because the first target object is too far away from the electronic device, in some implementations of the present invention, after S102 and before S103, the method may further include:
and displaying first prompt information under the condition that the proportion of the first target object in the shooting preview picture is smaller than a preset proportion threshold, wherein the first prompt information is used for prompting a user to move the electronic equipment towards the direction close to the first target object.
In this embodiment, the user is prompted to move the electronic device toward the first target object; by approaching the first target object, its proportion in the shooting preview picture is increased, thereby improving the shooting effect. The specific display content of the first prompt information is not limited in the present invention.
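The proportion check can be sketched as below. Both the threshold value and the prompt wording are assumptions; the source leaves them unspecified.

```python
def first_prompt(object_ratio, ratio_threshold=0.1):
    """Return first prompt information when the first target object occupies
    too small a share of the preview picture, else None. The 0.1 threshold
    and the message text are assumed values."""
    if object_ratio < ratio_threshold:
        return "Move the device closer to the subject"
    return None
```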
To avoid the problem that the shooting effect is not clear enough when the first target object is too close (for example, the first target object keeps moving toward the user until the situation shown in fig. 16 occurs), in another implementation of the present invention, after S101 and before S102, the method may further include:
and under the condition that the distance from the first target object to any boundary of the shooting preview picture is smaller than a preset pixel threshold value, starting the wide-angle camera, and displaying second prompt information, wherein the second prompt information is used for prompting a user to control the electronic equipment to move towards a direction far away from the first target object.
In this embodiment, when the first target object appears too large, turning on the wide-angle camera allows the subject to be captured as completely as possible, and reminding the user to hold the electronic device steady and move backward provides a larger viewing range, so that a safe distance can be kept between the first target object and the border of the shooting preview picture, thereby improving the shooting effect. The content of the second prompt information may be as shown in fig. 16.
Since at least two preset objects, such as A and B, may be included in the target area focused on by the user, in the foregoing embodiment only one of the preset objects is selected as the first target object, for example A, and zooming and shooting are performed based on the position of A.
Therefore, in order to ensure that other preset objects can also be photographed, meeting the user's need to photograph a plurality of objects, the present invention further provides another embodiment in which, after S104, the method may further include:
under the condition that at least one second target object, other than the first target object, among the at least two preset objects is in the current shooting preview picture, adjusting the zoom factor of the shooting preview picture according to the position of the second target object in the shooting preview picture and the correspondence between positions and zoom factors;
and shooting the shooting preview picture after the zoom factor adjustment to obtain a second shot image.
That is, in this embodiment, assume that the target area gazed at by the user includes two preset objects A and B and that the first target object is A. After zoom shooting is performed based on the position of A, each frame of image acquired by the main camera is analyzed again; if B is still within the viewing range of the main camera, the zoom factor is adjusted again based on the position of B, and autofocus shooting is performed. In this process, the user does not need to manually tap the shooting button. If B is not within the viewing range of the main camera, for example because the user has moved the lens or B has moved out of the viewing range, B is no longer captured. The user can view the second shot image in the album.
In addition, when three or more preset objects are present in the user's target area, whether each preset object is still within the viewing range of the main camera can be checked in sequence, in order from the highest to the lowest priority of the preset objects in the target scene. If a checked preset object is still in view, zooming and shooting are performed based on it, and then the next preset object is checked, until all preset objects have been processed.
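The sequential processing of the remaining preset objects can be sketched as follows. The two callables are hypothetical stand-ins for the device's in-view detection and capture steps, not APIs from the source.

```python
def shoot_remaining_objects(objects_by_priority, in_main_camera_view,
                            zoom_and_shoot):
    """Process the remaining preset objects from highest to lowest priority:
    each object still within the main camera's viewing range is zoomed to
    and shot; objects out of view are skipped."""
    captured = []
    for obj in objects_by_priority:
        if in_main_camera_view(obj):
            captured.append(zoom_and_shoot(obj))
    return captured
```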
According to this embodiment, preset objects that were within the previous target area but not selected by the user are also shot separately, serving as alternative images for the user to choose from; the user does not need to shoot them separately, which makes it convenient to obtain the desired images.
Based on the shooting method embodiment provided in the foregoing embodiment, correspondingly, an embodiment of the present invention further provides an electronic device, as shown in fig. 17, where fig. 17 is a schematic structural diagram of an electronic device provided in an embodiment of the present invention, and the electronic device includes:
The area acquisition module 401 is configured to acquire a target area, focused on by a user, in a shooting preview picture through an eye tracking technology;
a first zooming module 402, configured to adjust the zoom factor according to a position of a first target object in the target area in the shooting preview picture and a preset correspondence between positions and zoom factors;
a first receiving module 403, configured to receive a first input of a user;
and a first shooting module 404, configured to, in response to the first input, shoot the shooting preview picture after the zoom factor adjustment to obtain a first shot image.
In the embodiment of the present invention, after the target area focused on by the user in the shooting preview picture is obtained through eye tracking, the zoom factor is adjusted according to the position of the first target object in the target area, and the zoomed shooting preview picture is then shot. Thus, in this embodiment, zoom factor adjustment of the shooting preview picture can be achieved directly through the user's eye movement, without the user manually adjusting the zoom factor of the electronic device, which simplifies the zoom operation and improves the convenience and speed of shooting.
In some embodiments, in order to facilitate the user to intuitively confirm the location of the target area, the electronic device may further include: and the marking module is used for displaying the marking information of the target area in the shooting preview picture. For example, the marking information herein shows a translucent area on the target area. Alternatively, the marker information displayed within the photographing preview screen may move as the user's gaze position moves.
Optionally, the first receiving module 403 is specifically configured to acquire a target behavior action of the user, such as a head action (a nod or a shake) or a blink, as the first input. This simplifies the user's operation for the first input.
In some embodiments of the invention, the electronic device may further comprise:
the trigger receiving module is used for receiving the trigger input of a user to the first identifier in the shooting preview picture;
The area acquisition module 401 may be configured to: in response to the trigger input, acquire, through an eye tracking technology, the target area focused on by the user in the shooting preview picture.
This way, the user can select whether to perform zoom shooting through the flow in fig. 1, ensuring that the user can autonomously select a desired shooting mode.
Further, the trigger receiving module may be further configured to enter the setting module if the user enters the mode for the first time and to trigger the area acquisition module 401 after the setting operation is completed; if the user does not enter the mode for the first time, the area acquisition module 401 is triggered directly.
Wherein, the setting module may include:
the first scene setting unit is used for receiving second input of a user to a first scene in preset default scenes; and in response to a second input, taking the first scene as a preset scene. Here, the first scenario may include one or more scenarios. Namely, the electronic device is provided with some default scenes in advance, and the user can directly select from the default scenes, which is convenient and quick.
A first object setting unit for, when the preset scene is a first scene selected from the default scenes: receiving a third input of the first object in the default objects associated with the first scene by the user; and responding to a third input, and taking the first object as a preset object contained in the preset scene.
That is, the electronic device may set at least one default object it contains for each default scene in advance. The subsequent user may select at least one first object from at least one default object corresponding to each preset scene as a preset object included in the preset scene. The arrangement is simple and quick.
Alternatively, the setting module may further include:
The second scene setting unit is used for receiving fourth input of the first image saved in advance by the user; and responding to the fourth input, identifying scene information in the first image, and determining a preset scene according to the scene information. Namely, the user can select a first image from the album of the electronic device, and the electronic device recognizes the scene in the first image as the preset scene. This way, the selection range of the preset scene is large.
A second object setting unit for recognizing object information in the first image; receiving a fifth input of the user to a second object in the object information; and responding to a fifth input, and taking the second object as a preset object contained in the preset scene. In this way, the user can select some objects which are not very common in the preset scene as the preset objects, and the selection range of the user is wider.
Based on the setting module, the electronic device may further include:
the scene matching module is used for respectively matching the shooting preview picture with at least one preset scene and taking the successfully matched target preset scene as a target scene corresponding to the shooting preview picture;
and the object determining module is used for determining a target preset object in a target area in the target scene according to at least one preset object contained in the target scene, and taking the target preset object as a first target object.
According to the preset scene and the preset object, the first target object in the user target area can be quickly identified and obtained, and therefore the zooming speed of the image is improved.
In a specific embodiment, the scene matching module may be configured to:
identifying at least one second target object in the shooting preview picture; matching the at least one second target object with the preset objects contained in each preset scene; and taking the preset scene containing the largest number of successfully matched preset objects as the target scene.
This method identifies the target scene according to the preset objects contained in each preset scene; the amount of calculation is small, and the target scene can be determined as accurately as possible.
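The "largest number of matched preset objects wins" rule can be sketched as below; the function name and data shapes are assumptions.

```python
def match_scene(detected_objects, preset_scenes):
    """Return the preset scene whose preset objects overlap most with the
    objects detected in the preview picture (first scene wins a tie;
    None if nothing matches)."""
    best_scene, best_count = None, 0
    for scene, objects in preset_scenes.items():
        count = len(set(detected_objects) & set(objects))
        if count > best_count:
            best_scene, best_count = scene, count
    return best_scene
```

Ties could instead be broken by the preset scene priorities described above; the sketch simply keeps the first scene encountered.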
In addition, in other embodiments, the setting module may further include:
the scene priority setting module is used for receiving sixth input of the user on an upward moving control or a downward moving control corresponding to the target preset scene under the condition that a scene list interface is displayed; and responding to a sixth input, and moving the position of the target preset scene in the scene list interface up or down. Wherein, include in the scene list interface: and displaying the preset scenes in a scene list interface according to the priority sequence of the corresponding preset scenes by the identification of each preset scene and the upward moving control and the downward moving control corresponding to each preset scene.
Based on the foregoing embodiment, in a case where at least two target preset scenes are matched, in an embodiment, the scene matching module may be further configured to:
and taking the target preset scene with the highest corresponding preset scene priority in the at least two target preset scenes as the target scene.
In this embodiment, the target scene is screened according to the preset scene priority corresponding to the preset scene, so that the finally obtained target scene meets the user requirement as much as possible.
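The tie-break described above can be sketched as a walk down the priority-ordered scene list (a hypothetical helper, assuming the list is ordered from highest to lowest priority):

```python
# When several preset scenes match equally well, take the one ranked
# highest in the priority-ordered scene list as the target scene.

def pick_by_priority(candidate_scenes, priority_order):
    """priority_order lists scenes from highest to lowest priority."""
    for scene in priority_order:
        if scene in candidate_scenes:
            return scene
    return None

print(pick_by_priority({"park", "street"}, ["beach", "street", "park"]))  # street
```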
In other embodiments, the setting module may further include:
the object priority setting module is used for receiving seventh input of a user on an upward moving control or a downward moving control corresponding to a target preset object in the object list interface under the condition that the object list interface corresponding to the target preset scene is displayed; and responding to a seventh input, and moving the position of the target preset object in the object list interface up or down. The object list interface comprises an identifier of each preset object contained in the target preset scene and an upward moving control or a downward moving control corresponding to each preset object, and the preset objects are displayed in the object list interface according to the priority sequence of the corresponding preset objects.
Based on the above embodiment, optionally, the object determination module may be configured to:
identifying that the target area in the target scene comprises at least two preset objects; displaying preselection marks respectively corresponding to the at least two preset objects according to the preset object priorities of the at least two preset objects, wherein the higher the priority of a preset object, the larger its corresponding preselection mark; and taking the preset object corresponding to the target preselection mark watched by the user as the first target object.
In this embodiment, a preselection mark is set on each preset object identified in the target area, so that the user can conveniently see which preset objects are currently identified and select, from among them, the object the user actually wants to shoot. In this way, the accuracy of identifying the first target object is improved. In addition, in this embodiment, the higher the priority of a preset object, the larger its corresponding preselection mark, and a larger preselection mark enables the user to select the desired object more quickly and accurately.
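The mark-sizing and gaze-selection logic can be sketched as below. The size constants and data layout are illustrative assumptions, not values from the embodiment:

```python
# Each preset object found in the target area gets a preselection mark
# whose size grows with the object's priority; the object whose mark the
# user gazes at becomes the first target object.

BASE_SIZE, SIZE_STEP = 20, 10  # assumed pixel sizes, for illustration

def build_marks(objects_with_priority):
    """objects_with_priority: list of (name, priority), higher = more important."""
    return {name: BASE_SIZE + SIZE_STEP * priority
            for name, priority in objects_with_priority}

def select_first_target(marks, gazed_mark):
    """The gazed-at mark's object is the first target object."""
    return gazed_mark if gazed_mark in marks else None

marks = build_marks([("person", 2), ("dog", 1)])
print(marks["person"] > marks["dog"])  # True: higher priority, larger mark
print(select_first_target(marks, "dog"))  # dog
```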
In still other embodiments of the present invention, the correspondence between the position and the zoom factor includes:
after adjusting the zoom factor, the minimum distance from the first target object to the boundary of the shooting preview picture is equal to a preset pixel threshold value.
In the embodiment of the present invention, the purpose of associating the zoom factor with the position of the first target object is to prevent the first target object from exceeding the display range of the shooting preview picture after zooming. Therefore, the correspondence between the position and the zoom factor should be set such that, after zooming, the first target object is located as close to the middle of the screen as possible and its distance from any boundary of the shooting preview picture reaches the preset pixel threshold.
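The correspondence can be worked out concretely under the simplifying assumption that zooming is centered on the first target object: after a zoom of z, an object of width w spans z·w, so the side margin is (frame − z·w)/2, and setting the smaller margin equal to the threshold fixes z. A hedged sketch with illustrative values:

```python
# Choose the zoom factor so that, after zooming, the smaller of the first
# target object's margins to the frame boundary equals the preset pixel
# threshold (assumption: zoom is centered on the object).

def zoom_factor(obj_w, obj_h, frame_w, frame_h, threshold):
    """Largest zoom keeping the object `threshold` px from the nearest boundary."""
    # Margin on each side after zooming by z is (frame - z*obj) / 2;
    # solving (frame - z*obj) / 2 == threshold gives the bounds below.
    z_w = (frame_w - 2 * threshold) / obj_w
    z_h = (frame_h - 2 * threshold) / obj_h
    return min(z_w, z_h)

# 200x100 px object in a 1080x1920 preview, 40 px threshold
z = zoom_factor(200, 100, 1080, 1920, 40)
print(round(z, 2))  # 5.0 -> the width margin hits the threshold first
```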
In some implementations of the invention, the electronic device may further include:
the first prompting module is used for displaying first prompt information under the condition that the proportion of the first target object in the shooting preview picture is smaller than a preset proportion threshold, wherein the first prompt information is used for prompting the user to move the electronic device closer to the first target object.
In this embodiment, the user is prompted to move the electronic device toward the first target object; approaching the first target object increases the proportion of the first target object in the shooting preview picture, thereby improving the shooting effect. The specific display content of the first prompt information is not limited in the present invention.
In further implementations of the invention, the electronic device may further include:
and the second prompting module is used for starting the wide-angle camera and displaying second prompt information under the condition that, before the zoom multiple is adjusted, the distance from the first target object to any boundary of the shooting preview picture is smaller than the preset pixel threshold, wherein the second prompt information is used for prompting the user to move the electronic device away from the first target object.
In this embodiment, when the first target object is too large, the wide-angle camera is turned on to capture the photographed object as completely as possible, and the user is reminded to hold the electronic device steady and move backward to obtain a larger viewing range, so that a safe distance can be maintained between the first target object and the border of the shooting preview picture, thereby improving the shooting effect.
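The two prompts can be combined into one decision, as sketched below; the thresholds and message strings are hypothetical stand-ins for whatever the device actually displays:

```python
# If the first target object is too close to a boundary, switch to the
# wide-angle camera and prompt the user to move away; if it occupies too
# small a share of the preview, prompt the user to move closer.

RATIO_THRESHOLD = 0.05   # assumed minimum share of the preview area
PIXEL_THRESHOLD = 40     # assumed minimum distance to any boundary, px

def choose_prompt(obj_area, frame_area, min_border_dist):
    if min_border_dist < PIXEL_THRESHOLD:
        return "wide-angle on; move away from the subject"
    if obj_area / frame_area < RATIO_THRESHOLD:
        return "move closer to the subject"
    return None  # object well framed: no prompt needed

print(choose_prompt(obj_area=20000, frame_area=1080 * 1920, min_border_dist=100))
# -> move closer to the subject  (20000 / 2073600 is about 0.0096 < 0.05)
```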
The present invention further provides still other embodiments, and optionally, the electronic device may further include:
the second zooming module is used for, under the condition that at least one second target object, other than the first target object, among the at least two preset objects is in the current shooting preview picture, adjusting the zoom multiple of the shooting preview picture according to the position of the second target object in the shooting preview picture and the correspondence between the position and the zoom multiple;
and the second shooting module is used for shooting the shooting preview picture after the zoom multiple adjustment to obtain a second shooting image.
In this embodiment, preset objects that were in the previous target area but were not selected by the user are also photographed separately, as alternative images for the user to choose from; the user does not need to shoot them separately and can conveniently obtain the desired images.
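The alternate-shot loop can be sketched as follows, with the device's zoom and capture routines stood in for by caller-supplied callables (all names here are hypothetical):

```python
# After the first image is taken, each remaining preset object from the
# target area is framed and captured separately as a candidate image.

def capture_alternates(objects_in_area, first_target, zoom_for, capture):
    """zoom_for and capture stand in for the device's zoom/shoot routines."""
    images = []
    for obj in objects_in_area:
        if obj == first_target:
            continue  # already captured as the first image
        zoom_for(obj)                # adjust zoom for this object's position
        images.append(capture(obj))  # second captured image
    return images

shots = capture_alternates(["person", "dog", "bench"], "person",
                           zoom_for=lambda o: None,
                           capture=lambda o: f"img_{o}.jpg")
print(shots)  # ['img_dog.jpg', 'img_bench.jpg']
```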
The electronic device provided in the embodiment of the present invention can implement each method step implemented in the method embodiments of fig. 1, fig. 8, and fig. 11, and is not described here again to avoid repetition.
Fig. 18 is a schematic diagram illustrating a hardware structure of an electronic device according to an embodiment of the present invention.
The electronic device 500 includes, but is not limited to: a radio frequency unit 501, a network module 502, an audio output unit 503, an input unit 504, a sensor 505, a display unit 506, a user input unit 507, an interface unit 508, a memory 509, a processor 510, and a power supply 511. Those skilled in the art will appreciate that the electronic device configuration shown in fig. 18 does not constitute a limitation of the electronic device, and that the electronic device may include more or fewer components than shown, or some components may be combined, or a different arrangement of components. In the embodiment of the present invention, the electronic device includes, but is not limited to, a mobile phone, a tablet computer, a notebook computer, a palm computer, a vehicle-mounted terminal, a wearable device, a pedometer, and the like.
The processor 510 is configured to acquire, through an eye tracking technology, a target area in a shooting preview picture focused by a user; adjust the zoom multiple according to the position of a first target object in the target area in the shooting preview picture and the preset correspondence between the position and the zoom multiple; receive a first input of the user through the user input unit 507; and, in response to the first input, shoot the shooting preview picture after the zoom multiple adjustment to obtain a first captured image.
In the embodiment of the invention, after the target area focused by the user in the shooting preview picture is obtained through the eye tracking technology, the zoom multiple of the shooting preview picture is adjusted according to the position of the first target object in the target area, and the zoomed shooting preview picture is then shot. Therefore, in this embodiment, the zoom multiple of the shooting preview picture can be adjusted directly through the eyeball action of the user, without the user manually adjusting the zoom multiple of the electronic device, which simplifies the user's zoom operation and improves the convenience and speed of shooting.
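The overall flow performed by the processor 510 can be summarized as the pipeline below. The helper names are hypothetical placeholders for the device routines described in this document, not APIs named by the embodiment:

```python
# Gaze gives the target area, the first target object's position maps to
# a zoom multiple via the preset correspondence, and the user's first
# input triggers the capture.

def shooting_flow(track_gaze, find_object, position_to_zoom,
                  wait_first_input, capture):
    target_area = track_gaze()            # S101: region the user watches
    position = find_object(target_area)   # first target object's position
    zoom = position_to_zoom(position)     # preset position -> zoom correspondence
    wait_first_input()                    # user's first input
    return capture(zoom)                  # first captured image
```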
It should be understood that, in the embodiment of the present invention, the radio frequency unit 501 may be used for receiving and sending signals during a message sending and receiving process or a call process; specifically, it receives downlink data from a base station and sends the received downlink data to the processor 510 for processing, and transmits uplink data to the base station. In general, the radio frequency unit 501 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 501 can also communicate with a network and other devices through a wireless communication system.
The electronic device provides wireless broadband internet access to the user via the network module 502, such as assisting the user in sending and receiving e-mails, browsing web pages, and accessing streaming media.
The audio output unit 503 may convert audio data received by the radio frequency unit 501 or the network module 502, or stored in the memory 509, into an audio signal and output it as sound. Also, the audio output unit 503 may provide audio output related to a specific function performed by the electronic device 500 (e.g., a call signal reception sound, a message reception sound, etc.). The audio output unit 503 includes a speaker, a buzzer, a receiver, and the like.
The input unit 504 is used to receive an audio or video signal. The input unit 504 may include a graphics processing unit (GPU) 5041 and a microphone 5042; the graphics processor 5041 processes image data of a still picture or video obtained by an image capturing device (e.g., a camera) in a video capturing mode or an image capturing mode. The processed image frames may be displayed on the display unit 506. The image frames processed by the graphics processor 5041 may be stored in the memory 509 (or other storage medium) or transmitted via the radio frequency unit 501 or the network module 502. The microphone 5042 may receive sounds and process them into audio data. In the phone call mode, the processed audio data may be converted into a format transmittable to a mobile communication base station and output via the radio frequency unit 501.
The electronic device 500 also includes at least one sensor 505, such as light sensors, motion sensors, and other sensors. Specifically, the light sensor includes an ambient light sensor that can adjust the brightness of the display panel 5061 according to the brightness of ambient light, and a proximity sensor that can turn off the display panel 5061 and/or the backlight when the electronic device 500 is moved to the ear. As one type of motion sensor, an accelerometer sensor can detect the magnitude of acceleration in each direction (generally three axes), detect the magnitude and direction of gravity when stationary, and can be used to identify the posture of the electronic device (such as horizontal and vertical screen switching, related games, and magnetometer posture calibration) and vibration identification related functions (such as pedometer and tapping); the sensors 505 may also include a fingerprint sensor, a pressure sensor, an iris sensor, a molecular sensor, a gyroscope, a barometer, a hygrometer, a thermometer, an infrared sensor, etc., which are not described in detail herein.
The display unit 506 is used to display information input by the user or information provided to the user. The display unit 506 may include a display panel 5061, and the display panel 5061 may be configured in the form of a liquid crystal display (LCD), an organic light-emitting diode (OLED), or the like.
The user input unit 507 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the electronic device. Specifically, the user input unit 507 includes a touch panel 5071 and other input devices 5072. The touch panel 5071, also referred to as a touch screen, may collect touch operations by a user on or near it (e.g., operations by a user on or near the touch panel 5071 using a finger, a stylus, or any suitable object or attachment). The touch panel 5071 may include two parts: a touch detection device and a touch controller. The touch detection device detects the touch direction of the user, detects a signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives touch information from the touch detection device, converts it into touch point coordinates, sends the coordinates to the processor 510, and receives and executes commands sent by the processor 510. In addition, the touch panel 5071 may be implemented in various types, such as a resistive type, a capacitive type, an infrared type, and a surface acoustic wave type. In addition to the touch panel 5071, the user input unit 507 may include other input devices 5072. In particular, the other input devices 5072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, which are not described in detail herein.
Further, the touch panel 5071 may be overlaid on the display panel 5061; when the touch panel 5071 detects a touch operation on or near it, the touch operation is transmitted to the processor 510 to determine the type of the touch event, and the processor 510 then provides a corresponding visual output on the display panel 5061 according to the type of the touch event. Although in fig. 18 the touch panel 5071 and the display panel 5061 are two independent components implementing the input and output functions of the electronic device, in some embodiments the touch panel 5071 and the display panel 5061 may be integrated to implement the input and output functions of the electronic device, which is not limited herein.
The interface unit 508 is an interface for connecting an external device to the electronic device 500. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 508 may be used to receive input (e.g., data information, power, etc.) from an external device and transmit the received input to one or more elements within the electronic device 500, or may be used to transmit data between the electronic device 500 and the external device.
The memory 509 may be used to store software programs as well as various data. The memory 509 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the data storage area may store data created according to the use of the mobile phone (such as audio data, a phonebook, etc.), and the like. Further, the memory 509 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or other non-volatile solid-state storage device.
The processor 510 is the control center of the electronic device; it connects various parts of the whole electronic device using various interfaces and lines, and performs various functions of the electronic device and processes data by running or executing software programs and/or modules stored in the memory 509 and calling data stored in the memory 509, thereby monitoring the electronic device as a whole. The processor 510 may include one or more processing units; preferably, the processor 510 may integrate an application processor, which mainly handles the operating system, user interfaces, application programs, and the like, and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor may also not be integrated into the processor 510.
The electronic device 500 may further include a power supply 511 (e.g., a battery) for supplying power to the various components; preferably, the power supply 511 may be logically connected to the processor 510 via a power management system, so as to implement functions of managing charging, discharging, and power consumption through the power management system.
In addition, the electronic device 500 includes some functional modules that are not shown, which are not described in detail herein.
Preferably, an embodiment of the present invention further provides an electronic device, which includes a processor 510, a memory 509, and a computer program stored in the memory 509 and runnable on the processor 510; when the computer program is executed by the processor 510, the processes of the foregoing shooting method embodiments are implemented, and the same technical effect can be achieved, which is not described here again to avoid repetition.
The embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the computer program implements each process of the above-mentioned shooting method embodiment, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here. The computer-readable storage medium may be a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.
While the present invention has been described with reference to the embodiments shown in the drawings, the present invention is not limited to the embodiments, which are illustrative and not restrictive, and it will be apparent to those skilled in the art that various changes and modifications can be made therein without departing from the spirit and scope of the invention as defined in the appended claims.