Detailed Description
In order to make the technical solutions better understood by those skilled in the art, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application. It is apparent that the described embodiments are only a part of the embodiments of the present application, not all of them. All other embodiments that a person skilled in the art can derive from the embodiments given herein without creative effort shall fall within the protection scope of the present application.
With the development of science and technology, terminals can realize more and more functions. For example, when the cellular data network of a terminal is enabled, a user may use the terminal's camera to conduct a video call with other users. During the video call, the user can face the camera of the terminal directly, which achieves the effect of looking straight at the other party. In this process, the terminal displays the captured user image on its interface, and when the position of the user's face moves, the user can move or rotate the terminal based on the displayed user image, which improves the user experience.
However, when the position of the user's face moves while the terminal stays still, the face in the user image acquired by the terminal no longer squarely faces the camera, so the eyes of the user in the image do not look at the camera of the terminal. This degrades the shooting effect and harms the user experience.
Similarly, when a user takes a selfie with the terminal and rotates it 90 degrees counterclockwise, the position of the face region in the user image acquired by the terminal changes, so the face region no longer matches the one the user expects, which again degrades the shooting effect and the user experience. The embodiment of the present application therefore provides a shooting method: a user image is acquired through a camera; the position of the face region in the user image is detected; when the position of the face region is not at a preset position, shooting parameters of the camera are adjusted according to the offset between the position of the face region and the preset position; and the camera is controlled to shoot according to the shooting parameters. In this way, the shooting effect of the terminal and the user experience can both be improved.
The technical solution of the embodiments of the present application can be applied to scenarios that use the terminal camera, including but not limited to video calls and image shooting.
Referring to fig. 1, a block diagram of a terminal according to an exemplary embodiment of the present application is shown. A terminal in the present application may include one or more of the following components: a processor 110, a memory 120, an input device 130, an output device 140, and a bus 150. The processor 110, memory 120, input device 130, and output device 140 may be connected by the bus 150.
The processor 110 may include one or more processing cores. The processor 110 connects various parts within the entire terminal using various interfaces and lines, and performs various functions of the terminal 100 and processes data by running or executing instructions, programs, code sets, or instruction sets stored in the memory 120 and calling data stored in the memory 120. Optionally, the processor 110 may be implemented in hardware using at least one of digital signal processing (DSP), field-programmable gate array (FPGA), and programmable logic array (PLA). The processor 110 may integrate one or more of a central processing unit (CPU), a graphics processing unit (GPU), a modem, and the like. The CPU mainly handles the operating system, user interface, application programs, and the like; the GPU is responsible for rendering and drawing display content; and the modem handles wireless communication. It is understood that the modem may not be integrated into the processor 110 but may instead be implemented by a separate communication chip.
The memory 120 may be divided into an operating system space, where the operating system runs, and a user space, where native and third-party applications run. To ensure that different third-party applications run well, the operating system allocates corresponding system resources to them. However, different application scenarios within the same third-party application place different demands on system resources; for example, a local resource loading scenario places high demands on disk read speed, while an animation rendering scenario places high demands on GPU performance. Because the operating system and third-party applications are independent of one another, the operating system cannot sense a third-party application's current scenario in time and therefore cannot adapt system resources to that specific scenario.
The input device 130 is used for receiving input instructions or data, and includes, but is not limited to, a keyboard, a mouse, a camera, a microphone, or a touch device. The output device 140 is used for outputting instructions or data, and includes, but is not limited to, a display device, a speaker, and the like. In one example, the input device 130 and the output device 140 may be combined into a touch display screen, which receives touch operations on or near it performed with any suitable object such as a finger or a stylus, and displays the user interfaces of various applications. The touch display screen is generally provided on the front panel of the terminal. It may be designed as a full screen, a curved screen, or a special-shaped screen, or as a combination of a full screen and a curved screen, or of a special-shaped screen and a curved screen, which is not limited in the embodiments of the present application.
In addition, those skilled in the art will appreciate that the terminal configurations illustrated in the above figures do not limit the terminal: it may include more or fewer components than those illustrated, combine some components, or arrange components differently. For example, the terminal may further include a radio frequency circuit, an input unit, a sensor, an audio circuit, a wireless fidelity (WiFi) module, a power supply, a Bluetooth module, and other components, which are not described again here.
In the embodiments of the present application, the execution subject of each step may be the terminal described above. Optionally, the execution subject of each step is the operating system of the terminal. The operating system may be an Android system, an iOS system, or another operating system, which is not limited in the embodiments of the present application.
The terminal of the embodiments of the present application may also be provided with a display device, which may be any of various devices capable of realizing a display function, for example: a cathode ray tube (CRT) display, a light-emitting diode (LED) display, an electronic ink panel, a liquid crystal display (LCD), a plasma display panel (PDP), and the like. The user can view displayed information such as text, images, and video on the display device of the terminal. The terminal may be a smartphone, a tablet computer, a gaming device, an AR (Augmented Reality) device, an automobile, a data storage device, an audio playing device, a video playing device, a notebook, a desktop computing device, or a wearable device such as an electronic watch, electronic glasses, an electronic helmet, an electronic bracelet, an electronic necklace, or electronic clothing.
In the terminal shown in fig. 1, the processor 110 may be configured to call an application program stored in the memory 120 and specifically execute the shooting method of the embodiments of the present application.
When the terminal detects that the user has clicked the image shooting control, it can acquire a user image through the camera and call a face recognition algorithm to obtain the face region in the user image. When the terminal detects that the position of the face region in the user image is not at the preset position, it can calculate shooting parameters of the camera according to the offset between the position of the face region and the preset position, and then control the camera to shoot according to those shooting parameters.
In the following method embodiments, for convenience of description, the terminal is described as the execution subject of each step.
Fig. 2 is a flowchart illustrating a photographing method according to an embodiment of the present application.
As shown in fig. 2, the photographing method includes:
S101, acquiring a user image through a camera.
According to some embodiments, the camera of the terminal may be a front camera of the terminal. The front camera has an automatic rotation capability and an automatic focusing capability.
It will be readily appreciated that a user may enable the cellular data network of the terminal before using the terminal to conduct a video call with another user; for example, the user may click the enable control of the terminal's cellular data network. When the terminal detects that the user has clicked this control, the cellular data network may be enabled. When the terminal then detects that the user has clicked the video call control for a target user, it can establish a video call with the target user's terminal over the cellular data network.
Optionally, when the terminal establishes a video call with the target user's terminal, it may acquire the user image through the camera. For example, when user A clicks the video call control for user B on the display interface of terminal A, terminal A may establish a video call with terminal B using its cellular data network. Terminal A can then acquire an image of user A through the camera. The display interface of terminal A at this time may be as shown in fig. 3.
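As a minimal illustration of this acquisition step, the sketch below grabs a single frame with OpenCV. The library choice and the assumption that index 0 addresses the terminal's front camera are illustrative assumptions, not part of the embodiment.

```python
# Minimal sketch of acquiring a user image, assuming OpenCV is available
# and that index 0 addresses the terminal's front camera (both assumptions).
import cv2

def capture_user_image(camera_index: int = 0):
    cap = cv2.VideoCapture(camera_index)   # open the camera device
    ok, frame = cap.read()                 # grab one BGR frame
    cap.release()
    return frame if ok else None           # None when no image is acquired
```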
S102, detecting the position of the face region in the user image.
According to some embodiments, when the user moves and the terminal does not, the position of the user in the image acquired through the camera changes, and the user's eyes may no longer face the camera of the terminal. Therefore, when the terminal acquires a user image through the camera, it can identify the face region using a face recognition algorithm, and once the face region is identified, it can detect the position of the face region in the user image.
It is easy to understand that when the user does not move but the terminal does, the position of the user in the image acquired through the camera also changes. In this case as well, the terminal can identify the face region with a face recognition algorithm and, once the face region is identified, detect its position in the user image.
Optionally, when the terminal detects a face region in the user image, it may mark the face region with a rectangular frame that contains it. The terminal may determine the size of the rectangular frame based on the area of the face region. When detecting the position of the face region in the user image, the terminal can take the center point of the rectangular frame, i.e., the intersection of its diagonals, as the position of the face region.
According to some embodiments, when terminal A acquires the image of user A, it may use a face recognition algorithm to recognize the face region in the image. Once the face region is recognized, terminal A may determine the size of the rectangular frame according to the area of the face region and use that frame to mark the face region in user A's image. Terminal A may then take the intersection of the diagonals of the rectangular frame as the position of the face region.
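A hedged sketch of this detection step follows. It uses OpenCV's bundled Haar cascade as an assumed stand-in for the unspecified "face recognition algorithm", marks the face with a rectangular frame, and returns the intersection of the frame's diagonals as the face position.

```python
import cv2

# Haar cascade shipped with opencv-python; an assumed stand-in for the
# embodiment's unspecified face recognition algorithm.
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def face_position(image):
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None                                    # no face region detected
    x, y, w, h = faces[0]                              # frame sized by the face area
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)  # mark the region
    return (x + w // 2, y + h // 2)                    # diagonal intersection = center
```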
According to some embodiments, fig. 4 shows a flowchart of a shooting method of an embodiment of the present application. As shown in fig. 4, the shooting method includes: when the face region includes two face regions, detecting their positions; marking the two face regions with rectangular frames; determining the two center points of the two face regions; and taking the midpoint of the line segment connecting the two center points as the position of the face region. The terminal may also take the midpoint of the line segment connecting the centers of the eye regions of the two faces as the position of the face region.
It is easy to understand that when terminal A detects that the face region includes two face regions, it may mark them with two rectangular frames, whose sizes are determined based on the areas of the two face regions. Terminal A can take the intersections of the diagonals of the two rectangular frames as the center points of the two face regions, and then take the midpoint of the line segment connecting the two center points as the position of the face region, as sketched below. At this time, the display interface of the terminal may be as shown in fig. 5.
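For the two-face case, the midpoint construction can be sketched as follows; the rectangles are assumed to come from the same detector as in the previous sketch.

```python
def two_face_position(rects):
    """rects: two (x, y, w, h) face rectangles, e.g. from detectMultiScale."""
    (x1, y1, w1, h1), (x2, y2, w2, h2) = rects[:2]
    c1 = (x1 + w1 // 2, y1 + h1 // 2)        # center of the first rectangular frame
    c2 = (x2 + w2 // 2, y2 + h2 // 2)        # center of the second rectangular frame
    # position of the face region: midpoint of the segment joining the two centers
    return ((c1[0] + c2[0]) // 2, (c1[1] + c2[1]) // 2)
```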
S103, when the position of the face region is not at the preset position, adjusting shooting parameters of the camera according to the offset between the position of the face region and the preset position.
According to some embodiments, when the terminal obtains the position of the face region in the user image, it can detect whether that position is at a preset position. The preset position may be the position at which the user's eyes look directly at the camera of the terminal, or a position set by the user according to personal preference. When the terminal detects that the position of the face region is not at the preset position, it can obtain the offset between the position of the face region and the preset position and adjust the shooting parameters of the camera based on that offset. The adjusted shooting parameters include the rotation direction and rotation angle of the camera; the rotation angle may range, for example, from 0° to 180°.
It is easy to understand that the preset position set by the terminal may be, for example, the center position C of the user image. When terminal A determines that the face region of user image A is at position D, it detects that position D is not at the preset position C. Terminal A can then obtain the offset between position D and the preset position C; the offset may be, for example, 1 cm in the positive direction of the Y axis. Based on this offset, the terminal can adjust the shooting parameters of the camera; the adjustment may be, for example, a rotation of 15° in the positive direction of the Y axis. The terminal display interface may be as shown in fig. 6.
S104, controlling the camera to shoot according to the shooting parameters.
According to some embodiments, when the terminal has determined the shooting parameters of the camera based on the offset between the position of the face region and the preset position, it may control the camera to shoot according to those parameters. For example, when terminal A adjusts the shooting parameters of the camera to a rotation of 15° in the positive direction of the Y axis based on the offset between position D of the face region of user image A and the preset position C, terminal A may control the camera to rotate 15° in the positive direction of the Y axis and shoot. By controlling the camera to shoot according to the shooting parameters, terminal A can obtain an image in which the face region is at the preset position, which improves the visual experience of the user.
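Putting S103 and S104 together, the following is a sketch under stated assumptions: the preset position is taken as the image center, the pixel offset is mapped to a rotation angle through an assumed calibration constant, and rotate_camera/take_photo are hypothetical driver callbacks, since the embodiment does not specify the camera-motor interface. The sign convention for the rotation direction is likewise an assumption.

```python
DEG_PER_PIXEL = 0.05   # assumed offset-to-angle calibration, not from the source

def _angle(offset_px):
    # rotation magnitude grows with the offset, kept within the 0-180 range
    return min(180.0, abs(offset_px) * DEG_PER_PIXEL)

def adjust_and_shoot(image, face_pos, rotate_camera, take_photo):
    h, w = image.shape[:2]
    preset = (w // 2, h // 2)                 # preset position: image center
    dx = face_pos[0] - preset[0]              # horizontal offset in pixels
    dy = face_pos[1] - preset[1]              # vertical offset in pixels
    if (dx, dy) != (0, 0):
        # rotation direction opposes the offset (assumed geometry)
        pan_dir = -1 if dx > 0 else 1
        tilt_dir = -1 if dy > 0 else 1
        rotate_camera(pan_dir * _angle(dx), tilt_dir * _angle(dy))  # hypothetical driver call
    return take_photo()                       # shoot with the adjusted parameters
```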
The embodiment of the present application provides a shooting method: a user image is acquired through a camera; the position of the face region in the user image is detected; when the position of the face region is not at a preset position, the shooting parameters of the camera are adjusted according to the offset between the position of the face region and the preset position; and the camera is controlled to shoot according to the shooting parameters. With this technical solution, when a user shoots with the terminal and the position of the face region in the user image changes, the terminal can adjust the shooting parameters of the camera according to the offset between the position of the face region and the preset position, bringing the face region back to the preset position. The user does not need to adjust the position of the terminal; the terminal improves the shooting effect by adjusting the shooting parameters of the camera, which in turn improves the user experience.
Fig. 7 is a flowchart illustrating a photographing method according to an embodiment of the present application.
As shown in fig. 7, the photographing method includes:
S301, acquiring a user image through the camera.
The specific process is as described above, and is not described herein again.
S302, acquiring the position of the human eyes in the face region of the user image.
According to some embodiments, when the terminal acquires the user image through the camera, it may obtain the face region of the user image using a face recognition algorithm, and then obtain the eye regions of the face region using a human eye recognition algorithm. Once the eye regions are determined, the terminal can obtain the center points of the two eye regions and take the midpoint of the line segment between the two center points as the position of the human eyes in the face region of the user image.
It is easy to understand that when terminal Q acquires user image Q, it may invoke an AI recognition model to recognize the face region of the image. When terminal Q obtains the two eye regions of the face region using a human eye recognition algorithm, it can mark them with two rectangular frames and take the midpoint of the line segment connecting the centers of the two frames as the position of the human eyes in the face region of user image Q. At this time, the interface display of the terminal may be as shown in fig. 8.
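A sketch of this step is shown below, using OpenCV's bundled eye cascade as an assumed stand-in for the "human eye recognition algorithm" named in the text: it detects two eye rectangles inside the face region and returns the midpoint of the segment joining their centers.

```python
import cv2

# Bundled eye cascade; an assumed stand-in for the human eye recognition algorithm.
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

def eye_position(image, face_rect):
    x, y, w, h = face_rect
    roi = cv2.cvtColor(image[y:y + h, x:x + w], cv2.COLOR_BGR2GRAY)
    eyes = eye_cascade.detectMultiScale(roi, scaleFactor=1.1, minNeighbors=5)
    if len(eyes) < 2:
        return None                                   # need two eye regions
    (ex1, ey1, ew1, eh1), (ex2, ey2, ew2, eh2) = eyes[:2]
    c1 = (x + ex1 + ew1 // 2, y + ey1 + eh1 // 2)     # center of the first eye box
    c2 = (x + ex2 + ew2 // 2, y + ey2 + eh2 // 2)     # center of the second eye box
    return ((c1[0] + c2[0]) // 2, (c1[1] + c2[1]) // 2)  # midpoint of the segment
```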
S303, calculating the offset between the eye position and the preset position.
According to some embodiments, when the terminal obtains the eye position in the face region of the user image, it can detect whether that position is at a preset position. The preset position may be the center of the user image, or a position set by the user according to personal preference. When the terminal detects that the eye position is not at the preset position, it can calculate the offset between the eye position and the preset position.
It is easily understood that the preset position set by terminal Q may be, for example, the center position E of user image Q. When terminal Q obtains position R as the eye position in the face region of user image Q, it detects that position R is not at the preset position E. Terminal Q can then calculate the offset between the eye position R and the preset position E; the offset may be, for example, 2 cm in the negative direction of the Y axis.
S304, determining the rotation direction and rotation angle of the camera according to the offset.
According to some embodiments, when the terminal has calculated the offset between the eye position and the preset position, it may determine the rotation direction and rotation angle of the camera from that offset. For example, when terminal Q calculates that the offset is 2 cm in the negative direction of the Y axis, it may determine the rotation direction and rotation angle accordingly, for example a rotation of 35° in the negative direction of the Y axis.
S305, controlling the camera to rotate according to the rotation direction and rotation angle.
According to some embodiments, once the terminal has calculated the offset between the eye position and the preset position, determined the rotation direction and rotation angle of the camera from that offset, and controlled the camera to rotate accordingly, it can acquire an image of the target user through the camera, in which the face region is at the preset position. For example, when terminal Q determines from the offset that the camera should rotate 35° in the negative direction of the Y axis, the terminal can control the camera to rotate in that direction by that angle. Once the camera has rotated, terminal Q can acquire the target user image.
It is easy to understand that when the terminal detects that the ratio of the area of the face region to the area of the user image is smaller than a preset value, it can increase the focal length of the camera so as to enlarge the face region in the user image. For example, the preset value set by terminal Q may be 1:3. When terminal Q detects that the ratio of the area of the face region of user image Q to the area of the user image is 1:4, it can increase the focal length of the camera so that the ratio becomes larger than 1:3.
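A sketch of this focal-length branch follows, under the assumption that OpenCV's CAP_PROP_ZOOM property can stand in for the terminal's focal-length control; that property is honored only by some capture backends, and real hardware would use a driver-specific call instead.

```python
import cv2

PRESET_RATIO = 1.0 / 3.0   # the 1:3 preset value from the running example

def maybe_zoom_in(cap, image, face_rect, step=1.0):
    h, w = image.shape[:2]
    fx, fy, fw, fh = face_rect
    if (fw * fh) / float(w * h) < PRESET_RATIO:
        # CAP_PROP_ZOOM as an assumed stand-in for a focal-length driver call
        current = cap.get(cv2.CAP_PROP_ZOOM)
        cap.set(cv2.CAP_PROP_ZOOM, current + step)    # increase the focal length
```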
S306, when no user image is acquired through the camera, adjusting the shooting parameters of the camera to preset shooting parameters.
According to some embodiments, the preset shooting parameters may be, for example, shooting parameters set by the user according to personal preference, or the shooting parameters used the last time the terminal acquired a user image through the camera. When the terminal does not acquire a user image through the camera, it can adjust the shooting parameters of the camera to the preset shooting parameters, for example to the parameters of the most recent successful acquisition.
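One way to read S306 is as a fallback to remembered parameters; a minimal sketch follows, in which the parameter record and both driver callbacks are hypothetical.

```python
# Hypothetical parameter record; the embodiment only says the fallback may be
# user-chosen presets or the parameters from the last successful acquisition.
last_params = {"pan": 0.0, "tilt": 0.0, "zoom": 1.0}

def restore_preset_parameters(rotate_camera, set_zoom):
    # Called when no user image is acquired through the camera (S306).
    rotate_camera(last_params["pan"], last_params["tilt"])
    set_zoom(last_params["zoom"])
```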
It is easy to understand that fig. 9 shows a flowchart of a shooting method of an embodiment of the present application. As shown in fig. 9, the shooting method includes: receiving a scene shooting instruction, where the instruction carries information of a target scene; collecting the target scene through the camera based on the scene shooting instruction; adjusting the shooting parameters of the camera based on the position of the target scene; and controlling the camera to shoot according to the shooting parameters. In this way the terminal does not need an auxiliary tool to adjust the shooting parameters of the camera, the user can obtain the desired image, and the user experience is improved.
Optionally, when terminal Q is shooting with the camera, the user may tap the scene being captured. When terminal Q detects the tap, it can generate a scene shooting instruction. When the processor of terminal Q receives the instruction, it can parse it and obtain the information of the target scene that it carries; the target scene may be, for example, a duck swimming in a river. Terminal Q can then collect the duck swimming in the river through the camera. When terminal Q detects that the duck is not at the center point of the preset position in the image, it can mark the duck with a rectangular frame and take the center point of the frame as the position of the target scene. Based on the offset between the position of the target scene and the preset position, terminal Q can adjust the shooting parameters of the camera and shoot the duck swimming in the river.
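The scene-shooting branch can reuse the same offset-based adjustment as the face path. In the sketch below, detect_scene is a hypothetical detector for the target named in the instruction (e.g. the duck), capture is a hypothetical acquisition callback, and adjust_and_shoot is the function sketched earlier for S103-S104.

```python
def shoot_scene(instruction, capture, detect_scene, rotate_camera, take_photo):
    target = instruction["target"]        # target-scene info carried by the instruction
    image = capture()                     # collect the scene through the camera
    rect = detect_scene(image, target)    # rectangular frame marking the target
    if rect is None:
        return None
    x, y, w, h = rect
    center = (x + w // 2, y + h // 2)     # center point of the rectangular frame
    # same offset-based parameter adjustment as the face path
    return adjust_and_shoot(image, center, rotate_camera, take_photo)
```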
The embodiment of the present application provides a shooting method: a user image is acquired through a camera; the offset between the eye position in the face region of the user image and a preset position is calculated; the rotation direction and rotation angle of the camera are determined according to the offset; and the camera is controlled to rotate accordingly. With this technical solution, when a user shoots with the terminal and the position of the face region in the user image changes, the terminal can adjust the shooting parameters of the camera according to the offset between the eye position of the face region and the preset position, bringing the eye position back to the preset position. The user does not need to adjust the position of the terminal; the terminal improves the shooting effect by adjusting the shooting parameters of the camera, which in turn improves the user experience.
The following are embodiments of the apparatus of the present application that may be used to perform embodiments of the method of the present application. For details which are not disclosed in the embodiments of the apparatus of the present application, reference is made to the embodiments of the method of the present application.
Fig. 10 is a schematic structural diagram of a photographing apparatus according to an embodiment of the present application.
As shown in fig. 10, the shooting apparatus includes an image acquisition unit 1001, a position detection unit 1002, a parameter adjustment unit 1003, and an image shooting unit 1004, in which:
an image acquisition unit 1001, configured to acquire a user image through a camera;
a position detection unit 1002, configured to detect the position of the face region in the user image;
a parameter adjustment unit 1003, configured to adjust shooting parameters of the camera according to the offset between the position of the face region and a preset position when the position of the face region is not at the preset position, the shooting parameters including a rotation direction and a rotation angle;
an image shooting unit 1004, configured to control the camera to shoot according to the shooting parameters.
According to some embodiments, when the position of the face region is not at the preset position and the shooting parameters of the camera are adjusted according to the offset between the position of the face region and the preset position, the parameter adjustment unit 1003 is specifically configured to:
acquire the position of the human eyes in the face region;
calculate the offset between the eye position and the preset position;
determine the rotation direction and rotation angle of the camera according to the offset;
and control the camera to rotate according to the rotation direction and rotation angle.
According to some embodiments, when detecting the position of the face region in the user image, the position detection unit 1002 is specifically configured to:
detect the face region;
mark the face region with a rectangular frame;
and take the center point of the rectangular frame as the position of the face region.
According to some embodiments, when detecting the position of the face region in the user image, the position detection unit 1002 is further specifically configured to:
detect the positions of two face regions when the face region includes two face regions;
mark the two face regions with rectangular frames;
and determine the two center points of the two face regions and take the midpoint of the line segment connecting the two center points as the position of the face region.
According to some embodiments, when the position of the face region is not at the preset position and the shooting parameters of the camera are adjusted according to the offset between the position of the face region and the preset position, the parameter adjustment unit 1003 is further specifically configured to:
increase the focal length of the camera when the ratio of the area of the face region to the area of the user image is smaller than a preset value.
According to some embodiments, the parameter adjustment unit 1003 is further configured to adjust the shooting parameters of the camera to preset shooting parameters when no user image is acquired through the camera.
According to some embodiments, the parameter adjustment unit 1003 is further configured to:
receive a scene shooting instruction, where the scene shooting instruction carries information of a target scene;
collect the target scene through the camera based on the scene shooting instruction;
adjust the shooting parameters of the camera based on the position of the target scene;
and control the camera to shoot according to the shooting parameters.
The embodiment of the present application provides a shooting apparatus: the image acquisition unit acquires a user image through a camera; the position detection unit detects the position of the face region in the user image; when the position of the face region is not at a preset position, the parameter adjustment unit adjusts the shooting parameters of the camera according to the offset between the position of the face region and the preset position; and the image shooting unit controls the camera to shoot according to the shooting parameters. With this technical solution, when a user shoots with the terminal and the position of the face region in the user image changes, the shooting apparatus can adjust the shooting parameters of the camera according to the offset between the position of the face region and the preset position, bringing the face region back to the preset position. The user does not need to adjust the position of the shooting apparatus; the apparatus improves the shooting effect by adjusting the shooting parameters of the camera, which in turn improves the user experience.
Please refer to fig. 11, which is a schematic structural diagram of a terminal according to an embodiment of the present application. As shown in fig. 11, the terminal 1100 may include: at least one processor 1101, at least one network interface 1104, a user interface 1103, a memory 1105, and at least one communication bus 1102.
The communication bus 1102 is used to enable connection and communication between these components.
The user interface 1103 may include a display screen (Display) and a camera; optionally, the user interface 1103 may also include a standard wired interface and a wireless interface.
The network interface 1104 may optionally include a standard wired interface and a wireless interface (e.g., a WI-FI interface).
The processor 1101 may include one or more processing cores. The processor 1101 connects various parts throughout the terminal 1100 using various interfaces and lines, and performs various functions of the terminal 1100 and processes data by running or executing instructions, programs, code sets, or instruction sets stored in the memory 1105 and calling data stored in the memory 1105. Optionally, the processor 1101 may be implemented in at least one hardware form of digital signal processing (DSP), field-programmable gate array (FPGA), and programmable logic array (PLA). The processor 1101 may integrate one or more of a central processing unit (CPU), a graphics processing unit (GPU), a modem, and the like. The CPU mainly handles the operating system, user interface, application programs, and the like; the GPU renders and draws the content to be displayed on the display screen; and the modem handles wireless communication. It is understood that the modem may not be integrated into the processor 1101 but may instead be implemented by a separate chip.
The memory 1105 may include a random access memory (RAM) or a read-only memory (ROM). Optionally, the memory 1105 includes a non-transitory computer-readable storage medium. The memory 1105 may be used to store instructions, programs, code, code sets, or instruction sets. The memory 1105 may include a program storage area and a data storage area, where the program storage area may store instructions for implementing the operating system, instructions for at least one function (such as a touch function, a sound playing function, or an image playing function), instructions for implementing the method embodiments described above, and the like; the data storage area may store the data referred to in the above method embodiments. The memory 1105 may optionally be at least one storage device located remotely from the processor 1101. As shown in fig. 11, the memory 1105, as a computer storage medium, may include an operating system, a network communication module, a user interface module, and a shooting application program.
In the terminal 1100 shown in fig. 11, the user interface 1103 is mainly used to provide an input interface for the user and acquire the data the user inputs, while the processor 1101 may be configured to invoke the shooting application stored in the memory 1105 and specifically perform the following operations:
collecting a user image through a camera;
detecting the position of a face region in a user image;
when the position of the face region is not at the preset position, adjusting the shooting parameters of the camera according to the offset between the position of the face region and the preset position, the shooting parameters including a rotation direction and a rotation angle;
controlling the camera to shoot according to the shooting parameters.
According to some embodiments, when adjusting the shooting parameters of the camera according to the offset between the position of the face region and the preset position while the position of the face region is not at the preset position, the processor 1101 specifically performs the following steps:
acquiring the position of the human eyes in the face region;
calculating the offset between the eye position and the preset position;
determining the rotation direction and rotation angle of the camera according to the offset;
controlling the camera to rotate according to the rotation direction and rotation angle.
According to some embodiments, when detecting the position of the face region in the user image, the processor 1101 specifically performs the following steps:
detecting the face region;
marking the face region with a rectangular frame;
taking the center point of the rectangular frame as the position of the face region.
According to some embodiments, when detecting the position of the face region in the user image, the processor 1101 further specifically performs the following steps:
detecting the positions of two face regions when the face region includes two face regions;
marking the two face regions with rectangular frames;
determining the two center points of the two face regions, and taking the midpoint of the line segment connecting the two center points as the position of the face region.
According to some embodiments, when adjusting the shooting parameters of the camera according to the offset between the position of the face region and the preset position while the position of the face region is not at the preset position, the processor 1101 further specifically performs the following steps:
increasing the focal length of the camera when the ratio of the area of the face region to the area of the user image is smaller than a preset value.
According to some embodiments, the processor 1101 is further configured to perform the following steps:
when no user image is acquired through the camera, adjusting the shooting parameters of the camera to preset shooting parameters.
According to some embodiments, the processor 1101 is further configured to perform the following steps:
receiving a scene shooting instruction, where the scene shooting instruction carries information of a target scene;
acquiring a target scene through a camera based on a scene shooting instruction;
adjusting shooting parameters of a camera based on the position of the target scene;
controlling the camera to shoot according to the shooting parameters.
The embodiment of the present application provides a shooting method: a user image is acquired through a camera; the position of the face region in the user image is detected; when the position of the face region is not at a preset position, the shooting parameters of the camera can be adjusted according to the offset between the position of the face region and the preset position; and the camera is controlled to shoot according to the shooting parameters. With this technical solution, when a user shoots with the terminal and the position of the face region in the user image changes, the terminal can adjust the shooting parameters of the camera according to the offset between the position of the face region and the preset position, bringing the face region back to the preset position. The user does not need to adjust the position of the terminal; the shooting effect of the terminal is improved, and so is the user experience.
The present application also provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the above-described method. The computer-readable storage medium may include, but is not limited to, any type of disk, including floppy disks, optical disks, DVDs, CD-ROMs, microdrives, and magneto-optical disks, as well as ROMs, RAMs, EPROMs, EEPROMs, DRAMs, VRAMs, flash memory devices, magnetic or optical cards, nanosystems (including molecular memory ICs), or any other type of medium or device suitable for storing instructions and/or data.
Embodiments of the present application also provide a computer program product comprising a non-transitory computer readable storage medium storing a computer program operable to cause a computer to perform some or all of the steps of any one of the method embodiments as described above.
It is clear to a person skilled in the art that the solution of the present application can be implemented by means of software and/or hardware. The terms "unit" and "module" in this specification refer to software and/or hardware that can perform a specific function independently or in cooperation with other components, where the hardware may be, for example, a field-programmable gate array (FPGA), an integrated circuit (IC), or the like.
It should be noted that, for simplicity of description, the above-mentioned method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present application is not limited by the order of acts described, as some steps may occur in other orders or concurrently depending on the application. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments and that the acts and modules referred to are not necessarily required in this application.
In the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative: the division into units is only a division by logical function, and there may be other divisions in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not implemented. In addition, the mutual couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through some service interfaces, devices, or units, and may be in electrical or other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer-readable memory. Based on such understanding, the technical solution of the present application, in essence or in the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a memory and including several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods described in the embodiments of the present application. The aforementioned memory includes various media capable of storing program code, such as a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, or an optical disk.
Those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by a program instructing related hardware; the program is stored in a computer-readable memory, which may include a flash disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, and the like.
The above description is only an exemplary embodiment of the present disclosure, and the scope of the present disclosure should not be limited thereby. That is, all equivalent changes and modifications made in accordance with the teachings of the present disclosure are intended to be included within the scope of the present disclosure. Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.