Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Fig. 1 is a flowchart of a video processing method according to an embodiment of the present invention. The video processing method shown in fig. 1 may include the following steps:
Step 102, recording the depth-of-field information of N frames of video images collected by the camera during the video recording process.
Optionally, in step 102, recording is performed by using two cameras. In this case, after receiving a camera opening instruction of the user, the terminal device enters a camera preview interface. If a dual-camera shooting and imaging instruction of the user is then received, the two cameras are sequentially powered on, enabled, and initialized to obtain the light information and the depth information in the picture. The light information is converted into digital signals, the digital signals are processed to obtain data that the user can preview, and the preview data are displayed on the screen of the terminal device, thereby realizing the camera preview. The preview screen displays the picture information collected by the main camera; while the picture is recorded, sound information is collected at the same time, and a video file is generated by performing a real-time encoding operation (for example, a default MP4 video file is generated).
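For illustration only, a minimal sketch of step 102 follows. The dual-camera interface whose read_frame() returns the main-camera image together with a per-pixel depth map is a hypothetical placeholder, as is the list used in place of real-time MP4 encoding; neither is part of the disclosed implementation.

```python
# Sketch of step 102: record per-frame depth-of-field information while video
# frames are captured. `camera.read_frame()` is a hypothetical dual-camera API
# returning (color image, depth map).
from dataclasses import dataclass, field

@dataclass
class Recording:
    frames: list = field(default_factory=list)          # stand-in for encoded frames
    depth_by_frame: dict = field(default_factory=dict)  # frame index -> depth map

def record_video(camera, num_frames: int) -> Recording:
    rec = Recording()
    for frame_id in range(num_frames):
        color, depth = camera.read_frame()    # main-camera picture + depth info
        rec.frames.append(color)              # real device: real-time MP4 encoding
        rec.depth_by_frame[frame_id] = depth  # depth info recorded per frame
    return rec
```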
Step 104, outputting a first video after the video recording is finished.
Step 106, receiving a first input of a user to the first video.
It is to be understood that, in step 106, the first input of the user to the first video is used by the terminal device to determine the definition to which the video image needs to be adjusted.
Optionally, as shown in fig. 2, before step 106, an aperture adjustment control is displayed on an output interface of the first video; correspondingly, in step 106, receiving the first input of the user to the first video includes: receiving a first input of the user to the aperture adjustment control. Therefore, the definition to be adjusted can be determined based on the relation between the aperture value and the definition; the operation is simple, and the implementation of the terminal device can be simplified.
Optionally, in some embodiments, before step 106, the method further includes: displaying prompt information on an output interface of the first video, where the prompt information is used for prompting that the first video is a video with adjustable definition. In this way, the user can conveniently learn which videos have adjustable definition, which facilitates the user's operation.
For example, as shown in fig. 3, a "video focusing" icon is displayed on the display interface, and when a user sees the icon, the user can know that the current video is a video with adjustable definition, and then can adjust the definition according to the need of the user. For example, a user clicks a "video focusing" icon, at this time, an aperture adjustment control may pop up on the display interface, and the user may adjust the definition of the picture by adjusting the size of the aperture.
Step 108, in response to the first input, adjusting the definition of M frames of video images in the N frames of video images according to the recorded depth-of-field information of the N frames of video images, so as to obtain adjusted M frames of video images.
In particular, in some embodiments, the first input is a first input of the user to the aperture adjustment control. In this case, before the definition of the M frames of video images in the N frames of video images is adjusted, the method shown in fig. 1 further includes: determining an aperture value corresponding to the first input; and determining a target definition according to the aperture value. Correspondingly, in step 108, the definition of the M frames of video images is adjusted to the target definition. Therefore, the user can adjust the definition of the video images by adjusting the aperture size up and down, which facilitates the user's operation.
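As one non-limiting illustration of determining the target definition from the aperture value, the sketch below assumes a simple linear relation over the control's f-number range; both the relation and the range are assumptions of the sketch, not fixed by the embodiment.

```python
# Sketch: map the aperture value chosen on the control to a target definition.
# Assumption: a larger f-number (smaller aperture) yields a deeper depth of
# field and therefore a higher definition level for out-of-focus regions.
def target_definition(f_number: float,
                      f_min: float = 1.8, f_max: float = 16.0) -> float:
    """Return a definition level in [0, 1] for the chosen aperture value."""
    f = min(max(f_number, f_min), f_max)   # clamp to the control's range
    return (f - f_min) / (f_max - f_min)   # 0 = strongest blur, 1 = sharpest
```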
Further, the identification information of the N frames of video images and the depth-of-field information of the N frames of video images are stored in an associated manner. In this case, adjusting the definition of the M frames of video images to the target definition includes: determining the depth-of-field information associated with the identification information of each frame of video image in the M frames of video images; and adjusting the definition of the M frames of video images to the target definition according to the depth-of-field information of each frame of video image. By storing the identification information of the video images and the depth-of-field information of the video images in an associated manner, the depth-of-field information corresponding to a video image can be conveniently looked up when the definition of the video image is adjusted.
For example, the identification information of the video images and the depth-of-field information of the video images are stored in a lookup table. As shown in fig. 4, Frame 0, Frame 1, Frame 2, Frame 3, and Frame 4 in fig. 4 are the identification information of the video images, and Depth 0, Depth 1, Depth 2, Depth 3, and Depth 4 are the depth-of-field information corresponding to Frame 0 to Frame 4, respectively.
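A sketch of the associated storage in fig. 4 follows; the use of an in-memory dictionary and numpy depth maps is an illustrative assumption only.

```python
# Sketch of the fig. 4 lookup table: the identification information of each
# video image is the key, and the associated depth-of-field information
# (here, a per-pixel depth map) is the value.
import numpy as np

depth_table = {
    "Frame 0": np.zeros((1080, 1920), dtype=np.float32),  # Depth 0
    "Frame 1": np.zeros((1080, 1920), dtype=np.float32),  # Depth 1
    # ... one entry per recorded frame
}

def depth_for(frame_id: str) -> np.ndarray:
    """Look up the depth-of-field information associated with a frame."""
    return depth_table[frame_id]
```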
Optionally, in some embodiments, before adjusting the definition of the M frames of video images in the N frames of video images, the method further includes: displaying the N frames of video images frame by frame; receiving a second input of the user to the M frames of video images; and determining, in response to the second input, a target position to be adjusted in the M frames of video images. Further, adjusting the definition of the M frames of video images in the N frames of video images includes: adjusting the definition of the images at the target position in the M frames of video images.
That is, the video images may be played frame by frame, and the user may select, on the display interface of a video image, the position whose definition is to be adjusted. Therefore, the user can adjust the definition at different positions of the video images of different frames, free selection of the position to be adjusted in each frame of video image is realized, and user experience is improved.
Further, in the case where the video images are played frame by frame, the user may perform the first input on the aperture adjustment control on the display interface of each video image. For example, the aperture value may be adjusted to different values for different video images, so that different video images can be adjusted to different definitions, a more interesting and artistic video is generated, and user experience is further improved.
The frame-by-frame playing may be performed at a preset time interval, or a next frame may be played when an input of the user triggering playing of the next frame of video image is received.
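By way of example only, the frame-by-frame playback and per-frame selection might be driven as sketched below; the `ui` object and its methods (show, sleep, wait_for_next_frame_input, last_tap, aperture_value) are hypothetical placeholders for the terminal device's display and input handling.

```python
# Sketch: play frames one by one, either on a preset time interval or on a
# user trigger, and collect a (position, aperture value) edit for each frame
# the user touches. The `ui` interface is hypothetical.
def play_frame_by_frame(frames, ui, interval_s=None):
    edits = {}                              # frame index -> (position, f-number)
    for idx, frame in enumerate(frames):
        ui.show(frame)
        if interval_s is not None:
            ui.sleep(interval_s)            # preset time interval
        else:
            ui.wait_for_next_frame_input()  # user triggers the next frame
        tap = ui.last_tap()                 # position selected on this frame
        if tap is not None:
            edits[idx] = (tap, ui.aperture_value())
    return edits
```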
Step 110, generating a target video according to the adjusted M frames of video images; wherein N and M are positive integers, and N is greater than or equal to M.
Specifically, the adjusted M frames of video images may respectively replace the corresponding pre-adjustment M frames of video images in the first video, so as to generate the target video.
Of course, in practical applications, the target video may also be generated from only the adjusted M frames of video images, according to the user's requirements.
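A sketch of step 110 under these two alternatives follows; frames are modeled simply as list items, which is an assumption of the illustration.

```python
# Sketch of step 110: either splice the adjusted M frames back into the first
# video in place of their pre-adjustment counterparts, or keep only the
# adjusted frames. `adjusted` maps frame index -> adjusted image.
def generate_target_video(first_video, adjusted, adjusted_only=False):
    if adjusted_only:
        return [adjusted[i] for i in sorted(adjusted)]   # M-frame video only
    return [adjusted.get(i, frame) for i, frame in enumerate(first_video)]
```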
According to the embodiment of the present invention, the definition of the video images in a recorded video can be adjusted by using the depth-of-field information corresponding to the video images in the video. Therefore, when some pictures in the recorded video are not clear, or when the user wants to edit the recorded video to present an artistic video, and the like, the definition of the images can be adjusted according to the user's input to the video, thereby meeting the user's requirements.
Fig. 5 is a flow chart of a video processing method according to an embodiment of the invention. The video processing method shown in fig. 5 may include the following steps:
Step 201, prompting the user that the video is a video with adjustable definition.
For example, the user may browse and view a recorded video in an album. If the video is a video recorded with two cameras, an icon or a prompt is marked at a suitable position (e.g., the upper right corner) of the preview image to prompt the user that the depth-of-field information of the video file can be re-edited, so as to adjust the definition of the video images.
Step 202, entering a re-editing mode in response to a user input.
For example, the user clicks the "video focusing" icon in fig. 3, and the terminal device enters the video image re-editing mode.
Step 203, determining the picture whose definition needs to be readjusted.
The user may click a position in the preview picture where definition adjustment is desired, and an aperture adjustment control then pops up on the preview interface.
Step 204, determining an aperture value corresponding to the user's input to the aperture adjustment control, and adjusting the definition of the video image based on the aperture value.
The user can adjust the aperture size through an input to the aperture adjustment control. The terminal device determines the definition to be adjusted to according to the aperture size, then processes the recorded depth-of-field information of the video image, and simulates the preview effect at the position where the user needs to readjust the definition.
The terminal device may automatically play the video images frame by frame at a low speed. The user may slide the aperture adjustment control on a video image whose definition needs to be adjusted, or click the position on the video image where the definition needs to be adjusted, and the terminal device adjusts the definition of the image at the position clicked by the user according to the aperture value.
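One plausible realization of this depth-based adjustment (not the patented algorithm itself) is sketched below, assuming OpenCV and numpy are available: pixels whose recorded depth differs from the depth at the clicked position are blended toward a blurred copy, and the blur strength grows as the aperture is opened. The 8.0/f_number strength heuristic is likewise an assumption.

```python
# Sketch: simulate refocusing a frame using its recorded depth-of-field
# information and the aperture value chosen by the user.
import cv2
import numpy as np

def refocus(image: np.ndarray, depth: np.ndarray,
            tap_xy: tuple, f_number: float) -> np.ndarray:
    x, y = tap_xy
    focus_depth = depth[y, x]                 # depth at the clicked position
    strength = 8.0 / f_number                 # wider aperture -> stronger blur
    blurred = cv2.GaussianBlur(image, (0, 0), sigmaX=max(strength, 0.1))
    # Blend weight per pixel: 0 at the focus depth, approaching 1 far from it.
    w = np.clip(np.abs(depth - focus_depth) / (np.ptp(depth) + 1e-6), 0.0, 1.0)
    if image.ndim == 3:
        w = w[..., None]                      # broadcast over color channels
    return (image * (1 - w) + blurred * w).astype(image.dtype)
```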
Step 205, confirming and saving after the frame-by-frame editing is completed.
After the user finishes editing, the generated video file is saved to complete the re-editing. The previously generated video file may be overwritten, and the user thus obtains a custom-edited video.
Fig. 6 is a schematic structural diagram of a terminal device according to an embodiment of the present invention. As shown in fig. 6, the terminal device includes:
the processing module 11 is configured to record the depth-of-field information of N frames of video images collected by the camera in a video recording process;
the display module 12 is configured to output a first video after the video recording is finished;
the receiving module 13 is configured to receive a first input of a user to the first video; the processing module 11 is configured to adjust, in response to the first input, the definition of M frames of video images in the N frames of video images according to the recorded depth-of-field information of the N frames of video images, so as to obtain adjusted M frames of video images;
the processing module 11 is further configured to generate a target video according to the adjusted M frames of video images;
wherein N and M are positive integers, and N is greater than or equal to M.
The terminal device of the embodiment of the present invention can adjust the definition of the video images in a recorded video by using the depth-of-field information corresponding to the video images in the video. Therefore, when some pictures in the recorded video are not clear, or when the user wants to edit the recorded video to present an artistic video, and the like, the definition of the video images can be adjusted according to the user's input to the video, thereby meeting the user's requirements.
In the embodiment of the present invention, optionally, the display module 12 is further configured to:
displaying an aperture adjustment control on an output interface of the first video before the receiving module receives the first input of the user to the first video;
wherein the receiving module 13 is specifically configured to:
receiving a first input of the user to the aperture adjustment control.
In this embodiment of the present invention, optionally, the processing module 11 is further configured to:
determining an aperture value corresponding to the first input before adjusting the definition of the M frames of video images in the N frames of video images; and
determining a target definition according to the aperture value;
in terms of adjusting the definition of the M frames of video images in the N frames of video images, the processing module 11 is specifically configured to:
adjusting the definition of the M frames of video images to the target definition.
In this embodiment of the present invention, optionally, the processing module 11 is specifically configured to:
storing the identification information of the N frames of video images and the depth-of-field information of the N frames of video images in an associated manner;
determining the depth-of-field information associated with the identification information of each frame of video image in the M frames of video images; and
adjusting the definition of the M frames of video images to the target definition according to the depth-of-field information of each frame of video image.
In this embodiment of the present invention, optionally, the display module 12 is specifically configured to:
displaying the N frames of video images frame by frame before the processing module adjusts the definition of the M frames of video images in the N frames of video images;
the receiving module is further configured to: receiving a second input of the user to the M frames of video images;
the processing module 11 is specifically configured to: determining, in response to the second input, a target position to be adjusted in the M frames of video images; and
adjusting the definition of the images at the target position in the M frames of video images.
In the embodiment of the present invention, optionally, the display module 12 is further configured to:
displaying prompt information on an output interface of the first video before the receiving module receives the first input of the user to the first video, wherein the prompt information is used for prompting that the first video is a video with adjustable definition.
The terminal device provided in the embodiment of the present invention can implement each process implemented by the terminal device in the method embodiments of fig. 1 to fig. 5, and details are not described herein again to avoid repetition.
Fig. 7 is a schematic diagram of a hardware structure of a terminal device according to another embodiment of the present invention, where the terminal device 700 includes but is not limited to: a radio frequency unit 701, a network module 702, an audio output unit 703, an input unit 704, a sensor 705, a display unit 706, a user input unit 707, an interface unit 708, a memory 709, a processor 710, a power supply 711, and the like. Those skilled in the art will appreciate that the terminal device structure shown in fig. 7 does not constitute a limitation of the terminal device, and the terminal device may include more or fewer components than shown, or combine certain components, or have a different arrangement of components. In the embodiment of the present invention, the terminal device includes, but is not limited to, a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a vehicle-mounted terminal, a wearable device, a pedometer, and the like.
Wherein, the processor 710 is configured to:
recording the depth of field information of N frames of video images collected by a camera in the video recording process;
outputting a first video after the video recording is finished;
receiving a first input of the first video from a user;
responding to the first input, and adjusting the definition of M frames of video images in the N frames of video images according to the recorded depth-of-field information of the N frames of video images to obtain the adjusted M frames of video images;
generating a target video according to the adjusted M frames of video images;
wherein N and M are positive integers, and N is greater than or equal to M.
The terminal equipment provided by the embodiment of the invention can adjust the definition of the video image in the recorded video according to the depth-of-field information corresponding to the video image in the video, so that the definition of the image can be adjusted according to the input of the user to the video under the conditions that part of pictures in the recorded video are not clear or the user wants to edit the recorded video to present an artistic video and the like, and the requirements of the user can be further met.
It should be understood that, in the embodiment of the present invention, the radio frequency unit 701 may be used for receiving and sending signals during a message transmission and reception process or a call process; specifically, the radio frequency unit 701 receives downlink data from a base station and then sends the downlink data to the processor 710 for processing, and also sends uplink data to the base station. In general, the radio frequency unit 701 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 701 may also communicate with a network and other devices through a wireless communication system.
The terminal device provides the user with wireless broadband internet access through the network module 702, such as helping the user send and receive e-mails, browse webpages, access streaming media, and the like.
The audio output unit 703 may convert audio data received by the radio frequency unit 701 or the network module 702, or stored in the memory 709, into an audio signal and output the audio signal as sound. Moreover, the audio output unit 703 may also provide audio output related to a specific function performed by the terminal device 700 (e.g., a call signal reception sound, a message reception sound, and the like). The audio output unit 703 includes a speaker, a buzzer, a receiver, and the like.
The input unit 704 is used to receive audio or video signals. The input unit 704 may include a Graphics Processing Unit (GPU) 7041 and a microphone 7042, and the graphics processor 7041 processes image data of a still picture or video obtained by an image capturing device (e.g., a camera) in a video capturing mode or an image capturing mode. The processed image frames may be displayed on the display unit 706. The image frames processed by the graphics processor 7041 may be stored in the memory 709 (or another storage medium) or transmitted via the radio frequency unit 701 or the network module 702. The microphone 7042 may receive sounds and may be capable of processing such sounds into audio data. In a phone call mode, the processed audio data may be converted into a format transmittable to a mobile communication base station and output via the radio frequency unit 701.
The terminal device 700 further includes at least one sensor 705, such as a light sensor, a motion sensor, and other sensors. Specifically, the light sensor includes an ambient light sensor and a proximity sensor, where the ambient light sensor may adjust the luminance of the display panel 7061 according to the brightness of ambient light, and the proximity sensor may turn off the display panel 7061 and/or the backlight when the terminal device 700 is moved to the ear. As one type of motion sensor, an accelerometer sensor may detect the magnitude of acceleration in each direction (generally three axes), may detect the magnitude and direction of gravity when stationary, and may be used to identify the terminal device posture (such as horizontal/vertical screen switching, related games, and magnetometer posture calibration), vibration-identification-related functions (such as pedometer and tapping), and the like; the sensor 705 may also include a fingerprint sensor, a pressure sensor, an iris sensor, a molecular sensor, a gyroscope, a barometer, a hygrometer, a thermometer, an infrared sensor, and the like, which are not described in detail herein.
The display unit 706 is used to display information input by the user or information provided to the user. The display unit 706 may include a display panel 7061, and the display panel 7061 may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like.
The user input unit 707 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the terminal device. Specifically, the user input unit 707 includes a touch panel 7071 and other input devices 7072. The touch panel 7071, also referred to as a touch screen, may collect touch operations by the user on or near the touch panel 7071 (e.g., operations by the user on or near the touch panel 7071 using a finger, a stylus, or any other suitable object or accessory). The touch panel 7071 may include two parts: a touch detection device and a touch controller. The touch detection device detects the touch position of the user, detects a signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts the touch information into touch point coordinates, sends the touch point coordinates to the processor 710, and receives and executes commands sent by the processor 710. In addition, the touch panel 7071 may be implemented in various types such as resistive, capacitive, infrared, and surface acoustic wave types. In addition to the touch panel 7071, the user input unit 707 may include other input devices 7072. Specifically, the other input devices 7072 may include, but are not limited to, a physical keyboard, function keys (such as volume control keys and switch keys), a trackball, a mouse, and a joystick, which are not described herein again.
Further, the touch panel 7071 may be overlaid on the display panel 7061. When the touch panel 7071 detects a touch operation on or near it, the touch operation is transmitted to the processor 710 to determine the type of the touch event, and then the processor 710 provides a corresponding visual output on the display panel 7061 according to the type of the touch event. Although in fig. 7 the touch panel 7071 and the display panel 7061 are implemented as two independent components to realize the input and output functions of the terminal device, in some embodiments, the touch panel 7071 and the display panel 7061 may be integrated to realize the input and output functions of the terminal device, which is not limited herein.
The interface unit 708 is an interface for connecting an external device to the terminal device 700. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 708 may be used to receive input (e.g., data information, power, and the like) from an external device and transmit the received input to one or more elements within the terminal device 700, or may be used to transmit data between the terminal device 700 and an external device.
The memory 709 may be used to store software programs as well as various data. The memory 709 may mainly include a program storage area and a data storage area, where the program storage area may store an operating system, an application program required by at least one function (such as a sound playing function and an image playing function), and the like; the data storage area may store data created according to the use of the mobile phone (such as audio data and a phonebook), and the like. Further, the memory 709 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another volatile solid-state storage device.
The processor 710 is the control center of the terminal device, connects various parts of the entire terminal device by using various interfaces and lines, and performs various functions of the terminal device and processes data by running or executing software programs and/or modules stored in the memory 709 and calling data stored in the memory 709, thereby monitoring the terminal device as a whole. The processor 710 may include one or more processing units; preferably, the processor 710 may integrate an application processor and a modem processor, where the application processor mainly handles the operating system, the user interface, application programs, and the like, and the modem processor mainly handles wireless communication. It will be appreciated that the modem processor may also not be integrated into the processor 710.
The terminal device 700 may further include a power supply 711 (e.g., a battery) for supplying power to various components. Preferably, the power supply 711 may be logically connected to the processor 710 through a power management system, so as to implement functions such as managing charging, discharging, and power consumption through the power management system.
In addition, the terminal device 700 includes some functional modules that are not shown, which are not described in detail herein.
Preferably, an embodiment of the present invention further provides a terminal device, including a processor 710, a memory 709, and a computer program stored in the memory 709 and executable on the processor 710, where the computer program, when executed by the processor 710, implements each process of the above video processing method embodiments and can achieve the same technical effect; details are not described here again to avoid repetition.
The embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and the computer program, when executed by a processor, implements each process of the above video processing method embodiments and can achieve the same technical effect; details are not repeated here to avoid repetition. The computer-readable storage medium may be a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disk, or the like.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.
While the present invention has been described with reference to the embodiments shown in the drawings, the present invention is not limited to the embodiments, which are illustrative and not restrictive, and it will be apparent to those skilled in the art that various changes and modifications can be made therein without departing from the spirit and scope of the invention as defined in the appended claims.