CN109819188B - Video processing method and terminal equipment - Google Patents

Video processing method and terminal equipment

Info

Publication number
CN109819188B
CN109819188B (application CN201910092667.1A)
Authority
CN
China
Prior art keywords
video
frames
video images
definition
images
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910092667.1A
Other languages
Chinese (zh)
Other versions
CN109819188A (en)
Inventor
刘光威
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd
Priority to CN201910092667.1A
Publication of CN109819188A
Application granted
Publication of CN109819188B
Status: Active
Anticipated expiration

Abstract


The invention discloses a video processing method and a terminal device. The method includes: during recording, recording depth-of-field information of N frames of video images collected by a camera; after recording ends, outputting a first video; receiving a user's first input on the first video; in response to the first input, adjusting the sharpness of M frames of video images among the N frames according to the recorded depth-of-field information, to obtain adjusted M frames of video images; and generating a target video according to the adjusted M frames; where N and M are positive integers and N ≥ M. The method of the embodiments of the invention can adjust the sharpness of an already-recorded video to meet the user's needs.


Description

Video processing method and terminal equipment
Technical Field
The embodiment of the invention relates to the field of image processing, in particular to a video processing method and terminal equipment.
Background
The camera function is one of the main functions of a terminal (e.g., a mobile phone): a user can record an interesting scene with the terminal, generating a video that is stored on the terminal for watching. At present, a user can only adjust the focus position during recording, or the terminal continuously autofocuses to keep the picture sharp; once recording is finished, the user cannot adjust the focus of any picture in the video.
While the video is being recorded, a multimedia encoder encodes the captured data in real time, compressing the frames acquired by the camera into a video file in a standard format such as MP4 or AVI. After the video file is generated, the focus of the picture can no longer be edited.
In practical applications, because the user cannot adjust the focus of a recorded video, when part of the recorded video is unclear, or the user wants to re-edit the recording into an artistic video, the user's needs cannot be met.
Disclosure of Invention
The embodiments of the invention provide a video processing method and terminal device, aiming to solve the prior-art problem that, because a user cannot adjust the focus of a recorded video, the user's needs cannot be met when part of the recorded video is unclear or the user wants to re-edit it into an artistic video.
In order to solve the technical problem, the invention is realized as follows:
in a first aspect, a method for processing a video is provided, where the method includes:
recording the depth of field information of N frames of video images collected by a camera in the video recording process;
outputting a first video after the video recording is finished;
receiving a first input of the first video from a user;
in response to the first input, adjusting the sharpness of M frames of video images among the N frames of video images according to the recorded depth-of-field information of the N frames, to obtain adjusted M frames of video images;
generating a target video according to the adjusted M frames of video images;
where N and M are positive integers and N ≥ M.
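The first-aspect steps above can be sketched as follows. The function name, the per-frame representation, and the way an "adjusted" frame is recorded are all illustrative assumptions; the patent does not prescribe an implementation:

```python
def process_video(frames, depth_maps, frames_to_adjust, target_sharpness):
    """Adjust the sharpness of M selected frames out of N recorded frames.

    frames           -- list of N video frames (any per-frame representation)
    depth_maps       -- depth-of-field information recorded per frame
    frames_to_adjust -- indices of the M frames chosen by the user (M <= N)
    target_sharpness -- sharpness level derived from the user's first input
    """
    assert len(frames) == len(depth_maps)
    assert len(frames_to_adjust) <= len(frames)  # N >= M

    adjusted = list(frames)
    for i in frames_to_adjust:
        # A real implementation would re-render the frame from its depth map;
        # here we only record the requested sharpness alongside each frame.
        adjusted[i] = (frames[i], target_sharpness, depth_maps[i])
    return adjusted  # the target video is assembled from these frames
```

Unselected frames pass through unchanged, matching the claim that only M of the N frames are adjusted.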
In a second aspect, a terminal device is provided, which includes:
the processing module is used for recording the depth-of-field information of the N frames of video images collected by the camera during video recording;
the display module is used for outputting a first video after the video recording is finished;
the receiving module is used for receiving a first input of the first video from a user;
the processing module is further used for, in response to the first input, adjusting the sharpness of M frames of video images among the N frames according to the recorded depth-of-field information, to obtain adjusted M frames of video images, and for generating a target video according to the adjusted M frames;
where N and M are positive integers and N ≥ M.
In a third aspect, a terminal device is provided, the terminal device comprising a processor, a memory and a computer program stored on the memory and executable on the processor, the computer program, when executed by the processor, implementing the steps of the method according to the first aspect.
In a fourth aspect, a computer-readable storage medium is provided, on which a computer program is stored, which computer program, when being executed by a processor, carries out the steps of the method according to the first aspect.
In the embodiments of the invention, the sharpness of a video image in a recorded video can be adjusted using the depth-of-field information recorded for that image. Thus, when part of the recorded video is unclear, or the user wants to re-edit the recording into an artistic video, the sharpness of the video images can be adjusted according to the user's input, meeting the user's needs.
Drawings
Fig. 1 is a flow chart of a video processing method according to an embodiment of the present invention.
FIG. 2 is a schematic diagram of a display interface of an embodiment of the invention.
FIG. 3 is a schematic diagram of a display interface of another embodiment of the present invention.
Fig. 4 is a schematic diagram of a correspondence relationship between identification information and depth information of a video image according to an embodiment of the present invention.
Fig. 5 is a flow chart of a video processing method according to an embodiment of the invention.
Fig. 6 is a schematic structural diagram of a terminal device according to an embodiment of the present invention.
Fig. 7 is a schematic structural diagram of a terminal device according to another embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Fig. 1 is a flowchart of a video processing method according to an embodiment of the present invention. The video processing method shown in fig. 1 may include the following steps:
Step 102: record the depth-of-field information of the N frames of video images collected by the camera during video recording.
Optionally, in step 102, recording is performed using two cameras. In this case, after receiving a camera-open instruction from the user, the terminal device enters the camera preview interface. If it then receives a dual-camera shooting instruction, the two cameras are powered on, enabled, and initialized in sequence; light information and depth information in the scene are acquired; the light information is converted into digital signals, which are processed to obtain previewable data; and the preview data is displayed on the screen of the terminal device, realizing the camera preview. The preview screen shows the picture captured by the main camera; the picture and sound are recorded simultaneously, and a video file is generated by real-time encoding (for example, a default MP4 video file).
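The patent does not specify how the dual cameras derive per-pixel depth, but the classic stereo relation Z = f·B/d is one standard way a two-camera setup computes it. The sketch below assumes that relation; it is not stated in the patent:

```python
def depth_from_disparity(focal_length_px, baseline_m, disparity_px):
    """Classic two-camera depth relation: Z = f * B / d.

    focal_length_px -- focal length expressed in pixels
    baseline_m      -- distance between the two camera centers, in meters
    disparity_px    -- horizontal pixel offset of a scene point between views
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    # Nearer objects have larger disparity, hence smaller depth.
    return focal_length_px * baseline_m / disparity_px
```

Running this per pixel over a disparity map yields the depth-of-field information recorded alongside each frame.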
Step 104: output the first video after the video recording is finished.
Step 106: receive a first input of the first video from a user.
It is to be understood that in step 106, the user's first input on the first video is used by the terminal device to determine the sharpness to which the video image needs to be adjusted.
Optionally, as shown in fig. 2, before step 106, an aperture adjustment control is displayed on the output interface of the first video; correspondingly, in step 106, receiving a first input of the first video from the user includes: receiving a first input to the aperture adjustment control from the user. The required sharpness can thus be determined from the relation between aperture value and sharpness; the operation is simple and simplifies the implementation on the terminal device.
Optionally, in some embodiments, before step 106 the method further includes: displaying prompt information on the output interface of the first video, the prompt information indicating that the first video is a video with adjustable sharpness. This lets the user know which videos support sharpness adjustment and makes the operation more convenient.
For example, as shown in fig. 3, a "video focusing" icon is displayed on the display interface, and when a user sees the icon, the user can know that the current video is a video with adjustable definition, and then can adjust the definition according to the need of the user. For example, a user clicks a "video focusing" icon, at this time, an aperture adjustment control may pop up on the display interface, and the user may adjust the definition of the picture by adjusting the size of the aperture.
Step 108: in response to the first input, adjust the sharpness of M frames of video images among the N frames according to the recorded depth-of-field information of the N frames, to obtain adjusted M frames of video images.
In particular, in some embodiments, the first input is a first input to the aperture adjustment control from the user. In this case, before adjusting the sharpness of the M frames of video images among the N frames, the method shown in fig. 1 further includes: determining an aperture value corresponding to the first input; and determining the target sharpness according to the aperture value. Correspondingly, in step 108, the sharpness of the M frames of video images is adjusted to the target sharpness. The user can thus adjust the sharpness of the video image by adjusting the aperture size up and down, which facilitates operation.
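The mapping from aperture value to target sharpness is left unspecified by the patent. A wider aperture (smaller f-number) gives a shallower depth of field, so defocused regions blur more; the inverse relation below is one plausible sketch of that mapping, not the patent's formula:

```python
def aperture_to_blur_radius(f_number, max_radius=10.0):
    """Map an aperture f-number to a simulated defocus blur radius.

    Smaller f-numbers (wider apertures) mean shallower depth of field, so
    out-of-focus regions receive a larger blur radius. max_radius caps the
    effect; both the cap and the inverse mapping are illustrative choices.
    """
    if f_number <= 0:
        raise ValueError("f-number must be positive")
    return min(max_radius, max_radius / f_number)
```

Sliding the aperture control up and down then translates directly into a per-frame blur strength for the defocused regions.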
Further, the identification information of the N frames of video images and their depth-of-field information are stored in association. In this case, adjusting the sharpness of the M frames of video images to the target sharpness includes: determining the depth-of-field information associated with the identification information of each of the M frames; and adjusting the sharpness of the M frames to the target sharpness according to the depth-of-field information of each frame. Storing the identification information and the depth-of-field information in association makes it convenient to look up the depth-of-field information of a video image when its sharpness is adjusted.
For example, the identification information and the depth information of the video images are stored in a look-up table. As shown in fig. 4, Frame 0, Frame 1, Frame 2, Frame 3, and Frame 4 are identification information of the video images, and Depth 0, Depth 1, Depth 2, Depth 3, and Depth 4 are the depth information corresponding to Frame 0 through Frame 4, respectively.
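The frame-ID-to-depth association of fig. 4 can be sketched as a minimal lookup table; the class and method names are illustrative:

```python
class DepthTable:
    """Minimal sketch of the frame-ID -> depth-info lookup table of fig. 4."""

    def __init__(self):
        self._table = {}

    def store(self, frame_id, depth_info):
        # Called during recording: associate each frame with its depth info.
        self._table[frame_id] = depth_info

    def lookup(self, frame_id):
        # Called during re-editing: retrieve the depth info for one frame.
        return self._table[frame_id]
```

During recording each captured frame's ID is stored with its depth map; during re-editing the adjustment step looks up each of the M frames by ID.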
Optionally, in some embodiments, before adjusting the sharpness of the M frames of video images among the N frames, the method further includes: displaying the N frames of video images frame by frame; receiving a second input of the M frames of video images from a user; and determining, in response to the second input, a target position to be adjusted in the M frames of video images. Adjusting the sharpness of the M frames of video images among the N frames then includes: adjusting the sharpness of the image at the target position in the M frames of video images.
That is, the video image may be played frame by frame, and the user may select a position of the sharpness to be adjusted on the display interface of the video image. Therefore, the user can adjust the definition of different positions of the video images of different frames, the free selection of the position to be adjusted in each frame of video image is realized, and the user experience is improved.
Further, when the video images are played frame by frame, the user may apply the first input to the aperture adjustment control on the display interface of each video image, for example adjusting the aperture value to different values for different frames. Different video images can thus be adjusted to different sharpness levels, producing more interesting and artistic video images and further improving the user experience.
The frame-by-frame playing may be played at a preset time interval, or may be played when an input triggering playing of a next frame of video image is received from a user.
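The per-position adjustment described above can be sketched as a depth-guided refocus: the tapped pixel's depth defines the focal plane, pixels near that depth stay sharp, and the rest are marked for blurring. The function, the tolerance parameter, and the tuple marking are illustrative assumptions:

```python
def refocus_frame(pixels, depth_map, target_xy, blur_radius, tolerance=0.5):
    """Keep pixels near the tapped position's depth sharp; mark others blurred.

    pixels      -- 2-D list of pixel values
    depth_map   -- 2-D list of per-pixel depth values (same shape as pixels)
    target_xy   -- (row, col) the user tapped; its depth is the focal plane
    blur_radius -- blur strength derived from the chosen aperture value
    """
    r, c = target_xy
    focal_depth = depth_map[r][c]
    out = []
    for i, row in enumerate(pixels):
        new_row = []
        for j, p in enumerate(row):
            if abs(depth_map[i][j] - focal_depth) <= tolerance:
                new_row.append(p)  # in-focus plane: left sharp
            else:
                new_row.append(("blur", p, blur_radius))  # defocused: blurred
        out.append(new_row)
    return out
```

A production implementation would apply an actual spatial blur (e.g. a Gaussian) to the marked pixels rather than tagging them.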
Step 110: generate a target video according to the adjusted M frames of video images; where N and M are positive integers and N ≥ M.
Specifically, the adjusted M frames of video images may be correspondingly replaced with the M frames of video images before adjustment in the first video, so as to generate the target video.
Of course, in practical application, only the adjusted M frames of video images may be used to generate the target video according to the user's requirements.
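Assembling the target video by substituting the adjusted frames back into the first video can be sketched as follows; the helper is hypothetical and stands in for the real re-encoding pipeline:

```python
def assemble_target_video(original_frames, adjusted_frames, indices):
    """Replace the M pre-adjustment frames with their adjusted versions.

    original_frames -- the N frames of the first video
    adjusted_frames -- the M adjusted frames
    indices         -- positions of those M frames within the N frames
    """
    result = list(original_frames)
    for idx, frame in zip(indices, adjusted_frames):
        result[idx] = frame
    return result  # frames of the target video, in original order
```

The alternative mentioned above, generating the target video from only the M adjusted frames, would simply return `adjusted_frames` directly.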
According to the embodiment of the invention, the definition of the video image in the recorded video can be adjusted by utilizing the depth-of-field information corresponding to the video image in the video. Therefore, when part of pictures in the recorded video are not clear or the user wants to edit the recorded video to show artistic video and the like, the definition of the image can be adjusted according to the input of the user to the video, and the requirements of the user are further met.
Fig. 5 is a flow chart of a video processing method according to an embodiment of the invention. The video processing method shown in fig. 5 may include the following steps:
Step 201: prompt the user that the video is a video with adjustable sharpness.
For example, a user may browse recorded videos in the album; if a video is a dual-camera recording, an icon or prompt is shown at a suitable position (e.g., the upper right corner) of its preview image, indicating that the depth information of the video file can be re-edited to adjust the sharpness of the video images.
Step 202: enter the re-editing mode in response to user input.
For example, the user clicks the "video focusing" icon in fig. 3, and the terminal device enters the video image re-editing mode.
Step 203: determine the picture whose sharpness needs readjustment.
The user can click a position in the preview picture where the sharpness is to be adjusted, whereupon an aperture adjustment control pops up on the preview interface.
Step 204: determine the aperture value corresponding to the user's input to the aperture adjustment control, and adjust the sharpness of the video image based on that aperture value.
Through input to the aperture adjustment control, the user adjusts the aperture size; the terminal device determines the target sharpness from the aperture size, then processes the recorded depth-of-field information of the video image to simulate a preview of the effect at the position where the user wants to readjust the sharpness.
The terminal device can automatically play the video images frame by frame at low speed. The user can slide the aperture adjustment control on a video image whose sharpness is to be adjusted, or click the position to be adjusted on the image, and the terminal device adjusts the sharpness of the image at the clicked position according to the aperture value.
Step 205, after editing frame by frame, confirm and save.
And after the user finishes editing, saving the generated video file to finish re-editing. The previously generated video file can be overwritten, and the user obtains the customized edited video.
Fig. 6 is a schematic structural diagram of a terminal device according to an embodiment of the present invention. As shown in fig. 6, the terminal device includes:
the processing module 11 is configured to record depth-of-field information of the N frames of video images acquired by the camera during video recording;
the display module 12 is configured to output a first video after the video recording is finished;
the receiving module 13 is configured to receive a first input of the first video from a user;
the processing module 11 is further configured to adjust, in response to the first input, the sharpness of M frames of video images among the N frames according to the recorded depth-of-field information, to obtain adjusted M frames of video images, and to generate a target video according to the adjusted M frames;
where N and M are positive integers and N ≥ M.
The terminal equipment of the embodiment of the invention can adjust the definition of the video image in the recorded video by utilizing the depth-of-field information corresponding to the video image in the video. Therefore, when part of pictures in the recorded video are not clear or the user wants to edit the recorded video to show artistic video and the like, the definition of the video image can be adjusted according to the input of the user to the video, and the requirements of the user are further met.
In the embodiment of the present invention, optionally, the display module 12 is further configured to:
before the receiving module receives a first input of a user to the first video, display an aperture adjustment control on an output interface of the first video;
wherein the receiving module 13 is specifically configured to:
receive a first input to the aperture adjustment control from the user.
In this embodiment of the present invention, optionally, the processing module 11 is further configured to:
determine an aperture value corresponding to the first input before adjusting the sharpness of the M frames of video images among the N frames;
determine the target sharpness according to the aperture value;
and, in terms of adjusting the sharpness of the M frames of video images among the N frames, the processing module 11 is specifically configured to:
adjust the sharpness of the M frames of video images to the target sharpness.
In this embodiment of the present invention, optionally, the processing module 11 is specifically configured to:
store the identification information of the N frames of video images and the depth information of the N frames of video images in association;
determine the depth information associated with the identification information of each of the M frames of video images;
and adjust the sharpness of the M frames of video images to the target sharpness according to the depth information of each frame.
In this embodiment of the present invention, optionally, the display module 12 is specifically configured to:
display the N frames of video images frame by frame before the processing module adjusts the sharpness of the M frames of video images among the N frames;
the receiving module is further configured to: receive a second input of the M frames of video images from a user;
the processing module 11 is specifically configured to: determine a target position to be adjusted in the M frames of video images in response to the second input;
and adjust the sharpness of the image at the target position in the M frames of video images.
In the embodiment of the present invention, optionally, the display module 12 is further configured to:
before the receiving module receives a first input of a user to the first video, display prompt information on an output interface of the first video, the prompt information being used to indicate that the first video is a video with adjustable sharpness.
The terminal device provided in the embodiment of the present invention can implement each process implemented by the terminal device in the method embodiments of fig. 1 to fig. 4, and is not described herein again to avoid repetition.
Fig. 7 is a schematic diagram of a hardware structure of a terminal device according to another embodiment of the present invention, where the terminal device 700 includes but is not limited to: a radio frequency unit 701, a network module 702, an audio output unit 703, an input unit 704, a sensor 705, a display unit 706, a user input unit 707, an interface unit 708, a memory 709, a processor 710, a power supply 711, and the like. Those skilled in the art will appreciate that the terminal device configuration shown in fig. 7 does not constitute a limitation of the terminal device; the terminal device may include more or fewer components than shown, combine certain components, or use a different arrangement of components. In the embodiment of the present invention, the terminal device includes, but is not limited to, a mobile phone, a tablet computer, a notebook computer, a palm computer, a vehicle-mounted terminal, a wearable device, a pedometer, and the like.
The processor 710 is configured to:
recording the depth of field information of N frames of video images collected by a camera in the video recording process;
outputting a first video after the video recording is finished;
receiving a first input of the first video from a user;
responding to the first input, and adjusting the definition of M frames of video images in the N frames of video images according to the recorded depth-of-field information of the N frames of video images to obtain the adjusted M frames of video images;
generating a target video according to the adjusted M frames of video images;
where N and M are positive integers and N ≥ M.
The terminal device provided by the embodiment of the invention can adjust the sharpness of a video image in a recorded video according to the depth-of-field information corresponding to that image. Thus, when part of the recorded video is unclear, or the user wants to re-edit the recording into an artistic video, the sharpness of the image can be adjusted according to the user's input, meeting the user's needs.
It should be understood that, in the embodiment of the present invention, the radio frequency unit 701 may be used for receiving and sending signals during message transmission and reception or during a call; specifically, it receives downlink data from a base station and passes it to the processor 710 for processing, and transmits uplink data to the base station. In general, the radio frequency unit 701 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 701 may also communicate with a network and other devices through a wireless communication system.
The terminal device provides the user with wireless broadband internet access through the network module 702, such as helping the user send and receive e-mails, browse webpages, and access streaming media.
The audio output unit 703 may convert audio data received by the radio frequency unit 701 or the network module 702, or stored in the memory 709, into an audio signal and output it as sound. The audio output unit 703 may also provide audio output related to a specific function performed by the terminal device 700 (e.g., a call signal reception sound, a message reception sound, etc.). The audio output unit 703 includes a speaker, a buzzer, a receiver, and the like.
The input unit 704 is used to receive audio or video signals. The input unit 704 may include a graphics processing unit (GPU) 7041 and a microphone 7042. The graphics processor 7041 processes image data of still pictures or video obtained by an image capture device (e.g., a camera) in video capture mode or image capture mode. The processed image frames may be displayed on the display unit 706, stored in the memory 709 (or other storage medium), or transmitted via the radio frequency unit 701 or the network module 702. The microphone 7042 may receive sound and process it into audio data; in phone-call mode, the processed audio data may be converted into a format transmittable to a mobile communication base station via the radio frequency unit 701.
The terminal device 700 further comprises at least one sensor 705, such as a light sensor, motion sensors, and other sensors. Specifically, the light sensor includes an ambient light sensor, which adjusts the luminance of the display panel 7061 according to the brightness of ambient light, and a proximity sensor, which turns off the display panel 7061 and/or the backlight when the terminal device 700 is moved to the ear. As one kind of motion sensor, an accelerometer can detect the magnitude of acceleration in each direction (generally three axes), detect the magnitude and direction of gravity when stationary, and be used to identify the terminal device's posture (such as horizontal/vertical screen switching, related games, and magnetometer posture calibration) and vibration-identification functions (such as pedometer and tapping). The sensors 705 may also include a fingerprint sensor, pressure sensor, iris sensor, molecular sensor, gyroscope, barometer, hygrometer, thermometer, infrared sensor, etc., which are not described in detail herein.
The display unit 706 is used to display information input by the user or provided to the user. The display unit 706 may include a display panel 7061, and the display panel 7061 may be configured in the form of a liquid crystal display (LCD), an organic light-emitting diode (OLED), or the like.
The user input unit 707 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the terminal device. Specifically, the user input unit 707 includes a touch panel 7071 and other input devices 7072. The touch panel 7071, also referred to as a touch screen, collects touch operations by a user on or near it (e.g., operations using a finger, a stylus, or any other suitable object or attachment). The touch panel 7071 may include a touch detection device and a touch controller: the touch detection device detects the user's touch direction, detects the signal produced by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into touch point coordinates, sends the coordinates to the processor 710, and receives and executes commands from the processor 710. The touch panel 7071 can be implemented in resistive, capacitive, infrared, surface acoustic wave, and other types. The user input unit 707 may include other input devices 7072 in addition to the touch panel 7071; in particular, these may include, but are not limited to, a physical keyboard, function keys (such as volume control keys and switch keys), a trackball, a mouse, and a joystick, which are not described herein again.
Further, the touch panel 7071 may be overlaid on the display panel 7061. When the touch panel 7071 detects a touch operation on or near it, the operation is transmitted to the processor 710 to determine the type of touch event, and the processor 710 then provides a corresponding visual output on the display panel 7061 according to that type. Although in fig. 7 the touch panel 7071 and the display panel 7061 are shown as two independent components implementing the input and output functions of the terminal device, in some embodiments they may be integrated to implement these functions, which is not limited herein.
The interface unit 708 is an interface for connecting an external device to the terminal device 700. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 708 may be used to receive input (e.g., data information, power, etc.) from an external device and transmit it to one or more elements within the terminal device 700, or to transmit data between the terminal device 700 and the external device.
The memory 709 may be used to store software programs as well as various data. The memory 709 may mainly include a program storage area and a data storage area: the program storage area may store an operating system and the application programs required by at least one function (such as a sound playing function or an image playing function); the data storage area may store data created according to the use of the device (such as audio data and a phonebook). Further, the memory 709 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device.
The processor 710 is the control center of the terminal device. It connects the various parts of the entire terminal device by using various interfaces and lines, and performs the various functions of the terminal device and processes data by running or executing software programs and/or modules stored in the memory 709 and calling data stored in the memory 709, thereby monitoring the terminal device as a whole. The processor 710 may include one or more processing units; preferably, the processor 710 may integrate an application processor, which mainly handles the operating system, user interfaces, application programs, etc., and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor described above may alternatively not be integrated into the processor 710.
The terminal device 700 may further include a power supply 711 (e.g., a battery) for supplying power to the various components. Preferably, the power supply 711 may be logically connected to the processor 710 through a power management system, so as to implement functions such as managing charging, discharging, and power consumption through the power management system.
In addition, the terminal device 700 includes some functional modules that are not shown, which are not described in detail herein.
Preferably, an embodiment of the present invention further provides a terminal device, which includes a processor 710, a memory 709, and a computer program stored in the memory 709 and capable of running on the processor 710, where the computer program, when executed by the processor 710, implements each process of the above video processing method embodiment and can achieve the same technical effect; to avoid repetition, details are not described here again.
An embodiment of the present invention further provides a computer-readable storage medium on which a computer program is stored, where the computer program, when executed by a processor, implements each process of the above video processing method embodiment and can achieve the same technical effect; to avoid repetition, details are not repeated here. The computer-readable storage medium may be a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.
While the present invention has been described with reference to the embodiments shown in the drawings, the present invention is not limited to the embodiments, which are illustrative and not restrictive, and it will be apparent to those skilled in the art that various changes and modifications can be made therein without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (10)

(Translated from Chinese)

1. A video processing method, comprising:
during a video recording process, recording depth-of-field information of N frames of video images collected by a camera;
after the recording ends, outputting a first video;
receiving a first input from a user on the first video;
determining an aperture value corresponding to the first input;
determining a target definition according to the aperture value;
in response to the first input, adjusting, according to the recorded depth-of-field information of the N frames of video images, the definition of M frames of video images among the N frames of video images to obtain adjusted M frames of images, wherein the adjusting the definition of the M frames of video images among the N frames of video images comprises: adjusting the definition of the M frames of video images to the target definition according to the depth-of-field information of each frame of video image among the M frames of images; and
generating a target video according to the adjusted M frames of video images;
wherein N and M are positive integers, and N ≥ M;
wherein, before adjusting the definition of the M frames of video images among the N frames of video images, the method further comprises: displaying the N frames of video images frame by frame; receiving a second input from the user on the M frames of video images; and, in response to the second input, determining a target position to be adjusted in the M frames of video images; and
the adjusting the definition of the M frames of video images among the N frames of video images comprises: adjusting the definition of the images at the target position in the M frames of video images.

2. The method according to claim 1, wherein, before receiving the first input from the user on the first video, the method further comprises:
displaying an aperture adjustment control on the output interface of the first video; and
the receiving the first input from the user on the first video comprises:
receiving the first input from the user on the aperture adjustment control.

3. The method according to claim 1, wherein the recording the depth-of-field information of the N frames of video images collected by the camera comprises:
storing identification information of the N frames of video images in association with the depth-of-field information of the N frames of video images;
wherein the adjusting the definition of the M frames of video images to the target definition according to the depth-of-field information of each frame of video image comprises:
determining the depth-of-field information associated with the identification information of each frame of video image among the M frames of video images; and
adjusting the definition of the M frames of video images to the target definition according to the depth-of-field information of each frame of video image.

4. The method according to any one of claims 1 to 3, wherein, before receiving the first input from the user on the first video, the method further comprises:
displaying prompt information on the output interface of the first video, the prompt information being used to indicate that the first video is a video with adjustable definition.

5. A terminal device, comprising:
a processing module, configured to record depth-of-field information of N frames of video images collected by a camera during a video recording process;
a display module, configured to output a first video after the recording ends; and
a receiving module, configured to, in response to a first input, adjust the definition of M frames of video images among the N frames of video images according to the recorded depth-of-field information of the N frames of video images to obtain adjusted frames of images, where N and M are positive integers and N ≥ M;
the processing module being further configured to generate a target video according to the adjusted M frames of images;
wherein the processing module is further configured to: before adjusting the definition of the M frames of video images among the N frames of video images, determine an aperture value corresponding to the first input, and determine a target definition according to the aperture value; and, in adjusting the definition of the M frames of video images among the N frames of video images, the processing module is specifically configured to: adjust the definition of the M frames of video images to the target definition according to the depth-of-field information of each frame of video image among the M frames of video images;
wherein the display module is specifically configured to: before the processing module adjusts the definition of the M frames of video images among the N frames of video images, display the N frames of video images frame by frame; the receiving module is further configured to: receive a second input from the user on the M frames of video images; and the processing module is specifically configured to: in response to the second input, determine a target position to be adjusted in the M frames of video images, and adjust the definition of the images at the target position in the M frames of video images.

6. The terminal device according to claim 5, wherein the display module is further configured to:
before the receiving module receives the first input from the user on the first video, display an aperture adjustment control on the output interface of the first video;
wherein the receiving module is specifically configured to:
receive the first input from the user on the aperture adjustment control.

7. The terminal device according to claim 5, wherein the processing module is specifically configured to:
store identification information of the N frames of video images in association with the depth-of-field information of the N frames of video images;
determine the depth-of-field information associated with the identification information of each frame of video image among the M frames of video images; and
adjust the definition of the M frames of video images to the target definition according to the depth-of-field information of each frame of video image.

8. The terminal device according to any one of claims 5 to 7, wherein the display module is further configured to:
before the receiving module receives the first input from the user on the first video, display prompt information on the output interface of the first video, the prompt information being used to indicate that the first video is a video with adjustable definition.

9. A mobile terminal, comprising: a memory, a processor, and a computer program stored in the memory and capable of running on the processor, the computer program, when executed by the processor, implementing the steps of the method according to any one of claims 1 to 4.

10. A computer-readable storage medium on which a computer program is stored, the computer program, when executed by a processor, implementing the steps of the method according to any one of claims 1 to 4.
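The claimed data flow — record per-frame depth-of-field information during capture, map a user-chosen aperture value to a target definition, then adjust the definition of M of the N recorded frames — can be sketched as follows. This is only an illustrative sketch of the claim's logic, not the patented implementation; `Frame`, `aperture_to_target_definition`, `adjust_definition`, and the aperture-to-definition mapping are all hypothetical.

```python
# Hypothetical sketch of the claimed flow: depth info is recorded per frame,
# an aperture value chosen after recording determines a target definition,
# and M of the N frames are adjusted to that target.

from dataclasses import dataclass

@dataclass
class Frame:
    frame_id: int          # identification info, stored in association with depth info
    depth_of_field: float  # recorded depth-of-field information
    definition: float      # sharpness level, 0.0 (fully blurred) .. 1.0 (fully sharp)

def aperture_to_target_definition(aperture_f_number: float) -> float:
    """Map an aperture value to a target definition. A wider aperture
    (smaller f-number) implies a shallower depth of field, i.e. a lower
    definition for out-of-focus regions. Hypothetical monotone mapping."""
    return max(0.0, min(1.0, aperture_f_number / 16.0))

def adjust_definition(frames: list, m: int, aperture_f_number: float) -> list:
    """Adjust the definition of the first m of the recorded frames to the
    target definition derived from the aperture value."""
    target = aperture_to_target_definition(aperture_f_number)
    adjusted = []
    for frame in frames[:m]:
        # In a real device the recorded depth info would drive per-pixel
        # blurring at the target position; this sketch tracks one value per frame.
        adjusted.append(Frame(frame.frame_id, frame.depth_of_field, target))
    return adjusted

# Usage: 5 recorded frames (N = 5), adjust 3 of them (M = 3) for an f/2.0 input.
recorded = [Frame(i, depth_of_field=1.5 * i, definition=1.0) for i in range(5)]
result = adjust_definition(recorded, m=3, aperture_f_number=2.0)
print([round(f.definition, 3) for f in result])  # → [0.125, 0.125, 0.125]
```

The target video would then be re-encoded from the adjusted M frames plus the untouched N − M frames.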
CN201910092667.1A | 2019-01-30 | 2019-01-30 | Video processing method and terminal equipment | Active | CN109819188B (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN201910092667.1A CN109819188B (en) | 2019-01-30 | 2019-01-30 | Video processing method and terminal equipment

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN201910092667.1A CN109819188B (en) | 2019-01-30 | 2019-01-30 | Video processing method and terminal equipment

Publications (2)

Publication Number | Publication Date
CN109819188A (en) | 2019-05-28
CN109819188B (en) | 2022-02-08

Family

ID=66605821

Family Applications (1)

Application Number | Status | Publication | Priority Date | Filing Date | Title
CN201910092667.1A | Active | CN109819188B (en) | 2019-01-30 | 2019-01-30 | Video processing method and terminal equipment

Country Status (1)

Country | Link
CN (1) | CN109819188B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN112492212B (en)* | 2020-12-02 | 2022-05-06 | Vivo Mobile Communication Co., Ltd. | Photographing method, device, electronic device and storage medium
CN113010738B (en)* | 2021-02-08 | 2024-01-30 | Vivo Mobile Communication (Hangzhou) Co., Ltd. | Video processing method, device, electronic equipment and readable storage medium
CN114237800A (en)* | 2021-12-21 | 2022-03-25 | Vivo Mobile Communication Co., Ltd. | Document processing method, document processing device, electronic device, and medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN103763477A (en)* | 2014-02-21 | 2014-04-30 | Shanghai Guoke Electronics Co., Ltd. | Double-camera after-shooting focusing imaging device and method
US8799167B2 (en)* | 2010-07-13 | 2014-08-05 | Tec Solutions, Inc. | Biometric authentication system and biometric sensor configured for single user authentication
CN106534619A (en)* | 2016-11-29 | 2017-03-22 | Nubia Technology Co., Ltd. | Method and apparatus for adjusting focusing area, and terminal
CN107454332A (en)* | 2017-08-28 | 2017-12-08 | Xiamen Meitu Technology Co., Ltd. | Image processing method, device and electronic equipment

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
JP2007041046A (en)* | 2005-07-29 | 2007-02-15 | Eastman Kodak Co | Imaging apparatus
JP6351257B2 (en)* | 2013-12-26 | 2018-07-04 | Canon Inc. | Optical scanning device and image forming apparatus having the same
TWI549504B (en)* | 2014-08-11 | 2016-09-11 | Acer Inc. | Image capturing device and auto-focus compensation method thereof


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Wide aperture effect of the camera on the Huawei nova 2s phone; Huawei; "Huawei nova 2s phone wide aperture effect"; 2017-12-07; screenshots 1-5 of the camera's aperture photographing interface on the Huawei nova 2s phone*

Also Published As

Publication number | Publication date
CN109819188A (en) | 2019-05-28

Similar Documents

Publication | Title
CN108513070B (en) | Image processing method, mobile terminal and computer-readable storage medium
CN107995429B (en) | Shooting method and mobile terminal
CN110365907B (en) | Photographing method and device and electronic equipment
WO2021036536A1 (en) | Video photographing method and electronic device
WO2019137429A1 (en) | Picture processing method and mobile terminal
CN109361867B (en) | Filter processing method and mobile terminal
CN111010610B (en) | Video screenshot method and electronic equipment
CN108989672B (en) | A shooting method and mobile terminal
CN111050070B (en) | Video shooting method and device, electronic equipment and medium
CN107948562B (en) | Video recording method and video recording terminal
CN109102555B (en) | An image editing method and terminal
CN108924412B (en) | Shooting method and terminal equipment
CN110177296A (en) | A kind of video broadcasting method and mobile terminal
CN109922294B (en) | A video processing method and mobile terminal
CN110366027B (en) | Video management method and terminal equipment
CN111182211B (en) | Shooting method, image processing method, and electronic device
CN111147779A (en) | Video production method, electronic device and medium
CN109819188B (en) | Video processing method and terminal equipment
CN110798621A (en) | An image processing method and electronic device
CN110198428A (en) | A kind of multimedia file producting method and first terminal
CN110868535A (en) | A shooting method, a method for determining shooting parameters, an electronic device and a server
CN108924035B (en) | File sharing method and terminal
CN107959755B (en) | A kind of photographing method, mobile terminal and computer readable storage medium
CN111447365B (en) | Shooting method and electronic equipment
CN108833796A (en) | An image capturing method and terminal

Legal Events

Code | Title
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant
