CN108881742B - Video generation method and terminal equipment - Google Patents

Video generation method and terminal equipment

Info

Publication number
CN108881742B
Authority
CN
China
Prior art keywords
images
input
image
screen
terminal device
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810690884.6A
Other languages
Chinese (zh)
Other versions
CN108881742A (en)
Inventor
杨其豪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd
Priority to CN201810690884.6A
Publication of CN108881742A
Application granted
Publication of CN108881742B
Status: Active
Anticipated expiration

Abstract



Embodiments of the present invention provide a video generation method and a terminal device, applied in the field of communication technologies, to solve the problem that the process by which a terminal device edits pictures into a video is cumbersome. The solution is applied to a terminal device including a first screen and a second screen, and includes: receiving a first input from a user while a first image is displayed on the first screen; in response to the first input, displaying an image list on the second screen, where the image list includes M images; receiving a second input from the user; and in response to the second input, generating a target video from N images in the image list; where the M images include the first image, the N images are the same as or different from the M images, and M and N are both integers greater than 1. The solution applies specifically to the process in which a terminal device selects images and generates a video.


Description

Video generation method and terminal equipment
Technical Field
Embodiments of the present invention relate to the field of communication technologies, and in particular, to a video generation method and a terminal device.
Background
With the development of communication technology, terminal devices such as mobile phones and tablet computers have become increasingly intelligent to meet users' various requirements. For example, users increasingly expect the process of editing pictures into videos on a terminal device to be convenient.
Typically, pictures are saved in a gallery of the terminal device, and video editing functions that enable editing pictures into a video may be provided by a third-party application in the terminal device. Specifically, using a third-party application, the user can control the terminal device to select pictures to be edited from those stored on the device and to edit them into a video.
The problem with this method is that, when the user controls the terminal device to select multiple pictures to be edited, the user must perform a selection operation on each picture separately; that is, the user must control the terminal device to acquire the multiple pictures through multiple selection operations. Alternatively, the user can control the terminal device to acquire only one picture group (a group comprising multiple pictures) through a single selection operation, in which case some of those pictures may not be ones the user would have selected. The process by which the terminal device selects pictures is therefore cumbersome, and so is the process by which it edits pictures to generate a video.
Disclosure of Invention
Embodiments of the present invention provide a video generation method and a terminal device, aiming to solve the problem that the process by which a terminal device edits pictures and generates a video is cumbersome.
In order to solve the above technical problem, the embodiment of the present invention is implemented as follows:
In a first aspect, an embodiment of the present invention provides a video generation method applied to a terminal device, where the terminal device includes a first screen and a second screen. The method includes: receiving a first input from a user while a first image is displayed on the first screen; displaying an image list on the second screen in response to the first input, where the image list includes M images; receiving a second input from the user; and generating a target video from N images in the image list in response to the second input; where the M images include the first image, the N images are the same as or different from the M images, and M and N are both integers greater than 1.
In a second aspect, an embodiment of the present invention further provides a terminal device, where the terminal device includes a first screen and a second screen, and further includes a receiving module, a display module, and a generating module. The receiving module is configured to receive a first input from a user while a first image is displayed on the first screen; the display module is configured to display an image list on the second screen in response to the first input received by the receiving module, where the image list includes M images; the receiving module is further configured to receive a second input from the user; and the generating module is configured to generate a target video from N images in the image list in response to the second input received by the receiving module; where the M images include the first image, the N images are the same as or different from the M images, and M and N are both integers greater than 1.
In a third aspect, an embodiment of the present invention provides a terminal device, which includes a processor, a memory, and a computer program stored on the memory and operable on the processor, and when executed by the processor, the computer program implements the steps of the video generation method according to the first aspect.
In a fourth aspect, the present invention provides a computer-readable storage medium, on which a computer program is stored, and the computer program, when executed by a processor, implements the steps of the video generation method according to the first aspect.
In an embodiment of the present invention, a terminal device includes a first screen and a second screen. The terminal device receives a first input from a user while a first image is displayed on the first screen and, in response to the first input, displays on the second screen an image list including M images, the M images being acquired by the terminal device at least from the first image and including the first image. The terminal device receives a second input from the user and, in response, may generate the target video from N images in the image list, where the N images are the same as or different from the M images and M and N are both integers greater than 1. With this scheme, upon receiving the first input while the first screen displays the first image, the terminal device can select M images at least according to the first image, without receiving a separate selection input from the user for each of multiple images, and add the M images to the image list displayed on the second screen. The user can then select from the image list N images that are the same as or different from the M images and edit them into the target video. In addition, the user can view the first image to be edited in the video on the second screen while browsing the first image on the first screen, which helps the user select the required images. The user's steps for controlling the terminal device to select images can thus be simplified, thereby simplifying the steps of editing images and generating a video on the terminal device.
Drawings
Fig. 1 is a schematic diagram of the architecture of a possible Android operating system according to an embodiment of the present invention;
Fig. 2 is a schematic diagram of an operation provided by an embodiment of the present invention;
Fig. 3 is a schematic flowchart of a video generation method according to an embodiment of the present invention;
Fig. 4 is the first schematic diagram of content displayed by a terminal device according to an embodiment of the present invention;
Fig. 5 is the second schematic diagram of content displayed by a terminal device according to an embodiment of the present invention;
Fig. 6 is the third schematic diagram of content displayed by a terminal device according to an embodiment of the present invention;
Fig. 7 is the fourth schematic diagram of content displayed by a terminal device according to an embodiment of the present invention;
Fig. 8 is the fifth schematic diagram of content displayed by a terminal device according to an embodiment of the present invention;
Fig. 9 is the sixth schematic diagram of content displayed by a terminal device according to an embodiment of the present invention;
Fig. 10 is the seventh schematic diagram of content displayed by a terminal device according to an embodiment of the present invention;
Fig. 11 is a schematic structural diagram of a possible terminal device according to an embodiment of the present invention;
Fig. 12 is a schematic diagram of a hardware structure of a terminal device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that "/" in this context means "or"; for example, A/B may mean A or B. "And/or" herein merely describes an association between objects, indicating that three relationships may exist; for example, A and/or B may mean that A exists alone, that A and B exist simultaneously, or that B exists alone. "Plurality" means two or more.
It should be noted that, in the embodiments of the present invention, words such as "exemplary" or "for example" are used to indicate examples, illustrations, or explanations. Any embodiment or design described as "exemplary" or "for example" in the embodiments of the present invention is not to be construed as preferred or advantageous over other embodiments or designs. Rather, the words "exemplary" and "for example" are intended to present related concepts in a concrete fashion.
The terms "first" and "second," and the like, in the description and in the claims of the present invention are used for distinguishing between different objects and not for describing a particular order of the objects. For example, the first input and the second input, etc. are for distinguishing different inputs, rather than for describing a particular order of inputs.
According to the video generation method provided by the embodiment of the present invention, upon receiving the first input while the first screen displays the first image, the terminal device can select M images at least according to the first image, without receiving a selection input from the user for each of multiple images, and add the M images to the image list displayed on the second screen. The user can then select from the image list N images that are the same as or different from the M images and edit them into the target video. The user's steps for controlling the terminal device to select images can thus be simplified, thereby simplifying the steps of editing images and generating a video on the terminal device.
The terminal device in the embodiments of the present invention may be a mobile terminal device or a non-mobile terminal device. The mobile terminal device may be a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a vehicle-mounted terminal, a wearable device, an ultra-mobile personal computer (UMPC), a netbook, a Personal Digital Assistant (PDA), or the like; the non-mobile terminal device may be a Personal Computer (PC), a Television (TV), a teller machine, a self-service machine, or the like; this is not particularly limited in the embodiments of the present invention.
It should be noted that, in the video generation method provided in the embodiment of the present invention, the execution subject may be a terminal device (including a mobile terminal device and a non-mobile terminal device), a Central Processing Unit (CPU) of the terminal device, or a control module in the terminal device for executing the video generation method. The video generation method provided by the embodiment of the present invention is described below taking a terminal device executing the method as an example.
The terminal device in the embodiment of the present invention may be a terminal device having an operating system. The operating system may be an Android operating system, an iOS operating system, or another possible operating system, which is not specifically limited in the embodiments of the present invention.
The following describes a software environment applied to the video generation method provided by the embodiment of the present invention, taking an android operating system as an example.
Fig. 1 is a schematic diagram of an architecture of a possible android operating system according to an embodiment of the present invention. In fig. 1, the architecture of the android operating system includes 4 layers, which are respectively: an application layer, an application framework layer, a system runtime layer, and a kernel layer (specifically, a Linux kernel layer).
The application program layer comprises various application programs (including system application programs and third-party application programs) in an android operating system.
The application framework layer is the framework of applications; a developer can develop applications based on the application framework layer while complying with its development principles. Such applications include system applications, such as a system settings application, a system chat application, and a system camera application, as well as third-party applications, such as a third-party settings application, a third-party camera application, and a third-party chat application.
The system runtime layer includes libraries (also called system libraries) and android operating system runtime environments. The library mainly provides various resources required by the android operating system. The android operating system running environment is used for providing a software environment for the android operating system.
The kernel layer is an operating system layer of an android operating system and belongs to the bottommost layer of an android operating system software layer. The kernel layer provides kernel system services and hardware-related drivers for the android operating system based on the Linux kernel.
Taking an android operating system as an example, in the embodiment of the present invention, a developer may develop a software program for implementing the video generation method provided in the embodiment of the present invention based on the system architecture of the android operating system shown in fig. 1, so that the video generation method may operate based on the android operating system shown in fig. 1. Namely, the processor or the terminal device can implement the video generation method provided by the embodiment of the invention by running the software program in the android operating system.
References in the embodiments of the present invention to clockwise, counterclockwise, up, down, left, right, and the like are illustrated by taking user input on the display screen of the terminal device as an example; that is, these directions are defined relative to the terminal device or its display screen as the user provides input on it.
Illustratively, taking the user sliding in all directions in the area where the identifier of the Application (APP) is located as an example, as shown in fig. 2, on the display screen of the terminal device, 20 indicates that the user slides clockwise in the area where the identifier of the APP is located, 21 indicates that the user slides counterclockwise in the area where the identifier of the APP is located, 22 indicates that the user slides upward in the area where the identifier of the APP is located, 23 indicates that the user slides downward in the area where the identifier of the APP is located, 24 indicates that the user slides leftward in the area where the identifier of the APP is located, and 25 indicates that the user slides rightward in the area where the identifier of the APP is located.
The video generation method provided by the embodiment of the present invention is described in detail below with reference to the flowchart shown in fig. 3. The method is applied to a terminal device that includes a first screen and a second screen. Although a logical order is shown in the method flowchart, in some cases the steps shown or described may be performed in a different order. For example, the video generation method shown in fig. 3 may include steps 301 to 304:
Step 301, the terminal device receives a first input from a user while the first image is displayed on the first screen.
A gallery application (such as a system gallery application) in the terminal device stores a plurality of images (i.e., pictures).
Optionally, in the embodiment of the present invention, the interface displayed on the first screen of the terminal device may be an image-browsing interface provided by the gallery application, which displays one image at its maximum size. For example, the user may control the terminal device to display the first image on the first screen.
Specifically, the first input is used to trigger the terminal device to start selecting an image.
It should be noted that the screen (including the first screen and the second screen) of the terminal device provided in the embodiment of the present invention may be a touch screen, and the touch screen may be configured to receive an input from a user and display a content corresponding to the input to the user in response to the input. The first input may be a touch screen input, a fingerprint input, a gravity input, a key input, or the like. The touch screen input is input such as press input, long press input, slide input, click input, and hover input (input by a user near the touch screen) of a touch screen of the terminal device by the user. The fingerprint input is input by a user to a sliding fingerprint, a long-press fingerprint, a single-click fingerprint, a double-click fingerprint and the like of a fingerprint identifier of the terminal equipment. The gravity input is input such as shaking of the terminal equipment in a specific direction, shaking of the terminal equipment for a specific number of times and the like. The key input corresponds to a single-click input, a double-click input, a long-press input, a combination key input, and the like of the user for a key such as a power key, a volume key, a Home key, and the like of the terminal device. Specifically, the operation mode of the first input is not particularly limited in the embodiment of the present invention, and may be any achievable operation mode.
Illustratively, the click input may be a single click input, a double click input, or any number of click inputs. The slide input may be a slide input in any direction, such as an upward slide, a downward slide, a leftward slide, or a rightward slide.
Alternatively, the first input (denoted as input 1) may include a sub-input (denoted as sub-input Ta) of the user on the first screen, and a sub-input (denoted as sub-input Tb) of the user on the second screen. It will be appreciated that the sub-inputs Ta and Tb of the first input described above operate in the same or different ways.
It should be noted that the absolute value of the time difference between the sub input Ta and the sub input Tb received by the terminal device is within the preset range.
The preset range is greater than or equal to 0 and less than or equal to t1, where t1 may be 1 s, 2 s, or another value determined by practical circumstances; the embodiments of the present invention are not limited in this respect. The terminal device may receive the sub-input Ta and the sub-input Tb in either order: it may receive the sub-input Ta first and then the sub-input Tb, or the sub-input Tb first and then the sub-input Ta.
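As an illustration of this timing rule, the following is a minimal Kotlin sketch, assuming a hypothetical SubInput model and isPairedInput check (the names and the default threshold are illustrative, not from the patent):

```kotlin
import kotlin.math.abs

// Hypothetical model of one sub-input landing on a screen at a timestamp (ms).
data class SubInput(val screenId: Int, val timestampMs: Long)

// The two sub-inputs count as one first input when they land on different
// screens and |time(Ta) - time(Tb)| falls within the preset range [0, t1];
// the order of arrival does not matter.
fun isPairedInput(ta: SubInput, tb: SubInput, t1Ms: Long = 1000): Boolean =
    ta.screenId != tb.screenId && abs(ta.timestampMs - tb.timestampMs) <= t1Ms

fun main() {
    val ta = SubInput(screenId = 1, timestampMs = 5_000)
    val tb = SubInput(screenId = 2, timestampMs = 5_400)
    println(isPairedInput(ta, tb)) // true: 400 ms apart, within the 1 s window
}
```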
Exemplarily, fig. 4 is a schematic diagram of content displayed by a terminal device according to an embodiment of the present invention. The terminal device shown in fig. 4 includes a screen 41 and a screen 42, which are the first screen and the second screen, respectively. Specifically, the screen 41 shown in (4a) of fig. 4 displays an image P1, which may be the first image; with nothing displayed on the screen 42, the terminal device receives the user's sub-input Ta on the screen 41 and sub-input Tb on the screen 42. The sub-input Ta is a sliding input from the bottom edge to the left edge of the screen 41, moving bottom-to-top and right-to-left, whose sliding track is an arc. The sub-input Tb is a sliding input from the bottom edge to the right edge of the screen 42, moving bottom-to-top and left-to-right, whose sliding track is an arc.
Step 302, in response to the first input, the terminal device displays an image list on the second screen, where the image list includes M images.
Specifically, the M images are acquired by the terminal device at least according to the first image.
Wherein the M images include a first image, and M is an integer greater than 1.
Illustratively, when the screen 41 shown in (4a) of fig. 4 displays the image P1 and the terminal device receives the first input consisting of the sub-input Ta and the sub-input Tb shown in (4a) of fig. 4, the terminal device may display the content shown in (4b) of fig. 4. With the image P1 displayed on the screen 41 in (4b) of fig. 4, the image list displayed on the screen 42 includes five images: the image P2, the image P3, the image P1, the image P4, and the image P5; here M is equal to 5. These five images include the image P1; that is, the first image is included in the M images.
Optionally, in the embodiment of the present invention, the terminal device may display the same image at the same or different resolutions on the first screen and the second screen. For example, an image may be displayed at maximum size on the first screen and as a thumbnail on the second screen; that is, the image has a higher display resolution on the first screen than on the second screen.
Optionally, in the embodiment of the present invention, the image displayed on the first screen may be a local image stored by the gallery application in the terminal device, or a cloud image or network-side image previewed in the gallery application.
Step 303, the terminal device receives a second input of the user.
The second input is the input by which the user controls the terminal device to determine, from the displayed M images, the multiple images from which a video needs to be generated, and to edit those images into the video. For example, it may trigger the terminal device to edit the image P2, the image P3, the image P1, the image P4, and the image P5 displayed on the screen 42 in (4b) of fig. 4 into a video.
Step 304, in response to the second input, the terminal device generates the target video from the N images in the image list.
Wherein the N images are the same as or different from the M images, and N is an integer greater than 1.
Illustratively, the terminal device edits the image P2, the image P3, the image P1, the image P4, and the image P5 displayed on the screen 42 in (4b) of fig. 4 into the target video. In this case, the N images are identical to the M images.
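The composition step itself is not detailed in the patent; the following Kotlin sketch merely models generating the target video as emitting the N selected images as frames in list order (the Image, Frame, and Video types and the per-frame duration are assumptions for illustration):

```kotlin
data class Image(val name: String)
data class Frame(val image: Image, val durationMs: Long)
data class Video(val frames: List<Frame>)

// Each selected image becomes one frame of the target video, in list order.
fun generateTargetVideo(selected: List<Image>, frameDurationMs: Long = 500): Video {
    require(selected.size > 1) { "N must be an integer greater than 1" }
    return Video(selected.map { Frame(it, frameDurationMs) })
}

fun main() {
    val images = listOf("P2", "P3", "P1", "P4", "P5").map(::Image)
    val video = generateTargetVideo(images)
    println(video.frames.map { it.image.name }) // [P2, P3, P1, P4, P5]
}
```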
In the prior art, when the user controls the terminal device to use the video editing function of a third-party application, the user may trigger the terminal device to display a selection interface of the third-party application for selecting images, and control the terminal device to select the images to be edited on that interface. The user may then trigger the terminal device to close the selection interface and display an editing interface of the third-party application for editing the selected images into a video. When multiple third-party applications with video editing functions are installed on the terminal device, the inputs that trigger the video editing function may differ from one application to another. Therefore, in the prior art, to use the video editing functions of third-party applications, the user must spend time and effort learning the usage steps of each application's video editing function.
It can be understood that, in the video generation method provided in the embodiment of the present invention, the gallery application (e.g., the system gallery application) in the terminal device has a video editing function. While using the gallery application's video editing function, the user can trigger the terminal device to select the images to be edited (such as the above M images) through a specific input (such as the first input). At this time, the interface of the gallery application is displayed normally on the first screen of the terminal device, while the second screen displays the image list containing the user's selected images to be edited, which serves as the editing interface for those images. The terminal device can thus select images to be edited while the user browses images in the gallery application.
It is conceivable that, while browsing images in the gallery application of the terminal device, a user may want to edit some of the displayed images into a video. In the prior art, the terminal device would have to exit the gallery application and enter a third-party application, requiring the user to browse and select the images to be edited all over again. In the embodiment of the present invention, the user can select the images to be edited while browsing images in the gallery application. This simplifies the user's operations in controlling the terminal device to generate a video and improves how the user's selected images are displayed during the process, thereby improving the user experience of editing images into a video on the terminal device.
It should be noted that, in the video generation method provided in the embodiment of the present invention, the terminal device includes a first screen and a second screen. The terminal device receives a first input from the user while a first image is displayed on the first screen and, in response, displays on the second screen an image list including M images, the M images being acquired by the terminal device at least from the first image and including the first image. The terminal device receives a second input from the user and, in response, may generate the target video from N images in the image list, where the N images are the same as or different from the M images and M and N are both integers greater than 1. With this scheme, upon receiving the first input while the first screen displays the first image, the terminal device can select M images at least according to the first image, without receiving a separate selection input for each of multiple images, and add them to the image list displayed on the second screen. The user can then select from the list N images, the same as or different from the M images, and edit them into the target video. Moreover, the user can view the first image to be edited in the video on the second screen while browsing it on the first screen, making it easier to select the images needed. The user's steps for controlling the terminal device to select images are thus simplified, which in turn simplifies the steps of editing images and generating a video on the terminal device.
In a possible implementation manner of the video generation method provided by the embodiment of the present invention, the user may control the terminal device to update the number or the arrangement order of the M images to obtain N images, where the N images are different from the M images. Specifically, the second input includes a first sub-input, which is used to trigger updating the M images in the image list into the N images. Step 304 provided by the embodiment of the present invention may be replaced with step 304':
Step 304', the terminal device updates the M images in the image list into N images and generates the target video from the N images.
For example, for the operation mode of the first sub-input, reference may be made to the description of the operation mode of the first input in the foregoing embodiment; it is not repeated here.
The first sub-input may be used to add or remove images to be edited, that is, to add images to the M images or delete images from them.
Specifically, the user may control the terminal device to edit the finally selected image to be edited (i.e., the N images) into one video, so as to obtain the target video.
It should be noted that, with the video generation method provided in the embodiment of the present invention, the terminal device may update and display the newly selected images to be edited on the second screen while continuing to display images from the gallery application on the first screen. That is, the user does not need to control the terminal device to exit the gallery application before displaying an editing interface for the images. In this way, the steps for the user to select images via the terminal device can be further simplified, thereby further simplifying the steps of editing images and generating a video on the terminal device.
In a possible implementation manner of the video generation method provided by the embodiment of the present invention, the second input further includes N-1 second sub-inputs, which are used to determine the order in which the N images are synthesized into the frames of the video.
Specifically, "generating the target video from the N images" in the above step 304 may be implemented as step 304'':
Step 304'', the terminal device synthesizes the N images in the order of the frames to generate the target video.
Specifically, each second sub-input arranges the order of the images, yielding the order of the frames in the target video to be generated. Each second sub-input may be an input by the user at any position on the second screen of the terminal device.
It should be noted that, with the video generation method provided in the embodiment of the present invention, the user can control the terminal device to arrange the selected images to be edited (such as the above N images) in any order required, so that the order of frames in the target video meets the user's needs. The N-1 second sub-inputs can be performed at any position on the second screen without dragging any of the N images, sparing the user a cumbersome dragging operation. The steps of editing images and generating a video on the terminal device are thereby further simplified.
In a possible implementation manner, in the video generating method provided by the embodiment of the present invention, a pointer identifier is further displayed on the image list, and the image displayed on the first screen is an image indicated by the pointer identifier. The second sub-input is further used to trigger moving the pointer identification from a first position of the second image on the image list to a second position of the first image on the image list and updating the second image displayed on the first screen to the first image.
It can be understood that the image indicated by the pointer identifier on the image list is the image currently selected by the user.
Specifically, the pointer identifier displayed on the second screen is located at a position on the image list corresponding to an image.
Optionally, the position corresponding to an image may be on the image itself or near it, such as to its left or right.
Optionally, each second sub-input may be a sliding input by the user on the second screen whose sliding track is a circle; such a sub-input is denoted a first-type sub-input. When the sliding track is a clockwise circle, the terminal device takes the image indicated by the current pointer identifier as the previous frame image and the next image in the list as the next frame image; when the sliding track is a counterclockwise circle, the terminal device takes the image indicated by the current pointer identifier as the previous frame image and the image before it in the list as the next frame image.
In addition, each second sub-input may instead comprise one input by the user on the first screen (denoted input Tc) and another input on the second screen (denoted input Td); such sub-inputs are denoted second-type sub-inputs. Illustratively, the input Td is a press input by the user on the second screen, and the input Tc is a slide input by the user on the first screen. Specifically, while the user's fingers perform a second-type sub-input on the first and second screens of the terminal device, the terminal device may move the pointer identifier, following the user's finger, to the position corresponding to the operation, and take the image there as the current frame image, that is, the frame following the previous frame image.
Note that the absolute value of the time difference between the time when the terminal device receives the input Tc and the time when the terminal device receives the input Td is within a preset range.
When the terminal device receives the first of the N-1 second sub-inputs, which is typically a first-type sub-input, it may take the image indicated by the current pointer identifier (e.g., the image P1) as the first frame image.
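The following Kotlin sketch models the frame-ordering behavior described above, assuming a simple pointer index over the image list (the FrameOrderBuilder class and its method names are hypothetical):

```kotlin
// Hypothetical frame-order builder: 'images' is the M-image list shown on the
// second screen; 'pointer' is the index the pointer identifier currently marks.
class FrameOrderBuilder(private val images: List<String>, start: Int = 0) {
    var pointer = start
        private set
    val frameOrder = mutableListOf<String>()

    // The first second sub-input takes the currently indicated image
    // as the first frame of the video.
    private fun appendCurrent() { frameOrder += images[pointer] }

    // Clockwise circle: the current image is the previous frame, and the next
    // image in the list becomes the next frame.
    fun onClockwiseCircle() {
        if (frameOrder.isEmpty()) appendCurrent()
        pointer = (pointer + 1).coerceAtMost(images.lastIndex)
        appendCurrent()
    }

    // Counterclockwise circle: the image before the current one becomes the
    // next frame.
    fun onCounterclockwiseCircle() {
        if (frameOrder.isEmpty()) appendCurrent()
        pointer = (pointer - 1).coerceAtLeast(0)
        appendCurrent()
    }

    // Second-type sub-input (press on one screen, slide on the other): the
    // pointer follows the finger to an arbitrary position, and the image
    // there becomes the next frame.
    fun onMovePointerTo(index: Int) {
        if (frameOrder.isEmpty()) appendCurrent()
        pointer = index.coerceIn(0, images.lastIndex)
        appendCurrent()
    }
}

fun main() {
    // List order as in the figures: P2, P3, P1, P4, P5; pointer starts at P3.
    val b = FrameOrderBuilder(listOf("P2", "P3", "P1", "P4", "P5"), start = 1)
    b.onClockwiseCircle() // P3 becomes frame 1, P1 frame 2
    b.onMovePointerTo(0)  // P2 becomes frame 3
    b.onMovePointerTo(3)  // P4 becomes frame 4
    b.onMovePointerTo(4)  // P5 becomes frame 5
    println(b.frameOrder) // [P3, P1, P2, P4, P5]
}
```

Running main reproduces the frame order of the example below: the image P3, the image P1, the image P2, the image P4, and the image P5.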
Exemplarily, fig. 5 is a schematic diagram of content displayed by a terminal device according to an embodiment of the present invention. The pointer identifier 43 is displayed on the right side of the image P3 shown on the screen 42 in (5a) of fig. 5; at this point, the image P3 may be the above-mentioned second image. The terminal device may receive an input T2 as shown on the screen 42 in (5a) of fig. 5; the input T2 may be the first of the above-mentioned N-1 second sub-inputs, and is a slide input whose sliding track is a clockwise circle. After receiving the input T2, the terminal device may display the content shown in (5b) of fig. 5.
In (5b) of fig. 5, the image P1 is displayed on the screen 41, and the pointer identifier 43 displayed on the screen 42 is located on the right side of the image P1. That is, the terminal device moves the pointer identifier from the position corresponding to the image P3 to the position corresponding to the image P1, and updates the image displayed on the first screen from the image P3 to the image P1. At this time, the image P3 is the second image and the image P1 is the first image. Specifically, the terminal device takes the image P3 as the first frame image of the video and the image P1 as the second frame image.
Subsequently, fig. 6 is a schematic diagram of content displayed by a terminal device according to an embodiment of the present invention. In conjunction with fig. 5, the pointer identifier 43 is displayed on the right side of the image P1 shown on the screen 42 in (6a) of fig. 6. The screen 41 shown in (6a) of fig. 6 receives the user's slide input (input Tc) from the bottom edge to the left edge of the first screen, moving bottom-to-top and right-to-left, whose sliding track is an arc, while the second screen receives the user's press input (input Td). In this way, the user may control the terminal device to move the pointer identifier 43 on the screen 42 from the right side of the image P1 to the right side of the image P2. Subsequently, as shown in (6b) of fig. 6, the pointer identifier 43 on the screen 42 is displayed on the right side of the image P2. Specifically, the terminal device takes the image P2 as the third frame image of the video.
Similarly, the terminal device may take the image P4 as the fourth frame image of the video and the image P5 as the fifth. Thus, the order of frames in the target video is: the image P3, the image P1, the image P2, the image P4, and the image P5.
It should be noted that, in the video generation method provided in the embodiment of the present invention, the terminal device may move the pointer identifier on the second screen under the user's control to determine the order of frames in the video to be generated. Meanwhile, the image displayed on the first screen may change as the pointer identifier moves. In this way, the user can conveniently arrange the order of the N images as required and, while doing so, view the maximized display of the currently selected image (i.e., the image indicated by the pointer identifier) on the first screen.
In a possible implementation manner of the video generation method provided in the embodiment of the present invention, before step 302, the method may further include step 305:
step 305, the terminal device obtains M images at least according to the first image.
Optionally, M may be a fixed value preset by the user in the terminal device, where the M images include the first image. The value of M may be modified by the user before making the first input.
Alternatively, step 305 may be implemented as step 305'.
In step 305', the terminal device obtains M images according to the attribute information of the first image, where M is a preset value, and the arrangement order of the M images is associated with the attribute information of the M images.
For example, the attribute information of an image may be its data size, name (e.g., the first letter of the name), saving time, saving path, shooting location, and the like. Correspondingly, the images may be arranged in order of data size from small to large, by the first letters of their names, by saving time from earliest to latest, by the first letters of their saving paths, or by the first letters of their shooting locations.
Alternatively, the M images may be centered on the first image, consisting of the first image together with several images whose attribute information precedes it and several whose attribute information follows it. Alternatively, the M images may consist of the first image and the M-1 images whose attribute information precedes it, or of the first image and the M-1 images whose attribute information follows it.
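As a sketch of the centered-window variant, the following Kotlin snippet assumes the gallery is already sorted by the chosen attribute (the selectCentered helper is hypothetical):

```kotlin
// Hypothetical selection of M images from a gallery list sorted by an
// attribute (e.g., saving time), taking a window centered on the first image
// and clamped to the ends of the list.
fun <T> selectCentered(sorted: List<T>, first: T, m: Int): List<T> {
    val i = sorted.indexOf(first)
    require(i >= 0 && m in 2..sorted.size)
    val start = (i - (m - 1) / 2).coerceIn(0, sorted.size - m)
    return sorted.subList(start, start + m)
}

fun main() {
    val gallery = listOf("P2", "P3", "P1", "P4", "P5", "P6", "P7") // attribute order
    println(selectCentered(gallery, first = "P1", m = 5)) // [P2, P3, P1, P4, P5]
}
```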
Alternatively, the above step 305 may be implemented by step 305'':
Step 305'', the terminal device acquires the M images, arranged in the order of their attribute information, based on the attribute information of the first image and on the first input.
Wherein the value of M is associated with the input parameter of the first input; the input parameter includes at least one of an input duration, a length of an input sliding trajectory, and an input pressure value.
Illustratively, the longer the duration of the first input in the embodiment of the present invention is, the larger the value of M determined by the terminal device is.
Similarly, in the embodiment of the present invention, the effect of the first input's sliding-track length and pressure value on the value of M can be understood by analogy with the above description of the input duration; it is not repeated here.
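A minimal Kotlin sketch of such a mapping, with invented thresholds purely for illustration (the patent states only that M is associated with these parameters):

```kotlin
// Hypothetical mapping from the first input's parameters to the value of M:
// the longer the input lasts, the longer its sliding track, or the harder the
// press, the more images are selected. All thresholds here are made up.
data class InputParams(val durationMs: Long, val trackLengthPx: Float, val pressure: Float)

fun imageCountFor(p: InputParams, minM: Int = 2, maxM: Int = 20): Int {
    val fromDuration = (p.durationMs / 500).toInt()   // +1 image per 500 ms held
    val fromTrack = (p.trackLengthPx / 100f).toInt()  // +1 image per 100 px slid
    val fromPressure = (p.pressure * 5f).toInt()      // pressure assumed in [0, 1]
    return (minM + fromDuration + fromTrack + fromPressure).coerceIn(minM, maxM)
}

fun main() {
    println(imageCountFor(InputParams(1500, 200f, 0.4f))) // 2 + 3 + 2 + 2 = 9
}
```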
It should be noted that, in the video generation method provided in the embodiment of the present invention, the terminal device may obtain a fixed number of M images including the first image. Therefore, the steps of editing the image and generating the video by the terminal equipment are further simplified.
In a possible implementation manner, in the video generation method provided by the embodiment of the present invention, the first sub-input is specifically used to trigger the removal of the image indicated by the pointer identifier from the second screen.
Optionally, for the description of the first sub-input, reference may be made to the above description of the first input; it is not repeated here.
Alternatively, the first sub-input may comprise one input by the user on the first screen (denoted as input Te) and another input on the second screen (denoted as input Tf). Illustratively, the input Te is a sliding input from bottom to top and from right to left from the bottom edge to the left edge of the first screen, and a sliding track of the sliding input is an arc. The input Tf is a sliding input in which a sliding track of the user on the second screen is a straight line from left to right.
Exemplarily, fig. 7 is a schematic diagram of content displayed by a terminal device according to an embodiment of the present invention. The pointer identifier 43 is displayed on the right side of the image P1 shown on the screen 42 in (7a) of fig. 7. The screen 41 shown in (7a) of fig. 7 receives the user's slide input (input Te) from the bottom edge to the left edge of the first screen, moving bottom-to-top and right-to-left, whose sliding track is an arc, while the second screen receives the user's slide input (input Tf), whose sliding track is a straight line from left to right.
It will be appreciated that the input Te is mainly used to move the position of the pointer identifier, i.e., to change the selected image, while the input Tf is mainly used to remove the image indicated by the pointer identifier, i.e., the selected image.
Specifically, "the terminal device updates the M images in the image list into N images" in the above step 304' may be implemented by step 306:
Step 306, the terminal device removes the image indicated by the pointer identifier from the second screen, so that the displayed M images are updated to N images.
For example, as shown in (7b) of fig. 7, the user can control the terminal device to move the pointer identifier 43 displayed on the screen 42 from the right side of the image P1 to the right side of the image P4, and to display the image P4 on the screen 41.
Illustratively, the above N is then equal to 4, and the N images include the image P2, the image P3, the image P4, and the image P5.
It should be noted that, with the video generation method provided by the embodiment of the present invention, the terminal device may allow the user to delete unneeded images displayed on the second screen, so that the generated target video contains only the images the user needs.
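A minimal Kotlin sketch of this removal step (the removeIndicated helper and its pointer-adjustment rule are assumptions):

```kotlin
// Hypothetical removal step: the first sub-input removes the image currently
// indicated by the pointer identifier, updating the M-image list to N = M - 1,
// and returns a still-valid pointer position.
fun removeIndicated(images: MutableList<String>, pointer: Int): Int {
    images.removeAt(pointer)
    return pointer.coerceAtMost(images.lastIndex) // stay on a valid neighbour
}

fun main() {
    val list = mutableListOf("P2", "P3", "P1", "P4", "P5")
    val newPointer = removeIndicated(list, pointer = 2) // remove P1
    println(list)             // [P2, P3, P4, P5] (N = 4, as in the fig. 7 example)
    println(list[newPointer]) // P4: the pointer moves on to the next image
}
```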
In a possible implementation manner, in the video generation method provided by the embodiment of the present invention, the first sub-input is specifically used to trigger addition of P images to the image list, a value of P is associated with an input parameter of the first sub-input, and P is a positive integer.
Specifically, "the terminal device updates the M images in the image list into N images" in the above step 304' may be implemented by step 306':
Step 306', the terminal device acquires the P images, and arranges and displays the P images and the M images according to the attribute information of the P images and that of the M images.
The arrangement order of the P images is associated with their attribute information, the image list includes the arranged P images and M images, and N = M + P.
Optionally, the image contents of the P images are determined according to the input area of the first sub-input.
Illustratively, the P images may be arranged before the M images; or after the M images; or some of the P images may be arranged before the M images and the rest after them.
Alternatively, the first sub-input may comprise one input by the user on the first screen (denoted as input Tg) and another input on the second screen (denoted as input Th). Illustratively, the input Tg is a sliding input whose sliding track on the first screen is a straight line from right to left, and the input Th is a sliding input whose sliding track on the second screen is a straight line from left to right. In this case, the first sub-input triggers the terminal device to acquire images other than the M images and add them as images to be edited, that is, add them to the M images.
Note that the absolute value of the time difference between the time when the terminal device receives the input Tg and the time when the terminal device receives the input Th is within the preset range.
Alternatively, the relationship between the arrangement order of the P images and that of the M images may be determined by the input position of the first sub-input on the first screen.
Illustratively, when the input Tg of the first sub-input falls in the upper one-third region of the first screen (denoted region Q1) and the input Th falls in the upper one-third region of the second screen (denoted region Q2), the P images are arranged before the M images.
When the input Tg falls in the middle one-third region of the first screen (denoted region Q3) and the input Th falls in the upper one-third region of the second screen (denoted region Q4), some of the P images (denoted first partial images) are arranged before the M images, and the remaining P images (denoted second partial images) are arranged after the M images. The number of first partial images may be the same as or different from the number of second partial images.
When the input Tg falls in the lower one-third region of the first screen (denoted region Q5) and the input Th falls in the lower one-third region of the second screen (denoted region Q6), the P images are arranged after the M images.
Exemplarily, fig. 8 is a schematic diagram of content displayed by a terminal device according to an embodiment of the present invention. In conjunction with fig. 4, the input Tg shown in (8a) of fig. 8 is a slide input whose sliding track is a straight line from right to left in the region Q3 of the first screen, and the input Th is a slide input whose sliding track is a straight line from left to right in the region Q4 of the second screen. After receiving the input Tg and the input Th, the terminal device may display the content shown in (8b) of fig. 8, where the image P6 is arranged before the M images and the image P7 after them. The image P6 and the image P7 are the above-mentioned P images, with P equal to 2.
In the video generation method provided in the embodiment of the present invention, the terminal device may select P images according to the input parameters and the input area of the first sub-input within the user's second input, and use the P images to update the M images into N images. This improves the flexibility of the image selection process and further simplifies the steps of editing images and generating a video on the terminal device.
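The region-to-placement mapping described above can be sketched in Kotlin as follows (the Region enum and the even split for the middle case are assumptions; the patent allows the two parts to differ in size):

```kotlin
// Hypothetical placement of the P newly added images relative to the M images,
// driven by which third of the screens the first sub-input lands in
// (regions Q1/Q2: before; Q3/Q4: split around; Q5/Q6: after).
enum class Region { UPPER_THIRD, MIDDLE_THIRD, LOWER_THIRD }

fun placeAdded(m: List<String>, p: List<String>, region: Region): List<String> =
    when (region) {
        Region.UPPER_THIRD -> p + m          // all P images before the M images
        Region.LOWER_THIRD -> m + p          // all P images after the M images
        Region.MIDDLE_THIRD -> {             // split the P images around M
            val half = p.size / 2
            p.take(half) + m + p.drop(half)
        }
    }

fun main() {
    val m = listOf("P2", "P3", "P1", "P4", "P5")
    println(placeAdded(m, listOf("P6", "P7"), Region.MIDDLE_THIRD))
    // [P6, P2, P3, P1, P4, P5, P7] (as in the fig. 8 example, N = M + P = 7)
}
```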
Optionally, the first sub-input is used to trigger the terminal device to copy L images and display them on the second screen, where the arrangement order of the copied images is determined by the direction and/or the end position of the first sub-input on the second screen, and L is a positive integer.
Specifically, "the terminal device updates the M images in the image list into N images" in the above step 304' may be implemented by step 306'':
Step 306'', the terminal device copies the image indicated by the pointer identifier, and arranges and displays the copied image together with the M images according to the attribute information of the copied image and that of the M images.
The first sub-input may comprise one input by the user on the first screen (denoted as input Ti) and another input on the second screen (denoted as input Tj). Illustratively, the input Ti is a sliding input in which a sliding track of the user on the first screen is a straight line from right to left. The input Tj is a sliding input in which a sliding track of the user on the second screen is a straight line from top to bottom.
Exemplarily, fig. 9 is a schematic diagram of content displayed by a terminal device according to an embodiment of the present invention. In conjunction with fig. 4, the input Ti shown in (9a) of fig. 9 is a slide input whose sliding track on the first screen is a straight line from right to left, and the input Tj is a slide input whose sliding track on the second screen is a straight line from top to bottom, triggering the terminal device to copy the image P1 to obtain an image P8. After receiving the input Ti and the input Tj, the terminal device may display the content shown in (9b) of fig. 9, where the image P8 is arranged immediately after the image P1.
In this way, the terminal device can conveniently and quickly copy images for the video to be generated, so that several (e.g., two) consecutively arranged images in the generated target video have identical content, achieving a freeze-frame effect in the target video.
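A minimal Kotlin sketch of this copy step (the duplicateIndicated helper and the "-copy" naming are purely illustrative):

```kotlin
// Hypothetical copy step: duplicating the pointer-indicated image and placing
// the copy right after the original yields two identical consecutive frames,
// i.e., a freeze-frame effect in the generated video.
fun duplicateIndicated(images: MutableList<String>, pointer: Int) {
    images.add(pointer + 1, images[pointer] + "-copy")
}

fun main() {
    val list = mutableListOf("P2", "P3", "P1", "P4", "P5")
    duplicateIndicated(list, pointer = 2) // copy P1, like the image P8 in fig. 9
    println(list) // [P2, P3, P1, P1-copy, P4, P5]
}
```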
In a possible implementation manner, in the video generating method provided by the embodiment of the present invention, after "the terminal device updates M images in the image list to N images" in step 304', the method may further include steps 307 and 308:
and 307, the terminal equipment receives a third input of the user.
For example, the operation manner of the third input may refer to the description of the operation manner of the first input in the foregoing embodiment, and the description of the embodiment of the present invention is omitted.
Alternatively, the third input may include a sub-input (denoted as input Tx) of the user on the first screen and a sub-input (denoted as input Ty) of the user on the second screen. It will be appreciated that the input Tx and the input Ty in the first input described above operate in the same or different ways.
Note that the absolute value of the time difference between the time when the terminal device receives the input Tx and the time when the input Ty is within the preset range.
Exemplarily, fig. 10 is a schematic diagram of content displayed by a terminal device according to an embodiment of the present invention. In conjunction with (7b) of fig. 7, the input Tx shown in (10a) of fig. 10 is a sliding input from the left edge to the bottom edge of the first screen, moving top-to-bottom and left-to-right, whose sliding track is an arc. The input Ty is a sliding input from the right edge to the bottom edge of the second screen, moving top-to-bottom and right-to-left, whose sliding track is an arc.
Step 308, in response to the third input, the terminal device updates the N images in the image list back into the M images.
That is, after receiving the input Tx and the input Ty, the terminal device may cancel the operation that removed the image P1, so that the second screen again displays the image P2, the image P3, the image P1, the image P4, and the image P5, as shown in (10b) of fig. 10.
It should be noted that, in the video generation method provided by the embodiment of the present invention, while selecting images and editing the order of their frames, the user may cancel the result of the last operation through a specific input (e.g., the third input). This improves the flexibility of the image selection process and further simplifies the steps of editing images and generating a video on the terminal device.
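One plausible way to support such cancellation is an undo stack that snapshots the image list before each edit; the following Kotlin sketch is an assumption of this kind, not the patent's stated mechanism:

```kotlin
// Hypothetical undo support: each edit to the image list (removal, addition,
// copy, reorder) snapshots the previous state; the third input pops the stack,
// restoring the list, e.g., turning the N images back into the original M.
class EditableImageList(initial: List<String>) {
    private val history = ArrayDeque<List<String>>()
    var images: List<String> = initial
        private set

    fun edit(op: (List<String>) -> List<String>) {
        history.addLast(images) // snapshot before applying the edit
        images = op(images)
    }

    fun undo() { // triggered by the third input
        if (history.isNotEmpty()) images = history.removeLast()
    }
}

fun main() {
    val list = EditableImageList(listOf("P2", "P3", "P1", "P4", "P5"))
    list.edit { it - "P1" } // remove P1 (N = 4)
    list.undo()             // the third input cancels the removal
    println(list.images)    // [P2, P3, P1, P4, P5], the M images again
}
```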
The video generation method according to the embodiment of the present invention is described below with reference to the terminal device shown in fig. 11. The terminal device 11 shown in fig. 11 includes a first screen and a second screen, and further includes a receiving module 11a, a display module 11b, and a generating module 11c. The receiving module 11a is configured to receive a first input from a user while a first image is displayed on the first screen; the display module 11b is configured to display an image list on the second screen in response to the first input received by the receiving module 11a, where the image list includes M images; the receiving module 11a is further configured to receive a second input from the user; and the generating module 11c is configured to generate, in response to the second input received by the receiving module 11a, a target video from N images in the image list displayed by the display module 11b; where the M images include the first image, the N images are the same as or different from the M images, and M and N are both integers greater than 1.
Optionally, the N images are different from the M images; the second input comprises a first sub-input, and the first sub-input is used for triggering the updating of the M images in the image list into N images; the generating module 11c is specifically configured to update the M images in the image list into N images, and generate the target video according to the N images.
Optionally, the second input further includes N-1 second sub-inputs, where the N-1 second sub-inputs are used to determine the order of the frames in which the N images are synthesized into a video; the generating module 11c is specifically configured to synthesize the N images according to the order of the frames to generate the target video.
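As an illustrative sketch of this frame-order synthesis (the patent names no codec or library; OpenCV, the frame rate, the frame size, and the file names below are assumptions), the N selected images could be written to a video file in the order fixed by the N-1 second sub-inputs:

```python
# Hedged sketch using OpenCV (an assumption; the patent specifies no
# library): write the N images as video frames in the user-chosen order.

import cv2

def generate_target_video(ordered_paths, out_path="target.mp4",
                          fps=2, size=(1280, 720)):
    """Compose the images, already sorted into frame order, into a video."""
    fourcc = cv2.VideoWriter_fourcc(*"mp4v")
    writer = cv2.VideoWriter(out_path, fourcc, fps, size)
    for path in ordered_paths:
        frame = cv2.imread(path)         # assumes the image file exists
        frame = cv2.resize(frame, size)  # all frames must share one size
        writer.write(frame)
    writer.release()

# A frame order as it might result from the N-1 second sub-inputs:
generate_target_video(["p2.jpg", "p3.jpg", "p1.jpg", "p4.jpg", "p5.jpg"])
```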
Optionally, a pointer identifier is further displayed on the image list, and the image displayed on the first screen is the image indicated by the pointer identifier; the second sub-input is further used to trigger moving the pointer identifier from a first position of the second image on the image list to a second position of the first image on the image list, and updating the second image displayed on the first screen to the first image.
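The pointer behavior can likewise be sketched: moving the pointer identifier within the list also changes what the first screen shows. This is a minimal, hypothetical model; the class and method names are illustrative.

```python
# Minimal sketch (names illustrative): the first screen always mirrors the
# image indicated by the pointer identifier in the second screen's list.

class DualScreenBrowser:
    def __init__(self, images):
        self.images = images
        self.pointer = 0                  # position of the pointer identifier

    @property
    def first_screen_image(self):
        return self.images[self.pointer]  # image shown on the first screen

    def move_pointer_to(self, image):
        """Second sub-input: move the pointer identifier to the given image."""
        self.pointer = self.images.index(image)

browser = DualScreenBrowser(["P1", "P2", "P3"])
browser.move_pointer_to("P2")             # pointer moves to image P2
print(browser.first_screen_image)         # -> 'P2' now shown on first screen
```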
Optionally, the terminal device 11 further includes: a first obtaining module; the first obtaining module is configured to obtain the M images according to the attribute information of the first image before the display module 11b displays the M images on the second screen; wherein M is a preset value, and the arrangement order of the M images is associated with the attribute information of the M images.
Optionally, the terminal device 11 further includes: a second obtaining module; the second obtaining module is configured to obtain the M images according to the attribute information of the first image and the first input before the display module 11b displays the M images on the second screen; wherein the value of M is associated with the input parameter of the first input; the input parameter includes at least one of an input duration, a length of an input sliding track, and an input pressure value; and the arrangement order of the M images is associated with the attribute information of the M images.
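How the value of M might follow from the input parameters is left open by the description; a minimal sketch, assuming simple linear scaling factors chosen here purely for illustration, could look like this:

```python
# Hypothetical mapping (the scaling factors are assumptions) from the first
# input's parameters -- duration, sliding-track length, or pressure -- to M.

def count_from_input(duration_s=None, track_len_px=None, pressure=None,
                     m_min=2, m_max=30):
    """Derive M from whichever input parameter is available."""
    if duration_s is not None:
        m = int(duration_s * 5)        # e.g. 5 images per second of holding
    elif track_len_px is not None:
        m = int(track_len_px / 100)    # e.g. 1 image per 100 px of track
    elif pressure is not None:
        m = int(pressure * 10)         # pressure assumed normalized to [0, 1]
    else:
        m = m_min
    return max(m_min, min(m, m_max))   # clamp so M stays an integer > 1

print(count_from_input(duration_s=2.0))  # -> 10 images in the list
```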
Optionally, the first sub-input is specifically used to trigger the removal of the image indicated by the pointer identifier from the second screen; the generating module 11c is specifically configured to remove the image indicated by the pointer identifier from the image list displayed on the second screen.
Optionally, the first sub-input is specifically used to trigger adding P images to the image list, where the value of P is associated with an input parameter of the first sub-input; the input parameter includes at least one of an input duration, a length of an input sliding track, and an input pressure value, and P is a positive integer; the generating module 11c is specifically configured to obtain the P images, and arrange and display the P images and the M images according to the attribute information of the P images and the attribute information of the M images; wherein the arrangement order of the P images is associated with the attribute information of the P images, the image list includes the arranged P images and M images, and N = M + P.
Optionally, the image contents of the P images are determined according to the input area of the first sub-input; and the P images are arranged before the M images; or the P images are arranged after the M images; or some of the P images are arranged before the M images, and the remaining P images are arranged after the M images.
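A short sketch of this arrangement rule follows; the mapping of input areas to placements is an assumption, since the text leaves it open.

```python
# Hypothetical placement of the P added images around the M existing images,
# keyed on the input area of the first sub-input (area names are assumed).

def arrange(m_images, p_images, input_area):
    if input_area == "upper":          # P images arranged before the M images
        return p_images + m_images
    if input_area == "lower":          # P images arranged after the M images
        return m_images + p_images
    half = len(p_images) // 2          # otherwise split P around the M images
    return p_images[:half] + m_images + p_images[half:]

print(arrange(["P1", "P2"], ["Q1", "Q2"], "middle"))  # ['Q1', 'P1', 'P2', 'Q2']
```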
Optionally, the terminal device 11 further includes: an updating module; the receiving module is further configured to receive a third input from the user after the generating module 11c updates the M images in the image list to the N images; and the updating module is configured to update the N images in the image list to the M images in response to the third input received by the receiving module.
It should be noted that the terminal device provided in the embodiment of the present invention includes a first screen and a second screen. The terminal device receives a first input from a user in a state where a first image is displayed on the first screen, and displays an image list including M images on the second screen in response to the first input, where the M images are acquired by the terminal device at least according to the first image and include the first image. The terminal device receives a second input from the user and may generate a target video according to N images in the image list in response to the second input, where the N images are the same as or different from the M images, and M and N are both integers greater than 1. Based on this scheme, upon receiving the first input while the first screen displays the first image, the terminal device can select the M images at least according to the first image, without receiving a separate selection input from the user for each of a plurality of images, and add the M images to the image list displayed on the second screen. The user can then select, from the image list, N images that are the same as or different from the M images, and edit the N images into the target video. In addition, while browsing the first image on the first screen, the user can view on the second screen the images to be edited into the video, which helps the user select the desired images. Therefore, the steps for the user to control the terminal device to select images can be simplified, thereby simplifying the steps of editing images and generating a video on the terminal device.
The terminal device 11 provided in the embodiment of the present invention can implement each process implemented by the terminal device in the foregoing method embodiments; to avoid repetition, details are not described here again.
Fig. 12 is a schematic diagram of a hardware structure of a terminal device according to an embodiment of the present invention. The terminal device 100 includes, but is not limited to: a radio frequency unit 101, a network module 102, an audio output unit 103, an input unit 104, a sensor 105, a display unit 106 (including the first screen and the second screen), a user input unit 107, an interface unit 108, a memory 109, a processor 110, and a power supply 111. Those skilled in the art will appreciate that the terminal device configuration shown in fig. 12 does not constitute a limitation of the terminal device, and that the terminal device may include more or fewer components than shown, combine certain components, or have a different arrangement of components. In the embodiment of the present invention, the terminal device includes, but is not limited to, a mobile phone, a tablet computer, a notebook computer, a palm computer, a vehicle-mounted terminal device, a wearable device, a pedometer, and the like.
A user input unit 107, configured to receive a first input of a user in a state where a first image is displayed on the first screen; a display unit 106, configured to display an image list including M images on the second screen in response to the first input received by the user input unit 107; the user input unit 107 is further configured to receive a second input from the user; a processor 110, configured to generate a target video from N images in the image list displayed by the display unit 106 in response to the second input received by the user input unit 107; wherein the M images include the first image, the N images are the same as or different from the M images, and M and N are both integers greater than 1.
It should be noted that the terminal device provided in the embodiment of the present invention includes a first screen and a second screen. The terminal device receives a first input from a user in a state where a first image is displayed on the first screen, and displays an image list including M images on the second screen in response to the first input, where the M images are acquired by the terminal device at least according to the first image and include the first image. The terminal device receives a second input from the user and may generate a target video according to N images in the image list in response to the second input, where the N images are the same as or different from the M images, and M and N are both integers greater than 1. Based on this scheme, upon receiving the first input while the first screen displays the first image, the terminal device can select the M images at least according to the first image, without receiving a separate selection input from the user for each of a plurality of images, and add the M images to the image list displayed on the second screen. The user can then select, from the image list, N images that are the same as or different from the M images, and edit the N images into the target video. In addition, while browsing the first image on the first screen, the user can view on the second screen the images to be edited into the video, which helps the user select the desired images. Therefore, the steps for the user to control the terminal device to select images can be simplified, thereby simplifying the steps of editing images and generating a video on the terminal device.
It should be understood that, in the embodiment of the present invention, the radio frequency unit 101 may be used for receiving and sending signals during a message transmission or call process. Specifically, after receiving downlink data from a base station, the radio frequency unit 101 delivers the downlink data to the processor 110 for processing; in addition, it transmits uplink data to the base station. Typically, the radio frequency unit 101 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 101 can also communicate with a network and other devices through a wireless communication system.
The terminal device provides wireless broadband internet access to the user through the network module 102, such as helping the user send and receive e-mails, browse webpages, access streaming media, and the like.
The audio output unit 103 may convert audio data received by the radio frequency unit 101 or the network module 102, or stored in the memory 109, into an audio signal and output it as sound. Moreover, the audio output unit 103 may provide audio output related to a specific function performed by the terminal device 100 (e.g., a call signal reception sound or a message reception sound). The audio output unit 103 includes a speaker, a buzzer, a receiver, and the like.
The input unit 104 is used to receive audio or video signals. The input unit 104 may include a Graphics Processing Unit (GPU) 1041 and a microphone 1042. The graphics processor 1041 processes image data of still pictures or videos obtained by an image capturing device (e.g., a camera) in a video capturing mode or an image capturing mode. The processed image frames may be displayed on the display unit 106. The image frames processed by the graphics processor 1041 may be stored in the memory 109 (or another storage medium) or transmitted via the radio frequency unit 101 or the network module 102. The microphone 1042 may receive sound and may be capable of processing such sound into audio data. In a phone call mode, the processed audio data may be converted into a format transmittable to a mobile communication base station via the radio frequency unit 101 for output.
The terminal device 100 also includes at least one sensor 105, such as a light sensor, a motion sensor, and other sensors. Specifically, the light sensor includes an ambient light sensor that can adjust the brightness of the display panel 1061 according to the brightness of ambient light, and a proximity sensor that can turn off the display panel 1061 and/or the backlight when the terminal device 100 is moved to the ear. As one of the motion sensors, the accelerometer sensor can detect the magnitude of acceleration in each direction (generally three axes), detect the magnitude and direction of gravity when stationary, and can be used to identify the terminal device posture (such as horizontal and vertical screen switching, related games, and magnetometer posture calibration) and for vibration identification related functions (such as pedometer and tapping); the sensor 105 may also include a fingerprint sensor, a pressure sensor, an iris sensor, a molecular sensor, a gyroscope, a barometer, a hygrometer, a thermometer, an infrared sensor, etc., which are not described in detail herein.
The display unit 106 is used to display information input by the user or information provided to the user. The display unit 106 may include a display panel 1061, and the display panel 1061 may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like.
The user input unit 107 may be used to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the terminal device. Specifically, the user input unit 107 includes a touch panel 1071 and other input devices 1072. The touch panel 1071, also referred to as a touch screen, may collect touch operations by a user on or near it (e.g., operations by a user on or near the touch panel 1071 using a finger, a stylus, or any suitable object or attachment). The touch panel 1071 may include two parts: a touch detection device and a touch controller. The touch detection device detects the touch position of the user, detects a signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives touch information from the touch detection device, converts it into touch point coordinates, sends the coordinates to the processor 110, and receives and executes commands sent by the processor 110. In addition, the touch panel 1071 may be implemented in various types, such as resistive, capacitive, infrared, and surface acoustic wave. In addition to the touch panel 1071, the user input unit 107 may include other input devices 1072. Specifically, the other input devices 1072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys and switch keys), a trackball, a mouse, and a joystick, which are not described in detail herein.
Further, the touch panel 1071 may be overlaid on the display panel 1061. When the touch panel 1071 detects a touch operation on or near it, the touch panel 1071 transmits the touch operation to the processor 110 to determine the type of the touch event, and the processor 110 then provides a corresponding visual output on the display panel 1061 according to the type of the touch event. Although in fig. 12 the touch panel 1071 and the display panel 1061 are two independent components implementing the input and output functions of the terminal device, in some embodiments the touch panel 1071 and the display panel 1061 may be integrated to implement the input and output functions of the terminal device, which is not limited herein.
The interface unit 108 is an interface for connecting an external device to the terminal device 100. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 108 may be used to receive input (e.g., data information or power) from an external device and transmit the received input to one or more elements within the terminal device 100, or may be used to transmit data between the terminal device 100 and the external device.
The memory 109 may be used to store software programs as well as various data. The memory 109 may mainly include a program storage area and a data storage area, where the program storage area may store an operating system, an application program required by at least one function (such as a sound playing function or an image playing function), and the like, and the data storage area may store data created according to the use of the mobile phone (such as audio data and a phonebook). Further, the memory 109 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.
The processor 110 is the control center of the terminal device; it connects various parts of the entire terminal device by using various interfaces and lines, and performs various functions of the terminal device and processes data by running or executing software programs and/or modules stored in the memory 109 and calling data stored in the memory 109, thereby monitoring the terminal device as a whole. The processor 110 may include one or more processing units; preferably, the processor 110 may integrate an application processor and a modem processor, where the application processor mainly handles the operating system, user interfaces, application programs, and the like, and the modem processor mainly handles wireless communications. It can be appreciated that the modem processor may alternatively not be integrated into the processor 110.
The terminal device 100 may further include a power supply 111 (such as a battery) for supplying power to each component. Preferably, the power supply 111 may be logically connected to the processor 110 through a power management system, so as to implement functions such as managing charging, discharging, and power consumption through the power management system.
In addition, the terminal device 100 includes some functional modules that are not shown, which are not described in detail here.
Preferably, an embodiment of the present invention further provides a terminal device, including a processor 110, a memory 109, and a computer program stored in the memory 109 and capable of running on the processor 110. When executed by the processor 110, the computer program implements each process of the foregoing method embodiments and can achieve the same technical effects; to avoid repetition, details are not described here again.
The embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the computer program implements the processes of the method embodiments, and can achieve the same technical effects, and in order to avoid repetition, the details are not repeated here. The computer-readable storage medium may be a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in the process, method, article, or apparatus that comprises the element.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal device (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.
While the present invention has been described with reference to the embodiments shown in the drawings, the present invention is not limited to the embodiments, which are illustrative and not restrictive, and it will be apparent to those skilled in the art that various changes and modifications can be made therein without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (13)

1. A video generation method, applied to a terminal device, the terminal device comprising a first screen and a second screen, wherein the method comprises:
in a state where a first image is displayed on the first screen, receiving a first input from a user, wherein the first input is used to trigger the terminal device to start selecting images;
in response to the first input, displaying an image list on the second screen, wherein the image list includes M images acquired by the terminal device according to attribute information of the first image, the value of M is associated with an input parameter of the first input, and the input parameter includes at least one of an input duration, a length of an input sliding track, and an input pressure value;
receiving a second input from the user; and
in response to the second input, generating a target video according to N images in the image list;
wherein the M images include the first image, the N images are the same as or different from the M images, and M and N are both integers greater than 1.
2. The method according to claim 1, wherein the N images are different from the M images; the second input includes a first sub-input, the first sub-input being used to trigger updating the M images in the image list to the N images; and the generating the target video according to the N images in the image list comprises: updating the M images in the image list to the N images, and generating the target video according to the N images.
3. The method according to claim 2, wherein the second input further includes N-1 second sub-inputs, the N-1 second sub-inputs being used to determine the order of the frames in which the N images are synthesized into a video; and the generating the target video according to the N images comprises: synthesizing the N images according to the order of the frames to generate the target video.
4. The method according to claim 3, wherein a pointer identifier is further displayed on the image list, and the image displayed on the first screen is the image indicated by the pointer identifier; the second sub-input is further used to trigger moving the pointer identifier from a first position of a second image on the image list to a second position of the first image on the image list, and updating the second image displayed on the first screen to the first image.
5. The method according to claim 3, wherein before the displaying the image list on the second screen, the method further comprises: acquiring the M images according to the attribute information of the first image; wherein M is a preset value, and the arrangement order of the M images is associated with attribute information of the M images.
6. The method according to claim 5, wherein before the displaying the image list on the second screen, the method further comprises: acquiring the M images according to the attribute information of the first image and the first input; wherein the value of M is associated with the input parameter of the first input, the input parameter includes at least one of an input duration, a length of an input sliding track, and an input pressure value, and the arrangement order of the M images is associated with the attribute information of the M images.
7. The method according to claim 4, wherein the first sub-input is specifically used to trigger removing the image indicated by the pointer identifier from the second screen; and the updating the M images in the image list to the N images comprises: removing the image indicated by the pointer identifier from the image list displayed on the second screen.
8. The method according to claim 2, wherein the first sub-input is specifically used to trigger adding P images to the image list, a value of P being associated with an input parameter of the first sub-input; the input parameter includes at least one of an input duration, a length of an input sliding track, and an input pressure value, and P is a positive integer; and the updating the M images in the image list to the N images comprises: acquiring the P images, and arranging and displaying the P images and the M images according to attribute information of the P images and the attribute information of the M images; wherein the arrangement order of the P images is associated with the attribute information of the P images, the image list includes the arranged P images and M images, and N = M + P.
9. The method according to claim 8, wherein image contents of the P images are determined according to an input area of the first sub-input; and the P images are arranged before the M images; or the P images are arranged after the M images; or some of the P images are arranged before the M images, and the remaining P images are arranged after the M images.
10. The method according to claim 2, wherein after the updating the M images in the image list to the N images, the method further comprises: receiving a third input from the user; and in response to the third input, updating the N images in the image list to the M images.
11. A terminal device, wherein the terminal device comprises a first screen and a second screen, and further comprises a receiving module, a display module, and a generating module; the receiving module is configured to receive a first input from a user in a state where a first image is displayed on the first screen, the first input being used to trigger the terminal device to start selecting images; the display module is configured to display an image list on the second screen in response to the first input received by the receiving module, the image list including M images acquired by the terminal device according to attribute information of the first image, wherein the value of M is associated with an input parameter of the first input, and the input parameter includes at least one of an input duration, a length of an input sliding track, and an input pressure value; the receiving module is further configured to receive a second input from the user; and the generating module is configured to generate, in response to the second input received by the receiving module, a target video according to N images in the image list displayed by the display module; wherein the M images include the first image, the N images are the same as or different from the M images, and M and N are both integers greater than 1.
12. A terminal device, comprising a processor, a memory, and a computer program stored in the memory and executable on the processor, wherein the computer program, when executed by the processor, implements the steps of the video generation method according to any one of claims 1 to 10.
13. A computer-readable storage medium, wherein a computer program is stored on the computer-readable storage medium, and the computer program, when executed by a processor, implements the steps of the video generation method according to any one of claims 1 to 10.
Application: CN201810690884.6A | Priority date: 2018-06-28 | Filing date: 2018-06-28 | Title: Video generation method and terminal equipment | Status: Active | Granted publication: CN108881742B (en)

Priority Applications (1)
Application Number: CN201810690884.6A | Priority Date: 2018-06-28 | Filing Date: 2018-06-28 | Title: Video generation method and terminal equipment

Applications Claiming Priority (1)
Application Number: CN201810690884.6A | Priority Date: 2018-06-28 | Filing Date: 2018-06-28 | Title: Video generation method and terminal equipment

Publications (2)
CN108881742A (en) | Publication date: 2018-11-23
CN108881742B (en) | Publication date: 2021-06-08

Family
Family ID: 64296499

Family Applications (1)
Application Number: CN201810690884.6A | Title: Video generation method and terminal equipment | Priority Date: 2018-06-28 | Filing Date: 2018-06-28 | Status: Active | Granted publication: CN108881742B (en)

Country Status (1)
CN (1): CN108881742B (en)

Families Citing this family (4)
(* cited by examiner, † cited by third party)
CN109769089B (en)* | Priority: 2018-12-28 | Published: 2021-03-16 | Assignee: Vivo Mobile Communication Co., Ltd. | Title: Image processing method and terminal equipment
CN110022445B (en)* | Priority: 2019-02-26 | Published: 2022-01-28 | Assignee: Vivo Software Technology Co., Ltd. | Title: Content output method and terminal equipment
CN109889757B (en) | Priority: 2019-03-29 | Published: 2021-05-04 | Assignee: Vivo Mobile Communication Co., Ltd. | Title: A video call method and terminal device
CN119052543A (en)* | Priority: 2023-05-29 | Published: 2024-11-29 | Assignee: Honor Device Co., Ltd. | Title: Video creation method and electronic equipment


Patent Citations (5)
(* cited by examiner, † cited by third party)
US8600214B2* | Priority: 2007-10-29 | Published: 2013-12-03 | Assignee: Samsung Electronics Co., Ltd. | Title: Portable terminal and method for managing videos therein
CN105230005A* | Priority: 2013-05-10 | Published: 2016-01-06 | Assignee: Samsung Electronics Co., Ltd. | Title: Display device and control method thereof
CN105791976A* | Priority: 2015-01-14 | Published: 2016-07-20 | Assignee: Samsung Electronics Co., Ltd. | Title: Generation and display of highlight videos associated with source content
CN106961559A* | Priority: 2017-03-20 | Published: 2017-07-18 | Assignee: Vivo Mobile Communication Co., Ltd. | Title: The preparation method and electronic equipment of a kind of video
CN107948730A* | Priority: 2017-10-30 | Published: 2018-04-20 | Assignee: Baidu Online Network Technology (Beijing) Co., Ltd. | Title: Method, apparatus, equipment and storage medium based on picture generation video

Also Published As
CN108881742A (en) | Publication date: 2018-11-23


Legal Events
PB01: Publication
SE01: Entry into force of request for substantive examination
GR01: Patent grant
