Disclosure of Invention
The present application mainly aims to provide an image acquisition method, a terminal device and a computer-readable storage medium, so as to solve the technical problem that the shooting and imaging effect of a terminal device is poor.
In order to achieve the above object, the present application provides an image acquisition method applied to a terminal device with cameras, where the terminal device includes at least two cameras, the focal segment captured by each camera is different, and the image acquisition method includes the following steps:
when the triggering operation of the cameras is detected, starting the cameras, controlling the cameras to pick up image information respectively, and forming a camera preview interface;
when the photographing triggering operation is detected, generating a picture according to the camera preview interface;
and saving the picture.
Optionally, the step of forming a camera preview interface includes:
determining a target camera according to a preset zoom value;
and combining main preview image information and supplementary preview image information to form the camera preview interface, wherein the main preview image information is image information picked up by the target camera, and the supplementary preview image information is image information picked up by other cameras except the target camera.
Optionally, the step of combining the main preview image information and the supplemental preview image information to form the camera preview interface includes:
acquiring an area overlapped with the main preview image information in the supplementary preview image information;
and combining the area overlapped with the main preview image information in the supplementary preview image information into the main preview image information to form the camera preview interface.
Optionally, the step of merging the region of the supplemental preview image information that overlaps with the main preview image information into the main preview image information to form the camera preview interface includes:
acquiring coordinates of the target camera and relative position parameters of the other cameras and the target camera;
calculating the coordinate position of the pixel of the region overlapped with the main preview image information in each supplementary preview image information according to the coordinate of the target camera and the relative position parameter;
and converting each pixel into a pixel plane corresponding to the main preview image information according to the coordinate position to form the camera preview interface.
Optionally, the preset zoom value is a default zoom value of the terminal device, or the preset zoom value is a zoom value set by a user.
Optionally, while the step of saving the picture is executed, the method further executes:
storing the image information picked up by each camera, and associating the picture with each piece of image information.
Optionally, after the step of saving the image information picked up by each camera and associating the picture with each image information, the method further includes:
when the picture editing operation is detected, acquiring editing parameters corresponding to the editing operation;
acquiring target image information corresponding to the editing parameters from each image information associated with the picture;
and generating edited target picture preview data according to the target image information.
Optionally, the step of acquiring target image information corresponding to the editing parameter from each piece of image information associated with the picture includes:
determining a zoom value of the adjusted picture according to the editing parameters;
acquiring a focal segment in which the zoom value falls, and taking a camera matched with the focal segment as a target camera;
and taking the image information picked up by the target camera as the target image information.
Optionally, the step of generating edited target picture preview data according to the target image information includes:
and adjusting the target image information according to the zoom value, and generating edited target picture preview data based on the adjusted target image information.
Optionally, the step of generating edited target picture preview data according to the target image information includes:
and combining and generating edited target picture preview data according to the target image information and other image information associated with the picture.
Optionally, the editing operation comprises at least one of zooming in, zooming out, and cropping.
Optionally, when the editing operation is a zoom-in operation, the focal segment in which the zoom value of the adjusted picture falls is larger than the focal segment in which the current zoom value of the picture falls; and when the editing operation is a zoom-out operation, the focal segment in which the zoom value of the adjusted picture falls is smaller than the focal segment in which the current zoom value of the picture falls.
Optionally, when the editing operation is cropping, after the step of generating edited target picture preview data according to the target image information, the method further includes:
and after a cropping confirmation operation is detected, cropping the target picture according to the target picture preview data and the editing parameters.
The present application further provides a terminal device, including: a memory, a processor, and an image acquisition program stored on the memory and executable on the processor, where the image acquisition program, when executed by the processor, implements the steps of the image acquisition method described above.
Optionally, the processor includes at least two image processing modules, and each image processing module is connected to one camera.
Furthermore, the present application also provides a computer-readable storage medium having stored thereon an image acquisition program which, when executed by a processor, implements the steps of the image acquisition method as described above.
According to the image acquisition method, the terminal device and the computer-readable storage medium, when the terminal device detects a camera triggering operation, it starts each camera, controls each camera to pick up image information respectively, generates a camera preview interface according to the image information picked up by each camera, and, when a photographing triggering operation is detected, generates and saves a picture according to the camera preview interface. Because the picture is formed by combining image information of different focal segments acquired simultaneously by multiple cameras, the foreground and background clarity of the picture and the stereoscopic effect of the image in the picture are improved, and the imaging effect of the camera is good.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
The main solution of the embodiment of the application is as follows: when the triggering operation of the cameras is detected, starting the cameras, controlling the cameras to pick up image information respectively, and forming a camera preview interface; when the photographing triggering operation is detected, generating a picture according to the camera preview interface; and saving the picture.
As shown in fig. 1, fig. 1 is a schematic structural diagram of a terminal device in a hardware operating environment according to an embodiment of the present application.
The terminal device can be a PC, and also can be a terminal device with a shooting function, such as a smart phone and a tablet personal computer.
As shown in fig. 1, the terminal device may include: a processor 1001, such as a CPU, a network interface 1004, a user interface 1003, a memory 1005, and a communication bus 1002. The communication bus 1002 is used to enable connective communication between these components. The user interface 1003 may include a display screen (Display) and an input unit such as a keyboard (Keyboard), and the optional user interface 1003 may also include a standard wired interface and a wireless interface. The network interface 1004 may optionally include a standard wired interface and a wireless interface (e.g., a WI-FI interface). The memory 1005 may be a high-speed RAM memory or a non-volatile memory (e.g., a magnetic disk memory). The memory 1005 may alternatively be a storage device separate from the processor 1001.
Further, the terminal device includes at least two cameras, and the focal segment captured by each camera is different. For example, when the terminal device includes three cameras, they are respectively a telephoto camera, a wide-angle camera and an ultra-wide-angle camera, where the focal segment captured by the telephoto camera is 3X-30X, the focal segment captured by the wide-angle camera is 1X-3X, and the focal segment captured by the ultra-wide-angle camera is 0.6X-1X. Each camera is connected to the processor.
Optionally, the processor includes at least two image processing modules, and each image processing module is connected to one of the cameras. After a camera collects image information at its front end, it transmits the image information to the image processing module connected to it; the image processing module processes the image information collected by that camera to form image information within its focal segment and stores it in the memory. Because the image information collected by each camera is handled by a separate image processing module, the original image data collected by that camera can be saved in the memory. When at least two cameras collect images simultaneously, the image data collected by the at least two cameras can also be processed simultaneously and separately, and the data can then be merged based on the image data collected simultaneously by the at least two cameras, forming an image merged from data of different focal segments or different angles and improving the shooting effect, as sketched below.
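For illustration only (the application does not claim any particular software structure), here is a minimal Python sketch of the idea that each camera feeds its own image processing module, so that all cameras can be captured and stored in parallel; the class and method names, including cam.read(), are hypothetical:

```python
import threading

class ImageProcessingModule:
    """One module per camera: processes raw frames and stores the originals."""

    def __init__(self, camera_id, storage):
        self.camera_id = camera_id
        self.storage = storage  # hypothetical per-camera storage area

    def process(self, raw_frame):
        # Keep the original data so it can be merged or re-edited later;
        # a real module would also run the focal-segment-specific pipeline.
        self.storage.append(raw_frame)

def capture_all(cameras, modules):
    """Capture from every camera at the same time, one thread per camera."""
    def worker(cam, module):
        module.process(cam.read())  # cam.read() is an assumed capture call

    threads = [threading.Thread(target=worker, args=(c, m))
               for c, m in zip(cameras, modules)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
```

Because each camera's data lands in its own storage area, the originals remain available for the merging and re-editing steps described later.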
Those skilled in the art will appreciate that the terminal structure shown in fig. 1 is not intended to be limiting and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components.
As shown in fig. 1, a memory 1005, which is a kind of computer storage medium, may include therein an operating system, a network communication module, a user interface module, and an image acquisition program.
In the terminal shown in fig. 1, the network interface 1004 is mainly used for connecting to a backend server and performing data communication with the backend server; the user interface 1003 is mainly used for connecting a client (user side) and performing data communication with the client; and the processor 1001 may be configured to invoke an image acquisition program stored in the memory 1005 and perform the following operations:
when the triggering operation of the cameras is detected, starting the cameras, controlling the cameras to pick up image information respectively, and forming a camera preview interface;
when the photographing triggering operation is detected, generating a picture according to the camera preview interface;
and saving the picture.
Further, the processor 1001 may call the image acquisition program stored in the memory 1005, and also perform the following operations:
determining a target camera according to a preset zoom value;
and combining main preview image information and supplementary preview image information to form the camera preview interface, wherein the main preview image information is image information picked up by the target camera, and the supplementary preview image information is image information picked up by other cameras except the target camera.
Further, the processor 1001 may call the image acquisition program stored in the memory 1005, and also perform the following operations:
acquiring an area overlapped with the main preview image information in the supplementary preview image information;
and combining the area overlapped with the main preview image information in the supplementary preview image information into the main preview image information to form the camera preview interface.
Further, the processor 1001 may call the image acquisition program stored in the memory 1005, and also perform the following operations:
acquiring coordinates of the target camera and relative position parameters of the other cameras and the target camera;
calculating the coordinate position of the pixel of the region overlapped with the main preview image information in each supplementary preview image information according to the coordinate of the target camera and the relative position parameter;
and converting each pixel into a pixel plane corresponding to the main preview image information according to the coordinate position to form the camera preview interface.
Further, the processor 1001 may call the image acquisition program stored in the memory 1005, and also perform the following operations:
and storing the image information picked up by each camera, and associating the picture with each image information.
Further, the processor 1001 may call the image acquisition program stored in the memory 1005, and also perform the following operations:
when the picture editing operation is detected, acquiring editing parameters corresponding to the editing operation;
acquiring target image information corresponding to the editing parameters from each image information associated with the picture;
and generating edited target picture preview data according to the target image information.
Further, the processor 1001 may call the image acquisition program stored in the memory 1005, and also perform the following operations:
determining a zoom value of the adjusted picture according to the editing parameters;
acquiring a focal segment in which the zoom value falls, and taking a camera matched with the focal segment as a target camera;
and taking the image information picked up by the target camera as the target image information.
Further, the processor 1001 may call the image acquisition program stored in the memory 1005, and also perform the following operations:
and adjusting the target image information according to the zoom value, and generating edited target picture preview data based on the adjusted target image information.
Further, the processor 1001 may call the image acquisition program stored in the memory 1005, and also perform the following operations:
and combining and generating edited target picture preview data according to the target image information and other image information associated with the picture.
Further, the processor 1001 may call the image acquisition program stored in the memory 1005, and also perform the following operations:
and after a cropping confirmation operation is detected, cropping the target picture according to the target picture preview data and the editing parameters.
Referring to fig. 2, the present application provides a first embodiment of an image acquisition method, where the image acquisition method is applied to a terminal device with a camera, the terminal device includes at least two cameras, and focal segments captured by the cameras are different, and the image acquisition method includes:
step S10, when detecting the trigger operation of the camera, opening each camera, controlling each camera to pick up image information respectively, and forming a camera preview interface;
the terminal device in this embodiment may be a mobile phone, a tablet, a camera, or the like. The terminal equipment is provided with a camera application, and a user can trigger the camera application to carry out shooting work.
When a user triggers the camera application of the terminal device, the terminal device starts each camera and controls each camera to pick up image information respectively. Each camera transmits the acquired image information to a different processing module, and the acquired image information is stored in different storage areas after being processed by the processing modules. The image data acquired in response to the same trigger operation are associated with one another.
It should be noted that the terminal device in this embodiment includes, but is not limited to, a telephoto camera, a wide-angle camera and an ultra-wide-angle camera, where the focal segment captured by the telephoto camera is 3X to 30X, the focal segment captured by the wide-angle camera is 1X to 3X, and the focal segment captured by the ultra-wide-angle camera is 0.6X to 1X.
The terminal device is provided with a display interface; after the camera application is triggered, the image data collected by each camera is displayed on the display interface in a preview mode.
It can be understood that, because at least two cameras in this embodiment pick up image information simultaneously, and the picked-up image information is acquired at different focal segments and different coordinates, the camera preview interface may be formed by combining multiple sets of the image information.
Specifically, in an embodiment, referring to fig. 3, the camera preview interface is formed in a manner including, but not limited to, one of the following:
step S11, determining a target camera according to a preset zoom value;
step S12, combining main preview image information and supplemental preview image information to form the camera preview interface, where the main preview image information is image information picked up by the target camera, and the supplemental preview image information is image information picked up by other cameras except the target camera.
Namely, the main preview image information is determined according to the preset zoom value, and then the main preview image information is corrected by using the image information of other cameras, so that the display effect of the shot image is improved.
Specifically, the preset zoom value may be a default zoom value of the terminal device, or may be a zoom value set by the user. When the user triggers the camera application, the terminal device controls each camera to start, and each camera picks up image information and transmits it back to its processing module. At this time, if the terminal device does not detect a user-set zoom value, the default zoom value of the terminal device is adopted as the preset zoom value; if the terminal device detects that the user has set a zoom value for the current camera, the zoom value set by the user is adopted as the preset zoom value. It can be understood that a zoom control is arranged on the terminal device, and the user can set a zoom value by triggering the zoom control.
When the camera application of the terminal device is triggered, the camera preview interface defaults to the default zoom value set by the system; when it is detected that the user triggers the zoom control to adjust the zoom value of the preview interface, the default zoom value is adjusted to the set zoom value. During zooming, because this embodiment employs multiple cameras to pick up image information simultaneously, when the default zoom value is adjusted to the set zoom value, the terminal device directly takes the image information picked up by the camera corresponding to the set zoom value as the main preview image information, takes the image information picked up by the other cameras as the supplementary preview image information, and combines them for imaging. There is no need for the approach of the exemplary technique, in which, when the default zoom value is adjusted to the set zoom value, the camera corresponding to the default zoom value is closed and the camera corresponding to the set zoom value is opened before image information can be picked up. This embodiment thus saves the image-information pick-up time after the zoom value is adjusted, and can, to a certain extent, avoid missing a fleeting picture because of the zoom adjustment.
It should be noted that, in this embodiment, a target camera is determined according to the preset zoom value, where the target camera is the camera whose focal segment corresponds to the preset zoom value. For example, if the preset zoom value is 1.0X, the corresponding target camera is the wide-angle camera; if the preset zoom value is 0.6X, the corresponding target camera is the ultra-wide-angle camera.
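For illustration only, here is a minimal Python sketch of this selection logic, using the example focal segments above; the camera names and the convention that a shared boundary belongs to the higher segment are assumptions made to match the 1.0X and 0.6X examples:

```python
# Example focal segments from this embodiment (in zoom multiples).
FOCAL_SEGMENTS = {
    "ultra_wide": (0.6, 1.0),
    "wide":       (1.0, 3.0),
    "tele":       (3.0, 30.0),
}

def select_target_camera(zoom: float) -> str:
    """Return the camera whose focal segment contains the zoom value.

    A shared boundary belongs to the higher segment, so 1.0X selects the
    wide-angle camera and 0.6X the ultra-wide-angle camera, as above.
    """
    for name, (low, high) in FOCAL_SEGMENTS.items():
        if low <= zoom < high:
            return name
    if zoom == 30.0:  # top of the telephoto segment
        return "tele"
    raise ValueError(f"zoom value {zoom}X is outside the supported range")

def split_roles(zoom: float, streams: dict):
    """All cameras stay on; only their roles change with the zoom value."""
    target = select_target_camera(zoom)
    main = streams[target]
    supplements = {name: s for name, s in streams.items() if name != target}
    return main, supplements
```

Because every camera keeps running, a zoom change only re-assigns the main and supplementary roles; no camera has to be closed and reopened.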
In this embodiment, after a target camera is determined according to a preset zoom value, image information picked up by the target camera is used as main preview image information, image information picked up by other cameras except the target camera is used as supplementary preview image information, and then the main preview image information is corrected by using the supplementary preview image information, so as to finally form a preview interface.
Because a picture is formed by proportionally imaging the photographed object onto the photosensitive element of a camera, and each photosensitive element is composed of different planar pixels, the position coordinates of the cameras are offset relative to one another; when multiple cameras shoot the same photographed object, there are multiple different viewing angles. If data from multiple different viewing angles are merged into one picture, the stereoscopic effect of the photographed object is stronger, and the restoration of the visual impression is also stronger. At the same time, because of the differences in focal-segment imaging, the clarity of the foreground and background of the merged picture is also greatly improved.
Since the camera preview interface in this embodiment is formed by combining the image information picked up by multiple cameras, and multiple cameras participate in imaging, the camera preview interface in this embodiment has a good imaging effect.
Step S20, when the photographing triggering operation is detected, generating a picture according to the camera preview interface;
and step S30, saving the picture.
A photographing confirmation control is provided on the display interface of the terminal device. When the user triggers the photographing confirmation control, it is determined that a photographing triggering operation is detected, and the terminal device generates a picture according to the image information currently displayed on the camera preview interface, saves the picture, and finishes photographing.
In this embodiment, when the terminal device detects a camera triggering operation, it starts each camera, controls each camera to pick up image information respectively, and generates a camera preview interface according to the image information picked up by each camera; when it detects a photographing triggering operation, it generates a picture according to the camera preview interface and saves the picture. Because the picture is formed by combining image information of different focal segments acquired simultaneously by multiple cameras, the foreground and background clarity of the picture and the stereoscopic effect of the image in the picture are improved, and the imaging effect of the camera is good.
Further, referring to fig. 4, the present application provides a second embodiment of the image acquisition method; based on the first embodiment, the step of combining the main preview image information and the supplemental preview image information to form the camera preview interface includes:
step S121, acquiring an area overlapping with the main preview image information in the supplementary preview image information;
step S122, merging the region overlapping with the main preview image information in the supplemental preview image information into the main preview image information, and forming the camera preview interface.
The at least two cameras pick up image information of the same object from different angles, so there are necessarily overlapping and non-overlapping areas between the pieces of image information. By merging the overlapping areas, this embodiment increases the depth information at the edge positions of the formed picture, and the transparency of the picture content can be qualitatively improved. Moreover, based on multi-focal-segment fusion, the edges of the picture can be supplemented and corrected by the supplementary preview image information of other focal segments; compared with a picture from a single camera, the edges of the shot picture are not distorted.
In this embodiment, the supplementary preview data is converted, by coordinate conversion, onto the plane where the main preview data is located, so that all the supplementary preview data is calibrated on one plane; thus, without changing a pixel point, the image can be extended outward in four axial directions around that point.
The coordinate conversion takes the coordinates of the main preview image data as the central coordinates, and converts the supplementary preview information to the central coordinates based on the relative relation between the central coordinates and the coordinates of the supplementary preview image information, thereby completing the coordinate conversion.
Specifically, the step of combining the region overlapping with the main preview image information in the supplemental preview image information into the main preview image information to form the camera preview interface includes:
acquiring coordinates of the target camera and relative position parameters of the other cameras and the target camera;
calculating the coordinate position of the pixel of the region overlapped with the main preview image information in each supplementary preview image information according to the coordinate of the target camera and the relative position parameter;
and converting each pixel into a pixel plane corresponding to the main preview image information according to the coordinate position to form the camera preview interface.
It should be noted that the position of the target camera is taken as the central coordinate, and the relative position parameter refers to the position of each other camera's coordinates relative to the coordinates of the target camera. After the coordinates of the target camera and the relative position parameters are acquired, the coordinate position, at the target camera's coordinates, of the pixels in the area of the supplementary preview image information that overlaps with the main preview image information is calculated; specifically, the coordinates of each pixel at its own camera are converted to the coordinates of the target camera based on the relative position parameter, so that each pixel point is converted onto the pixel plane of the main preview image information to complete the merging with the main preview image information, and the merged image is then displayed in the camera preview interface.
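As a hedged sketch of the pixel conversion just described: assuming the relative position parameter can be reduced to a planar offset in pixel units (a deliberate simplification; a real implementation would also need per-camera calibration, rotation, and depth-dependent parallax handling), the overlap pixels are translated into the main preview pixel plane and blended:

```python
import numpy as np

def convert_to_main_plane(supp_coords, relative_offset):
    """Translate overlap-region pixel coordinates from a supplementary
    camera's pixel plane into the target (main) camera's pixel plane.

    supp_coords:     (N, 2) array of (x, y) pixel positions
    relative_offset: (dx, dy) position of the supplementary camera
                     relative to the target camera, already in pixel units
    """
    return supp_coords + np.asarray(relative_offset)

def merge_overlap(main_image, supp_pixels, main_coords):
    """Blend converted supplementary pixels into the main preview image."""
    h, w = main_image.shape[:2]
    for (x, y), px in zip(main_coords, supp_pixels):
        x, y = int(round(x)), int(round(y))
        if 0 <= x < w and 0 <= y < h:  # keep only pixels inside the main plane
            blended = (main_image[y, x].astype(np.int32) + px) // 2
            main_image[y, x] = blended.astype(main_image.dtype)
    return main_image
```

The simple average here stands in for whatever fusion the image processing modules actually perform; the point is only the translation of every overlap pixel onto the main preview pixel plane.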
In this embodiment, multi-path fusion imaging increases the edge depth information of a single photo, so that the transparency of the photo content can be qualitatively improved, and the problem of poor edge resolution, which produces edge noise in the picture, is mitigated; through multi-pixel stitching and calibration, the picture is indirectly given larger pixels, improving the light sensitivity and color restoration of the camera.
If, during shooting with the terminal device, the user needs to adjust the zoom value first and then take the picture, the zoom adjustment takes time, and the user may miss a fleeting picture because of it.
Specifically, as a third embodiment of the image acquisition method provided by the present application, referring to fig. 5 and based on the first and/or second embodiment, the image acquisition method, while performing the step of saving the picture, also performs:
step S40, saving the image information picked up by each camera and associating the picture with each image information.
That is, after the user triggers the photographing operation, the terminal device generates a picture according to the camera preview interface, saves the picture, saves the image information picked up by each camera, and associates the picture with the image information picked up by each camera. In this way, the picture is associated with the multi-path image information data; when the terminal device performs zoom processing on the picture, the multi-path image information can be called based on the association between the picture and the image information data, and the originally picked-up image information is used for data processing, so that when the picture is zoomed, its clarity always remains the same as that of the original picture.
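A non-authoritative sketch of this association (the field names are invented for illustration): the saved picture simply carries references to the original image information picked up by each camera at the same trigger, so the originals can be recalled during later zoom processing:

```python
from dataclasses import dataclass, field

@dataclass
class CapturedImage:
    camera: str             # e.g. "wide", "tele", "ultra_wide"
    focal_segment: tuple    # e.g. (1.0, 3.0)
    path: str               # where the original image information is stored

@dataclass
class SavedPicture:
    path: str                                    # the composed picture itself
    current_zoom: float = 1.0                    # zoom value at capture time
    sources: list = field(default_factory=list)  # associated per-camera originals

    def source_for_zoom(self, zoom: float):
        """Return the original whose focal segment contains the zoom value."""
        for src in self.sources:
            low, high = src.focal_segment
            if low <= zoom <= high:
                return src
        return None
```

Editing then becomes a lookup over the associated originals rather than a re-shoot or a pixel-destructive resample.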
Based on the picture being associated with the image information picked up by each camera, when the user edits the image, the image acquisition method of the embodiment of the present application may process the picture as follows:
referring to fig. 5 specifically, after the step of saving the image information picked up by each camera and associating the picture with each image information, the method further includes:
step S50, when the picture editing operation is detected, acquiring editing parameters corresponding to the editing operation;
step S60, acquiring target image information corresponding to the editing parameter from each image information associated with the picture;
step S70, generating edited target picture preview data according to the target image information.
A user can click the picture to edit it. When the terminal device detects an editing operation triggered by the user on the picture, it acquires the editing parameters corresponding to the editing operation, then acquires target image information corresponding to the editing parameters from the pieces of image information associated with the picture, and generates edited target picture preview data using the target image information.
The editing operation comprises at least one of zooming in, zooming out and cropping, and the editing parameters comprise one or more of a magnification factor, a reduction factor and a cropping size. When a user magnifies a picture for editing, target image information corresponding to the magnification factor is searched for among the pieces of image information associated with the picture, and the target image information is then used as target picture preview data for the user to preview the display effect of the magnified picture. When the user triggers a zoom-out or cropping operation, the terminal device processes the picture in the same manner as described above, which is not repeated here.
It can be understood that in this embodiment the focal segment of each camera is different, and the picked-up image information is also divided according to zoom values, so that the image information associated with the picture can be reasonably invoked to generate the target picture preview data and the edited picture display effect is optimal. In an embodiment, referring to fig. 6, the step of acquiring the target image information corresponding to the editing parameters from the pieces of image information associated with the picture includes:
step S61, determining the zoom value of the adjusted picture according to the editing parameters;
step S62, acquiring a focal segment in which the zoom value falls, and taking a camera matched with the focal segment as a target camera;
step S63, regarding the image information picked up by the target camera as the target image information.
That is, in this embodiment, when the editing parameters corresponding to the editing operation are acquired, the editing parameters are converted into a zoom value, the camera matched with the focal segment in which the zoom value falls is determined according to the zoom value, the image information picked up by that camera is taken as the target image information, and the target picture preview data is then generated according to the target image information.
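Continuing the earlier sketches (again with hypothetical names, and assuming the editing parameters carry a magnification or reduction factor), the editing flow reduces to: derive the adjusted zoom value, pick the camera whose focal segment contains it, and use that camera's original image information as the target image information:

```python
def build_preview(source, zoom):
    """Placeholder renderer: a real one would load the original image
    information from source.path and adjust it to the zoom value."""
    return (source.path, zoom)

def handle_edit(picture, editing_params):
    """Generate target picture preview data for an edit operation.

    `picture` is a SavedPicture from the earlier sketch; editing_params
    is assumed to carry a "factor" relative to the current zoom value.
    """
    adjusted_zoom = picture.current_zoom * editing_params["factor"]
    target = picture.source_for_zoom(adjusted_zoom)
    if target is None:
        raise ValueError("adjusted zoom value falls outside every focal segment")
    # The matching camera's original image information becomes the target
    # image information from which the preview data is generated.
    return build_preview(target, adjusted_zoom)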
Because the focal segments of the cameras differ, the image information picked up by each camera is different. The terminal device determines a target camera according to the zoom value of the picture adjusted by the editing parameters, and uses the target image information corresponding to the target camera to generate the target picture preview data. Since the target image information is original information collected by the camera and belongs to the same focal segment as the adjusted zoom value of the picture, its pixels do not need to be changed, so the image displayed by the adjusted target picture preview data is clear. This avoids the reduction in clarity caused, in the exemplary technique, by changing image pixels to achieve focusing; in other words, this embodiment improves the clarity of picture editing.
In addition, in this embodiment, the image information that best matches the editing parameters is determined through the zoom value, and the target picture preview data is then generated using that image information, thereby optimizing the display effect of the edited picture.
It should be noted that each focal segment has an upper limit value and a lower limit value, and the upper limit of one focal segment is the same as the lower limit of the adjacent one. If the zoom value lies between the upper and lower limit values but is equal to neither, the target picture preview data is generated in one of the following two ways:
firstly, generating edited target picture preview data based on the target image information;
that is, the edited target picture preview data is generated by directly adopting the target image information, so that the adjusted target picture preview data meets the focal length.
Secondly, the target image information is adjusted according to the zoom value, and edited target picture preview data is generated based on the adjusted target image information.
That is, in this embodiment, after the target image information is determined according to the focal segment in which the zoom value falls, the target image information is adjusted according to the zoom value so that the adjusted target image information matches the zoom value, and the edited target picture preview data is generated from the adjusted target image information, so that the target picture preview data meets the requirement of the zoom value.
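A minimal sketch of this second approach, assuming a Pillow-style image object and assuming the camera natively picks up the image at the lower bound of its focal segment: the target image information is center-cropped by the zoom ratio and resized back, so the adjusted information matches the zoom value:

```python
from PIL import Image

def adjust_to_zoom(target_image: Image.Image, segment_low: float,
                   desired_zoom: float) -> Image.Image:
    """Digitally adjust the target image to a zoom value strictly inside
    its focal segment by center-cropping and resizing back.

    segment_low is the lower bound of the focal segment (the zoom at
    which the camera is assumed to have picked the image up).
    """
    ratio = desired_zoom / segment_low            # >= 1 inside the segment
    w, h = target_image.size
    cw, ch = max(1, int(w / ratio)), max(1, int(h / ratio))
    left, top = (w - cw) // 2, (h - ch) // 2
    cropped = target_image.crop((left, top, left + cw, top + ch))
    return cropped.resize((w, h))                 # back to the original size
```

Because the crop starts from the original, full-resolution image information, the loss of clarity is far smaller than repeatedly resampling an already-composed picture.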
When the editing operation is enlargement or reduction, the terminal device generates enlarged or reduced picture preview data and displays the enlarged or reduced picture. At this time, the user may choose to capture the enlarged or reduced picture and store it in the memory, or may choose to quit picture editing so that the picture is restored to its state before editing.
When the editing operation is cropping, the terminal device generates target picture preview data in the cropping area. The user may then confirm the crop; after the terminal device detects the cropping confirmation operation, it crops the target picture according to the target picture preview data and the editing parameters. That is, the terminal device determines the cropping size according to the editing parameters to form a cropping area, and then forms the data in the cropping area into the target picture.
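And, continuing the Pillow-based sketch above, the cropping branch under the same assumptions (the editing parameters are assumed to carry the crop box):

```python
def confirm_crop(preview_image, editing_params, out_path):
    """After the cropping confirmation operation: form the cropping area
    from the editing parameters and save the data inside it as the target
    picture. editing_params["box"] is assumed to be (left, top, right,
    bottom) in preview-image coordinates.
    """
    target = preview_image.crop(editing_params["box"])
    target.save(out_path)
    return target
```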
Further, the present application provides a fourth embodiment of the image acquisition method based on the third embodiment; referring to fig. 7, the step of generating the edited target picture preview data according to the target image information includes:
and step S71, combining the target image information and other image information related to the picture to generate edited target picture preview data.
In this embodiment, when a user edits a generated picture, after the target image information is determined, the edited target picture preview data should present a foreground and background of high clarity and a stereoscopic image, so that the picture effect presented by the edited target picture preview data is consistent with the picture effect obtained during shooting. Therefore, when the edited target picture preview data is generated, the target image information is taken as the main preview image information, the other image information associated with the picture is taken as the supplementary preview image information, and the two are combined to form the target picture preview data.
It should be noted that, when the terminal device edits the picture, it edits from all the image information associated with the picture, and the target picture preview data may be generated after merging based on all the image information. The specific merging method is the same as the merging method for the camera preview interface during photographing by the terminal device; reference may be made to the second embodiment, and details are not repeated here.
Furthermore, an embodiment of the present application also provides a computer-readable storage medium, on which an image acquisition program is stored, and the image acquisition program, when executed by a processor, implements the steps of the image acquisition method as described above.
The present application further provides a terminal device, the terminal device includes: a memory, a processor and a computer program stored on the memory and executable on the processor, which computer program, when executed by the processor, carries out the steps of the method as described above.
Embodiments of the present application also provide a computer program product, which includes computer program code, when the computer program code runs on a computer, the computer is caused to execute the method as described in the above various possible embodiments.
An embodiment of the present application further provides a chip, which includes a memory and a processor, where the memory is used to store a computer program, and the processor is used to call and run the computer program from the memory, so that a device in which the chip is installed executes the method described in the above various possible embodiments.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a(n) ..." does not exclude the presence of additional like elements in the process, method, article, or apparatus that comprises the element. Further, similarly-named elements, features, or items in different embodiments of the disclosure may have the same meaning or may have different meanings; the particular meaning should be determined by its interpretation in the specific embodiment or by further context within that embodiment.
It should be understood that although the terms first, second, third, etc. may be used herein to describe various information, such information should not be limited by these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope herein. The word "if" as used herein may be interpreted as "at the time of," "when," or "in response to a determination," depending on the context. Also, as used herein, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context indicates otherwise. It will be further understood that the terms "comprises," "comprising," "includes," and/or "including," when used in this specification, specify the presence of stated features, steps, operations, elements, components, items, species, and/or groups, but do not preclude the presence or addition of one or more other features, steps, operations, elements, components, species, and/or groups thereof. The terms "or" and "and/or" as used herein are to be construed as inclusive, meaning any one or any combination. Thus, "A, B or C" or "A, B and/or C" means "any of the following: A; B; C; A and B; A and C; B and C; A, B and C." An exception to this definition occurs only when a combination of elements, functions, steps, or operations is inherently mutually exclusive in some way.
It should be understood that, although the steps in the flowcharts in the embodiments of the present application are shown in the order indicated by the arrows, the steps are not necessarily performed in that order. Unless explicitly stated herein, the steps need not be performed in the exact order shown and may be performed in other orders. Moreover, at least some of the steps in the figures may include multiple sub-steps or stages, which are not necessarily performed at the same time but may be performed at different moments, and which need not be performed sequentially; they may be performed in turn or alternately with other steps or with at least a portion of the sub-steps or stages of other steps.
The above-mentioned serial numbers of the embodiments of the present application are merely for description and do not represent the merits of the embodiments.
The above description is only a preferred embodiment of the present application, and not intended to limit the scope of the present application, and all modifications of equivalent structures and equivalent processes, which are made by the contents of the specification and the drawings of the present application, or which are directly or indirectly applied to other related technical fields, are included in the scope of the present application.