Detailed Description
For a more complete understanding of the nature and the technical content of the embodiments of the present application, reference should be made to the following detailed description of embodiments of the application, taken in conjunction with the accompanying drawings, which are meant to be illustrative only and not limiting of the embodiments of the application.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used herein is for the purpose of describing embodiments of the application only and is not intended to be limiting of the application.
It should also be noted that the terms "first/second/third" in relation to the embodiments of the present application are merely used to distinguish similar objects and do not represent a specific ordering of the objects. It should be understood that "first/second/third" may be interchanged in a specific order or sequence where permitted, so that the embodiments of the present application described herein can be implemented in orders other than those illustrated or described herein.
It should also be noted that in the following description, reference is made to "some embodiments" which describe a subset of all possible embodiments, but it should be understood that "some embodiments" may be the same subset or different subsets of all possible embodiments and may be combined with each other without conflict.
In order to facilitate understanding, the related art of the embodiments of the present application is briefly described below. The related art may optionally be combined with the technical solutions of the embodiments of the present application, and such combinations fall within the protection scope of the embodiments of the present application.
Fig. 1A is a schematic diagram of a common camera arrangement. As shown in fig. 1A, a first camera 11, a second camera 12, and a third camera 13 are arranged in the upper left corner of an electronic device 10.
For example, fig. 1B is a schematic view of the angles of view of common cameras. As shown in fig. 1B, the first camera corresponds to a first angle of view 14, the second camera corresponds to a second angle of view 15, and the third camera corresponds to a third angle of view 16. As can be seen from fig. 1B, the Field of View (FOV) of each camera is different.
In the related art, because the chip of a mobile device has performance limitations, not all cameras can be turned on at the same time. Therefore, when switching among cameras of multiple focal segments, image stalling is frequently encountered because the next camera is not brought up in time. Moreover, because there are physical distances between the placement positions of the cameras, the FOV of each camera differs, and at the moment of switching, the preview picture exhibits an abrupt FOV change. Both of these factors affect the smoothness of the multi-camera continuous zoom process.
Based on the above, embodiments of the present application provide a camera zooming method and apparatus, an electronic device, and a storage medium, in which a second camera to be turned on can be determined in response to a zooming operation on a first camera; when the zooming operation meets a camera switching condition, the second camera is turned on, a transition animation is generated based on a first image of the first camera and a second image of the second camera, and the first camera is switched to the second camera based on the transition animation until the second image is displayed. In this way, during camera switching the transition animation is generated from the image of the currently enabled first camera and the image of the second camera to be switched to, which alleviates problems such as unsmooth zooming caused by the chip's limit on the number of simultaneously enabled cameras and by the angle-of-view differences arising from the cameras' physical placement, ensures the preview effect, and improves the smoothness of continuous switching among cameras of different focal segments.
Embodiments of the present application will be described in detail below with reference to the accompanying drawings.
In an embodiment of the present application, fig. 2 is a schematic diagram illustrating an implementation flow of a camera zooming method according to an embodiment of the present application. As shown in fig. 2, the camera zooming method may include the steps of:
Step 101: in response to a zooming operation on a first camera, determine a second camera to be turned on.
In the embodiment of the application, the camera zooming method is applied to an electronic device, and the electronic device may include a plurality of cameras, such as the first camera, the second camera, and the like.
Here, the first camera is a camera currently being used by the electronic apparatus. The first camera may be any one of the cameras in the electronic device. In addition, the second camera is a camera to be switched. The second camera may be any one of the cameras in the electronic device other than the first camera.
It should be noted that, in the embodiment of the present application, a camera has basic functions such as image acquisition and capture. After acquiring an image, the photosensitive component circuit and the control component in the camera process the image and convert it into a digital signal that the device can recognize, and the image is then displayed on the screen of the device.
It should be further noted that, in the embodiment of the present application, the cameras (including the first camera, the second camera, and other cameras mentioned later) each have a corresponding FOV and zoom magnification interval. The FOV refers to the range of the scene that a camera can capture, typically expressed as an angle. The size of the FOV determines the field of view of the optical instrument: the larger the FOV, the larger the field of view. The FOV may be of any suitable size, e.g., 120°, 150°, etc. The zoom magnification interval may likewise be of any suitable size, for example, [0.6,1], [1,3], or the like.
In some embodiments, the zoom magnification interval corresponding to a camera represents the interval in which the camera's zoom effect is better; that is, the actual zoom range of the camera is larger than its corresponding zoom magnification interval. For example, the actual zoom range of a camera may be [0.4,1.2] while its corresponding zoom magnification interval is set to [0.6,1]; this is not limited in any way.
In addition, the zooming operation refers to an operation that adjusts the focal length of the camera lens; it can change the angle of view and the apparent distance of the photographed picture. In some embodiments, the zooming operation is implemented by any suitable operation, such as a sliding operation, a clicking operation, or the like.
In some embodiments, the zooming operation may be triggered according to a zooming instruction, and the device may perform the zooming operation on the camera according to the received zooming instruction.
In some embodiments, there is a correspondence between the zoom operation and the camera. A first correspondence between the zoom operation and the camera may be established. Therefore, after the zooming operation of the first camera is received, the second camera can be determined and started according to the first corresponding relation.
In some embodiments, the zoom operation has a corresponding target magnification, and a second correspondence between the zoom operation and the target magnification may be established, so that after the zoom operation of the first camera is received, the target magnification corresponding to the zoom operation may be determined according to the second correspondence.
In some embodiments, the zoom operation has a corresponding zoom mode, and the corresponding zoom mode of the zoom operation may be determined to determine the second camera to be turned on based on the zoom mode. The zooming mode comprises, but is not limited to, one of a sliding zooming mode and a clicking zooming mode.
Step 102: when the zooming operation meets a camera switching condition, turn on the second camera, and generate a transition animation based on a first image of the first camera and a second image of the second camera.
Here, the camera switching condition is a condition for determining whether to switch the camera. The camera switching condition may include, but is not limited to, one of the duration of the zooming operation exceeding a preset duration, the zoom magnification reaching a switching threshold, and the like. The preset duration may be of any suitable size, for example, 0.2 s (seconds), 0.5 s, etc. The switching threshold may be of any suitable size, e.g., 1, 0.5, etc.
In some embodiments, since the zoom magnification of the camera may be in a continuously changing state, the current zoom magnification may be determined at different times while the zooming operation is performed. When the current zoom magnification reaches the switching threshold, it is determined that the zooming operation satisfies the camera switching condition; when it does not reach the switching threshold, it is determined that the zooming operation does not satisfy the camera switching condition. The current zoom magnification may be of any suitable size, e.g., 1, 4, etc.
In some embodiments, different zoom magnification intervals correspond to different switching thresholds, and a third correspondence between the zoom magnification intervals and the switching thresholds may be established, so that the switching threshold corresponding to the first camera may be determined according to the third correspondence based on the zoom magnification interval corresponding to the first camera.
In some embodiments, the switching threshold may be the median value of the zoom magnification interval, that is, the middle value of the zoom magnifications within the interval. The median value may be of any suitable size, for example, 0.6, 1, or the like. Illustratively, for a zoom magnification interval of [0.5,1.5], the corresponding median value may be 1.
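As an illustration of the above, the following is a minimal Python sketch of the switching-condition check. It assumes the switching threshold is the median of the zoom magnification interval and that the comparison direction depends on whether the user is zooming in or out; the function names are illustrative only, not part of the embodiment.

```python
def switching_threshold(zoom_interval):
    # Assumed policy from the text: the threshold is the median (midpoint)
    # of the camera's zoom magnification interval.
    low, high = zoom_interval
    return (low + high) / 2.0

def meets_switching_condition(current_magnification, zoom_interval, zoom_in=True):
    # The current zoom magnification changes continuously during the zoom
    # operation; the condition is met once it reaches the threshold.
    threshold = switching_threshold(zoom_interval)
    if zoom_in:
        return current_magnification >= threshold
    return current_magnification <= threshold

# Example from the text: for the interval [0.5, 1.5] the threshold is 1.
assert switching_threshold((0.5, 1.5)) == 1.0
```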
The first image refers to an image acquired by the first camera. The second image is an image acquired by the second camera. The transition animation is an animation played by the device in a time period when the first camera is switched to the second camera.
In some embodiments, at least one frame of transition image is included in the transition animation. This is because switching the first camera to the second camera is done during a time period in which the first camera will capture at least one first image and the second camera will capture at least one second image, and the transitional animation is generated based on the first image and the second image, and thus the transitional animation includes at least one transitional image.
In some embodiments, when the transition animation is generated based on the first image and the second image, a first image and a second image acquired at the same acquisition time are needed; thus the first image and the second image at each acquisition time are obtained, and the transition animation is generated based on the image pairs at one or more acquisition times.
In some embodiments, when the zoom operation does not meet the camera switching condition, the first image may be cropped to obtain a cropped image, so that the cropped image is enlarged to obtain a preview image, and the preview image is displayed.
The method of generating the transition animation may include, but is not limited to, generating the transition animation from the first image and the second image by calling a preset function, generating it with video editing software, and the like. The preset function may include, but is not limited to, at least one of setInterval, transition, keyframes, or the like. The video editing software may include, but is not limited to, at least one of Adobe Premiere Pro, Final Cut Pro, After Effects, and the like.
In some embodiments, the transition animation may be generated by superimposing the first image and the second image. The superposition method may include, but is not limited to, superimposing the two images by multiplicative blending, superimposing them by channel blending, and the like. Multiplicative blending achieves image superposition by multiplying the pixel values of the two images. Channel blending mixes the RGB channels of the two images separately to achieve image superposition.
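A minimal NumPy sketch of the two superposition methods just mentioned; it assumes both frames are same-sized 8-bit RGB images, and the per-channel weights are illustrative values, not prescribed by the embodiment.

```python
import numpy as np

def multiply_blend(first_image, second_image):
    # Multiplicative blending: multiply the normalized pixel values
    # of the two frames to superimpose them.
    a = first_image.astype(np.float32) / 255.0
    b = second_image.astype(np.float32) / 255.0
    return np.uint8(a * b * 255.0)

def channel_blend(first_image, second_image, weights=(0.5, 0.5, 0.5)):
    # Channel blending: mix each RGB channel of the two frames
    # separately with its own weight.
    out = np.empty_like(first_image)
    for c, w in enumerate(weights):
        mixed = w * first_image[..., c] + (1.0 - w) * second_image[..., c]
        out[..., c] = mixed.astype(first_image.dtype)
    return out
```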
In some embodiments, at least one first feature point of the first image and at least one second feature point of the second image may be determined, then feature point matching is performed on the at least one first feature point and the at least one second feature point to determine at least one feature point pair, thereby aligning the first image and the second image based on the at least one feature point pair, resulting in a transition image, and finally a transition animation is generated based on the transition image.
Step 103: switch the first camera to the second camera based on the transition animation until the second image is displayed.
Here, switching the first camera to the second camera based on the transition animation means that the picture displayed by the device during the switching process is the transition animation, and the picture displayed after switching to the second camera is the second image.
In some embodiments, after switching to the second camera, the first camera may be switched to the closed state, and only the second camera is kept in the open state, so that power consumption of the device may be reduced.
The embodiment of the application provides a camera zooming method: a second camera to be turned on is determined in response to a zooming operation on a first camera; when the zooming operation meets the camera switching condition, the second camera is turned on, a transition animation is generated based on a first image of the first camera and a second image of the second camera, and the first camera is switched to the second camera based on the transition animation until the second image is displayed. In this way, during camera switching the transition animation is generated from the image of the currently enabled first camera and the image of the second camera to be switched to, which alleviates problems such as unsmooth zooming caused by the chip's limit on the number of simultaneously enabled cameras and by angle-of-view differences arising from the cameras' physical placement, ensures the preview effect, and improves the smoothness of continuous switching among cameras of different focal segments.
In another embodiment of the present application, fig. 3 is a schematic diagram showing an implementation flow of a camera zooming method according to an embodiment of the present application. As shown in fig. 3, when the transition animation is generated based on the first image of the first camera and the second image of the second camera, the camera zooming method may include the following steps:
Step 121: determine at least one first feature point of the first image and at least one second feature point of the second image.
Here, the first feature point refers to a feature point in the first image. The number of first feature points is at least one. The second feature points refer to feature points in the second image. The number of the second feature points is at least one. In some embodiments, the number of first feature points may be the same as or different from the number of second feature points.
The method of determining the feature points of the image (including the first image, the second image) may include, but is not limited to, determining the feature points of the image through a feature point detection network, determining the feature points of the image through edge detection, and the like. For example, the feature point detection network is a deep learning network for feature point detection, and the feature point detection network directly outputs feature points in an image by learning local features of the image. For another example, edge detection may calculate gradients of the image in the horizontal and vertical directions, and then determine feature points by calculating gradient magnitudes and directions. Edge detection may include, but is not limited to, canny edge detection, sobel edge detection, and the like.
In some embodiments, the at least one first feature point may be determined by feature extraction on the first image, and the at least one second feature point by feature extraction on the second image. Methods of feature extraction may include, but are not limited to, extraction through histograms of oriented gradients (Histogram of Oriented Gradients, HOG), extraction through local binary patterns (Local Binary Pattern, LBP), and the like. For example, with HOG, the image is divided into small connected regions, gradient or edge direction histograms of the pixels in each connected region are acquired, and these histograms are finally combined to obtain at least one feature point.
In some embodiments, the feature points may be identified through key point identification on the image. Key point identification methods include, but are not limited to, Scale-Invariant Feature Transform (SIFT), Speeded-Up Robust Features (SURF), Oriented FAST and Rotated BRIEF (ORB), and the like. SIFT detects feature points by constructing a Gaussian difference pyramid. SURF detects feature points by constructing a scale space of the image. ORB improves FAST to remove edge responses and uses Harris corner detection to select the feature points of the image.
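For illustration, a short OpenCV sketch of ORB-based feature point detection on a first image and a second image; the file names are hypothetical and the feature count is an arbitrary choice.

```python
import cv2

# Hypothetical frames captured at the same acquisition time by the two cameras.
first_image = cv2.imread("first_camera_frame.jpg", cv2.IMREAD_GRAYSCALE)
second_image = cv2.imread("second_camera_frame.jpg", cv2.IMREAD_GRAYSCALE)

# ORB: improved FAST detection with a Harris corner response, as noted above.
orb = cv2.ORB_create(nfeatures=500)
first_keypoints, first_descriptors = orb.detectAndCompute(first_image, None)
second_keypoints, second_descriptors = orb.detectAndCompute(second_image, None)
```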
Step 122: perform feature point matching on the at least one first feature point and the at least one second feature point, and determine at least one feature point pair.
Here, the pair of feature points includes feature points in which there is a matching relationship in the first image and the second image. Since the number of the first feature points and the second feature points is at least one, the number of the feature point pairs is at least one.
In some embodiments, each feature point pair may include a first feature point and a second feature point.
Feature point matching is an important task in computer vision for determining the correspondence between two images. Methods of matching the first feature points with the second feature points may include, but are not limited to, matching based on Euclidean distance, matching based on a deep feature matching network, and the like. For example, the Euclidean distance is the true distance between two feature points: the smaller the Euclidean distance, the more similar the two feature points. Thus, for a given first feature point, the second feature point with the minimum Euclidean distance to it is taken as its matching feature point, yielding a feature point pair. For another example, a deep feature matching network can learn deep feature representations of the image's feature points, and the matching relationship between feature points can be obtained directly from the network output.
In some embodiments, the feature points have color information and texture information, and when the feature point matching is performed, the matching can be performed by combining the color information and the texture information of the feature points, so that the accuracy of the feature point matching can be improved.
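As a concrete illustration of the distance-based matching described above, the following continues the ORB sketch: for each first feature point, the second feature point with the smallest descriptor distance is taken as its match (Hamming distance for binary ORB descriptors; NORM_L2, i.e. Euclidean distance, would be used for float descriptors such as SIFT). The cutoff of 100 pairs is arbitrary.

```python
# Brute-force matching with cross-checking; each cv2.DMatch is one feature point pair.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(first_descriptors, second_descriptors),
                 key=lambda m: m.distance)
feature_point_pairs = matches[:100]  # keep the most similar pairs
```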
Step 123: align the first image and the second image based on the at least one feature point pair to obtain a transition image.
Here, the transition image refers to an image obtained by aligning the first image and the second image. In some embodiments, the number of transition images may be at least one during switching of the first camera to the second camera.
In some embodiments, since a frame of first image and a frame of second image may be obtained at each acquisition time, at least one feature point pair of the first image and the second image needs to be determined at each acquisition time, and the first image and the second image are aligned based on the at least one feature point pair, so as to obtain transition images at a plurality of acquisition times.
Methods of aligning the first image and the second image may include, but are not limited to, aligning the first image and the second image by a geometric transformation model, aligning the first image and the second image by a convolutional neural network, and the like. For example, the geometric transformation model may include, but is not limited to, at least one of a rigid transformation model, an affine transformation, a perspective transformation model, and the like. The geometric transformation model calculates pixel values of the transition image based on the feature point pairs, so that the first image and the second image are aligned to obtain the transition image. For another example, at least one pair of feature points may be input into a trained convolutional neural network, the result of which is a transition image.
In some embodiments, the geometric transformation model has corresponding transformation parameters. The matched feature point pairs can be utilized to estimate the transformation parameters of the geometric transformation model through a least square method, RANSAC (Random Sample Consensus) algorithm and the like, and then the geometric transformation model is adjusted according to the calculated transformation parameters so as to align the first image and the second image through the adjusted geometric transformation model. For example, a system of linear equations is constructed by using the least squares method, and transformation parameters are obtained by solving the system of equations. For another example, the geometric transformation model is estimated by randomly selecting partial matching point pairs by the RANSAC algorithm, then calculating the errors between all matching point pairs and the model, and selecting the model with the smallest error as the final geometric transformation model.
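Continuing the ORB sketch above, a brief illustration of this estimation step using OpenCV's RANSAC-based homography (perspective transformation) fitting, followed by warping the first image into the second image's coordinates; the reprojection threshold is an arbitrary choice.

```python
import numpy as np

src_pts = np.float32([first_keypoints[m.queryIdx].pt
                      for m in feature_point_pairs]).reshape(-1, 1, 2)
dst_pts = np.float32([second_keypoints[m.trainIdx].pt
                      for m in feature_point_pairs]).reshape(-1, 1, 2)

# RANSAC samples subsets of point pairs and keeps the model with the smallest error.
homography, inlier_mask = cv2.findHomography(src_pts, dst_pts, cv2.RANSAC,
                                             ransacReprojThreshold=3.0)
height, width = second_image.shape[:2]
aligned_first = cv2.warpPerspective(first_image, homography, (width, height))
```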
In some embodiments, the first image and the second image may be respectively divided based on at least one feature point pair, at least one first division block corresponding to the first image and at least one second division block corresponding to the second image are determined, and then the first division block and the second division block are locally aligned based on a matching relationship between the feature point in the first division block and the feature point in the second division block, so as to obtain at least one registration block, so that boundary fusion is performed on the at least one registration block, and a transition image is obtained.
Step 124: generate the transition animation based on the transition image.
Here, the number of transition images is at least one, and at least one transition image may be directly used as the transition animation, so that the transition animation may be generated based on the transition image.
In some embodiments, the transition images are generated from the first and second images at different acquisition times, so different transition images are generated at different moments. The transition images may be ordered from earliest to latest according to their generation times, and the ordered sequence of transition images is then used as the transition animation.
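This ordering step can be sketched minimally, assuming each transition image carries its generation time as a (time, image) pair; the function name is illustrative.

```python
def build_transition_animation(timestamped_frames):
    # Order the transition images from earliest to latest generation time;
    # the ordered sequence is used directly as the transition animation.
    return [frame for _, frame in sorted(timestamped_frames, key=lambda pair: pair[0])]
```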
In the embodiment of the application, the first image and the second image are aligned through the feature point pairs determined from the first feature points of the first image and the second feature points of the second image, and the transition animation is generated from the result. The resulting transition animation can include the features of both images, which improves the accuracy of the transition animation and makes switching between cameras based on it smoother.
In another embodiment of the present application, fig. 4 is a schematic diagram of an implementation flow of a camera zooming method according to an embodiment of the present application. As shown in fig. 4, when the first image and the second image are aligned based on the at least one feature point pair to obtain the transition image, the camera zooming method may include the following steps:
Step 1231: divide the first image and the second image based on the at least one feature point pair, and determine at least one first division block corresponding to the first image and at least one second division block corresponding to the second image.
Here, a first division block is a partial image of the first image, and the number of first division blocks may be at least one. A second division block is a partial image of the second image, and the number of second division blocks may be at least one. In some embodiments, the number of first division blocks may be the same as the number of second division blocks.
Methods of dividing the images (including the first image and the second image) may include, but are not limited to, division based on straight lines through feature point pairs, division based on region growing from feature point pairs, and the like. For example, a matched feature point pair may be selected, a straight line determined using its two feature points as endpoints, and that line used as a dividing line of the image; dividing lines can thus be determined from at least one feature point pair to divide the image into division blocks. For another example, one feature point of a matched pair may be used as a seed point, from which adjacent pixels are gradually absorbed into the same region according to a set growth criterion until growth can no longer continue, thereby dividing the image into division blocks.
In some embodiments, when the image is divided into the division blocks based on at least one feature point pair, the division blocks may be of a fixed size, or may be adaptively divided according to the feature points of the image. The purpose of this is to be able to handle the parallax problem for each local area more effectively.
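The fixed-size variant can be sketched as follows, grouping the matched feature points by the grid block they fall into; the grid size is arbitrary, and an adaptive variant could size blocks by local feature density instead. The helper name is illustrative.

```python
def divide_into_blocks(image, feature_point_pairs, keypoints, grid=(4, 4)):
    # Split the image into a fixed grid and group the matched feature
    # points (cv2.DMatch objects) by the block containing them.
    h, w = image.shape[:2]
    block_h, block_w = h // grid[0], w // grid[1]
    blocks = {}
    for m in feature_point_pairs:
        x, y = keypoints[m.queryIdx].pt
        row = min(int(y // block_h), grid[0] - 1)
        col = min(int(x // block_w), grid[1] - 1)
        blocks.setdefault((row, col), []).append(m)
    return blocks
```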
Step 1232: locally align the first division blocks and the second division blocks based on the matching relationship between the feature points in the first division blocks and the feature points in the second division blocks, to obtain at least one registration block.
Here, since the alignment result obtained by directly aligning the first image and the second image as a whole is only a rough registration, regions of large parallax may still be misaligned in the result. Parallax refers to the difference in position of the same scene in the two images caused by differences in shooting angle or in the distance between the cameras. Therefore, a local alignment method is needed.
A registration block is a partial image obtained by locally aligning a first division block with a second division block. The number of registration blocks may be at least one. In some embodiments, the number of registration blocks, the number of first division blocks, and the number of second division blocks may be the same.
Methods of locally aligning the first division block with the second division block may include, but are not limited to, local alignment through a geometric transformation model, local alignment through a convolutional neural network, and the like. The geometric transformation model may include, but is not limited to, at least one of a rigid transformation model, an affine transformation model, a perspective transformation model, and the like.
In some embodiments, when the first division block and the second division block are locally aligned, a loss function may be set for the region where they are located. The loss function measures how good the current registration is; when the loss value of a registration block satisfies the loss condition, the registration block is output, yielding at least one registration block. For example, the loss function may be defined using a squared error based on feature point deviation, a photometric consistency error, or the like. Meanwhile, the transformation parameters of each local region are optimized by minimizing the loss function. This is typically an iterative process that can be solved with optimization algorithms such as gradient descent or Newton's method.
Step 1233: perform boundary fusion on the at least one registration block to obtain the transition image.
Here, since the registration blocks are part of the transition image, it is necessary to perform boundary fusion on at least one registration block to obtain the transition image.
Boundary fusion is the process of fusing several smaller images and stitching them into a larger image. Methods of boundary-fusing the registration blocks may include, but are not limited to, boundary fusion using Python and NumPy, boundary fusion using OpenCV, and the like. For example, when Python and NumPy are used, the image size of the transition image is first determined; a new blank canvas of the same size is then created; finally, each registration block is traversed in a loop and placed at its corresponding position in the canvas according to its position in the transition image, thereby fusing the registration blocks into the transition image. For another example, when OpenCV is used, a blending function such as cv2.seamlessClone() may be called on the registration blocks to obtain the transition image.
In some embodiments, when the at least one registration block is boundary-fused, techniques such as smoothing and transition-region optimization can be adopted to ensure that the alignment results of the local regions are consistent, reducing artifacts at region boundaries and fusing the registration blocks more accurately.
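A minimal NumPy sketch of the canvas-based fusion described above, assuming color registration blocks with known top-left positions in the transition image; overlapping borders are averaged as a simple form of the smoothing mentioned. The function name and data layout are illustrative assumptions.

```python
import numpy as np

def fuse_registration_blocks(blocks, positions, canvas_shape):
    # canvas_shape is assumed to be (height, width, 3), the size of the
    # transition image; positions holds each block's (y, x) placement.
    canvas = np.zeros(canvas_shape, dtype=np.float32)
    weight = np.zeros(canvas_shape[:2], dtype=np.float32)
    for block, (y, x) in zip(blocks, positions):
        h, w = block.shape[:2]
        canvas[y:y + h, x:x + w] += block.astype(np.float32)
        weight[y:y + h, x:x + w] += 1.0
    weight = np.maximum(weight, 1.0)  # avoid division by zero in empty areas
    return (canvas / weight[..., None]).astype(np.uint8)
```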
In the embodiment of the application, the first image and the second image are respectively divided using the feature point pairs, the resulting first and second division blocks are locally aligned to obtain registration blocks, and the registration blocks are boundary-fused to obtain the transition image. This reduces the possibility of large-parallax misalignment that can occur when only global alignment is used; meanwhile, boundary fusion of the registration blocks reduces artifacts at block boundaries and improves the accuracy of the transition image.
In one possible implementation, determining the second camera to be turned on may further include: determining a zooming mode corresponding to the zooming operation, and determining the second camera to be turned on based on the zooming mode, wherein the zooming mode includes a sliding zoom mode or a click zoom mode.
Here, since the trigger actions of the zoom operations are different, the zoom modes corresponding to the different zoom operations may be different. In some embodiments, the zoom mode may include a slide zoom mode or a click zoom mode. The sliding zoom mode is a mode in which the zoom magnification is changed by sliding. The click zoom mode is a mode in which the zoom magnification is changed by clicking.
In some embodiments, a zoom mode corresponding to the zoom operation may be determined according to the detected interactive operation. The interaction may be any suitable operation, e.g., sliding, clicking, double-clicking, etc. For example, in a case where the interactive operation is a slide, the zoom mode corresponding to the zoom operation may be a slide zoom mode.
In some embodiments, there is a correspondence between the zoom operation and the zoom mode. A fourth correspondence between the zoom operation and the zoom mode may be established, so that the zoom mode corresponding to the zoom operation may be determined according to the fourth correspondence based on the acquired zoom operation.
In some embodiments, the second camera may be any one of the cameras in the device. In some embodiments, the second cameras corresponding to different zooming modes may be the same or different.
In some embodiments, there is a correspondence between the zoom mode and the camera. A fifth correspondence between the zoom mode and the camera may be established, so that the second camera to be turned on may be determined according to the fifth correspondence based on the zoom mode corresponding to the zoom operation.
In some embodiments, when the zoom mode is a sliding zoom mode, a transition camera adjacent to the first camera may be determined based on the sliding zoom mode, and then the transition camera may be determined as the second camera. When the zoom mode is a click zoom mode, the target magnification can be determined based on the click zoom mode, and then the target camera corresponding to the target magnification is determined as the second camera.
In the embodiment of the application, the second camera to be turned on is determined from the zooming mode corresponding to the zooming operation, where the zooming mode includes a sliding zoom mode or a click zoom mode. The second camera can thus be determined under different zooming modes, different scenes can be accommodated, and the flexibility of the camera zooming method is improved.
In another possible implementation manner, when determining the second camera to be turned on, the method may further include:
when the zooming mode is a sliding zooming mode, determining a transition camera adjacent to the first camera based on the sliding zooming mode, and determining the transition camera as a second camera;
And when the zooming mode is a click zooming mode, determining the target multiplying power based on the click zooming mode, and determining the target camera corresponding to the target multiplying power as the second camera.
Here, for the target magnification that a user desires to reach, the corresponding camera is referred to as the target camera. A transition camera refers to another camera lying between the first camera and the target camera. Since sliding zoom is a process of gradually changing the zoom magnification, a transition camera adjacent to the first camera can be determined based on the sliding zoom mode and taken as the second camera.
In the embodiment of the application, the transition camera may be a camera adjacent to the first camera, for example the next camera or the previous camera of the first camera. This is because different sliding directions of the sliding zoom mode correspond to different zoom magnification change processes. The sliding direction may include, but is not limited to, sliding forward or sliding backward. Illustratively, if the first camera is i and the transition camera is j, then j may be i+1 or i-1.
In some embodiments, when the sliding direction of the sliding zoom mode is forward, the zoom magnification changes from small to large, in which case j may be i+1. Correspondingly, when the sliding direction is backward, the zoom magnification changes from large to small, and j may be i-1.
In some embodiments, there is a continuous relationship between the zoom magnification interval corresponding to the transition camera and the zoom magnification interval corresponding to the first camera, and therefore, the transition camera is adjacent to the first camera. For example, the zoom magnification interval corresponding to the first camera is [0.6,1], and the zoom magnification interval corresponding to the transition camera may be [0.1,0.6] or [1,1.9].
In some embodiments, a target camera corresponding to a target magnification may be obtained, then when at least one transition camera exists between the first camera and the target camera, the at least one transition camera is sequentially used as the first camera, and based on the current sliding magnification, a transition camera adjacent to the first camera is used as the second camera until the target camera is started.
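The j = i ± 1 rule and the sequential walk to the target camera can be sketched as follows, with cameras identified by their index in focal-length order; the function names are illustrative assumptions, not part of the embodiment.

```python
def adjacent_transition_camera(i, zoom_in):
    # Sliding forward (magnification small -> large): j = i + 1;
    # sliding backward (magnification large -> small): j = i - 1.
    return i + 1 if zoom_in else i - 1

def sliding_switch_sequence(i, target):
    # Cameras opened in order during sliding zoom: each adjacent transition
    # camera becomes the next second camera until the target camera is reached.
    step = 1 if target > i else -1
    return list(range(i + step, target + step, step))

print(sliding_switch_sequence(0, 3))  # -> [1, 2, 3]
```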
In some embodiments, the target magnification refers to the magnification that the zoom operation needs to achieve. The target magnification may be of any suitable size, e.g., 5, 2.3, etc.
In some embodiments, there is a correspondence between click-zoom mode and zoom magnification. A sixth correspondence between the zoom mode and the zoom magnification may be established, so that the target magnification may be determined according to the sixth correspondence based on the zoom mode corresponding to the zoom operation.
In some embodiments, a target camera corresponding to the target magnification may be acquired, and then when at least one transition camera exists between the first camera and the target camera, the at least one transition camera is skipped, and the target camera is directly opened.
In the embodiment of the application, when the zooming mode is the sliding zoom mode, the transition camera adjacent to the first camera is taken as the second camera; when the zooming mode is the click zoom mode, the target camera corresponding to the target magnification is determined as the second camera. Thus the second camera can be determined by different methods for different scenes, improving the accuracy of determining the second camera.
In one possible implementation, when the zooming mode is the sliding zoom mode, the method may further include: obtaining a target camera corresponding to the target magnification; and when at least one transition camera exists between the first camera and the target camera, sequentially taking the at least one transition camera as the first camera and taking the transition camera adjacent to the first camera as the second camera based on the current sliding magnification, until the target camera is turned on.
Here, the target camera refers to a camera that is finally turned on by the zoom operation. The target camera may be any one of the cameras in the device. In some embodiments, the target camera may be the same as the second camera or may be different.
In some embodiments, there is a correspondence between zoom magnification and camera. A seventh correspondence between zoom magnification and cameras may be established, so that a target camera corresponding to the target magnification may be determined according to the seventh correspondence based on the target magnification.
In some embodiments, a target camera corresponding to the target magnification may be determined based on a zoom magnification interval in which the target magnification corresponds to the camera. After the target magnification is determined, a zoom magnification interval in which the target magnification is located can be determined, and then the camera corresponding to the zoom magnification interval in which the target magnification is located is used as a target camera corresponding to the target magnification. Illustratively, the target magnification is 1.5, the zoom magnification interval includes [0.6,1], [1,3], and a camera with the zoom magnification interval of [1,3] may be taken as the target camera.
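This interval lookup can be sketched as below, reusing the example intervals from the text; the camera names are hypothetical, and boundary magnifications fall to the first matching interval in this sketch.

```python
ZOOM_INTERVALS = [("camera_a", 0.6, 1.0), ("camera_b", 1.0, 3.0)]

def target_camera_for(target_magnification):
    # The target camera is the one whose zoom magnification interval
    # contains the target magnification.
    for name, low, high in ZOOM_INTERVALS:
        if low <= target_magnification <= high:
            return name
    raise ValueError("target magnification outside all zoom intervals")

print(target_camera_for(1.5))  # -> camera_b, matching the example above
```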
In some embodiments, since the sliding zoom mode may be a continuously performed sliding operation, the zoom magnification of the device is in a continuously changing state, and when the zoom operation is stopped, the zoom magnification of the device may be taken as the target magnification, so as to obtain the target camera corresponding to the target magnification.
In some embodiments, since the sliding zoom mode may be a continuously performed sliding operation, when sliding to a transition camera adjacent to the first camera, the sliding may continue to switch the cameras until switching to the target camera, so there may be at least one transition camera between the first camera and the target camera.
In some embodiments, the current sliding magnification refers to the zoom magnification that the device detects at the current moment of switching cameras. The current sliding magnification may be of any suitable size, e.g., 0.6, 1.5, etc.
In some embodiments, since the sliding zoom mode may be a continuously performed sliding operation, the current sliding magnification is continuously changed, a current sliding magnification change process may be determined based on a sliding direction of the sliding zoom mode, and then a next camera or a previous camera of the first camera is used as the second camera.
In some embodiments, since at least one transition camera is sequentially used as the first camera, the number of transition cameras adjacent to the first camera is at least one, and correspondingly, the number of second cameras is at least one.
In some embodiments, when there is no transition camera between the first camera and the target camera, the camera adjacent to the first camera is the target camera itself, and the target camera can be turned on directly.
In the embodiment of the application, when the zooming mode is the sliding zoom mode, at least one transition camera is sequentially taken as the first camera, and the transition camera adjacent to the first camera is taken as the second camera until the target camera is turned on. In this way, the target camera can be turned on through a gradual, smooth transition, improving the smoothness of the continuous camera zoom process.
In another possible implementation, when the zooming mode is the click zoom mode, the method further includes: obtaining a target camera corresponding to the target magnification; and when at least one transition camera exists between the first camera and the target camera, skipping the at least one transition camera.
Here, the target camera may be any one of cameras in the device. In some embodiments, the target camera may be the same as the second camera or may be different.
In some embodiments, the zoom magnification interval corresponding to the device may be displayed in a control; when the click position of the click zoom operation within the control is detected, the target magnification is determined from the device's zoom magnification interval based on the click position, and the target camera corresponding to the target magnification is then determined based on the cameras' zoom magnification intervals. Illustratively, if the first camera is i and the target camera is j, then j may be i+n or i-n, where n = 1, 2, 3, ….
In some embodiments, there is a correspondence between zoom magnification and camera. A seventh correspondence between zoom magnification and cameras may be established, so that a target camera corresponding to the target magnification may be determined according to the seventh correspondence based on the target magnification.
In some embodiments, there may be at least one transitional camera between the first camera and the target camera, as the target camera may be a camera that is not in an adjacent relationship with the first camera.
In some embodiments, because the duration of the click zoom mode is short, when at least one transition camera exists between the first camera and the target camera, the target camera may be taken directly as the second camera without turning on the at least one transition camera; the second camera is then turned on, a transition animation is generated based on the first image of the first camera and the second image of the second camera, and the first camera is switched to the second camera based on the transition animation.
In the embodiment of the application, when the zooming mode is the click zoom mode, at least one transition camera is skipped and the first camera is switched directly to the target camera. Since the transition cameras do not need to be turned on, the camera zooming flow can be simplified and the zooming efficiency improved.
In one possible implementation, when the zooming operation does not meet the camera switching condition, the method may further include: performing a cropping operation on the first image to obtain a cropped image; performing an enlarging operation on the cropped image to obtain a preview image; and displaying the preview image.
Here, the camera switching condition may include, but is not limited to, one of the duration of the zooming operation exceeding a preset duration, the zoom magnification reaching a switching threshold, and the like. For example, the zooming operation not satisfying the camera switching condition may be the current zoom magnification not reaching the switching threshold.
The cropping method of the first image may include, but is not limited to, performing a cropping operation on the first image through a detection frame, performing a cropping operation on the first image through deep learning, and the like. For example, the detection frame may enclose the target object so that the first image can be cropped to it. Since OpenCV represents an image as an array, the first image can be cropped according to the detection frame by array slicing; preset functions such as cv2.imwrite() can then be used to save the cropped result, and cv2.waitKey() when displaying it. OpenCV is an open-source computer vision and machine learning software library that provides functions for reading, displaying, and processing images. For another example, deep learning uses a multi-layer neural network to simulate the complex decision-making ability of the human brain and can extract the foreground of an input target image, thereby realizing the cropping operation on the first image.
In some embodiments, since the cropped image is obtained by performing a cropping operation on the first image, the cropped image may be a partial image of the first image, and the image size of the cropped image may be smaller than the image size of the first image.
In some embodiments, the first image may be cropped according to the relationship between the magnification and the focal length by a digital zoom method, so as to obtain a cropped image, where the number of pixels of the cropped image is reduced.
In some embodiments, when the first image is subjected to a cropping operation, the image of the central region of the first image may be cropped to obtain a cropped image.
In some embodiments, the preview image refers to an image that needs to be presented. In some embodiments, since the preview image is obtained by performing the zoom-in operation on the cropped image, the display content of the preview image is the same as the display content of the cropped image, and the image size of the preview image is larger than the image size of the cropped image.
Methods of enlarging a cropped image may include, but are not limited to, enlarging it by pixel interpolation, enlarging it with image editing software, and enlarging it with a graphics processing unit (Graphics Processing Unit, GPU). For example, pixel interpolation may include, but is not limited to, at least one of nearest-neighbor interpolation, bilinear interpolation, and the like. Nearest-neighbor interpolation fills in new pixels by copying the nearest pixel value. Bilinear interpolation computes a new pixel value as a weighted average of the four surrounding pixels. For another example, the image editing software may include, but is not limited to, at least one of Photoshop, GIMP, or the like; such software has a built-in enlargement tool that can be used to enlarge the cropped image. For another example, a GPU is a coprocessor for graphics and image operations and can be invoked to enlarge the cropped image.
In some embodiments, the cropped image is enlarged, corresponding to the central region of the first image being displayed enlarged. This process is similar to zooming in an area in image processing software, and the resulting preview image can be displayed on a screen.
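A minimal OpenCV sketch of this digital zoom path, cropping the central region according to the magnification and enlarging it back to the original size by bilinear interpolation; the function name is illustrative, and a magnification of at least 1 is assumed.

```python
import cv2

def digital_zoom_preview(first_image, magnification):
    # Cropping operation: keep the central region, whose size shrinks
    # as the magnification grows (magnification >= 1 assumed).
    h, w = first_image.shape[:2]
    crop_h, crop_w = int(h / magnification), int(w / magnification)
    y0, x0 = (h - crop_h) // 2, (w - crop_w) // 2
    cropped = first_image[y0:y0 + crop_h, x0:x0 + crop_w]
    # Enlarging operation: bilinear interpolation fills in the new pixels.
    return cv2.resize(cropped, (w, h), interpolation=cv2.INTER_LINEAR)
```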
In the embodiment of the application, when the zooming operation does not meet the camera switching condition, the cropped image obtained by cropping the first image is enlarged to obtain the displayed preview image. In this digital zoom manner the preview image can change with the zooming operation, which improves the accuracy of the displayed image and allows the first image to be obtained more accurately when the camera does need to be switched.
The embodiment of the application provides a camera zooming method in which a second camera to be turned on can be determined in response to a zooming operation on a first camera; when the zooming operation meets the camera switching condition, the second camera is turned on, a transition animation is generated based on a first image of the first camera and a second image of the second camera, and the first camera is switched to the second camera based on the transition animation until the second image is displayed. In this way, during camera switching the transition animation is generated from the image of the currently enabled first camera and the image of the second camera to be switched to, which alleviates problems such as unsmooth zooming caused by the chip's limit on the number of simultaneously enabled cameras and by angle-of-view differences arising from the cameras' physical placement, ensures the preview effect, and improves the smoothness of continuous switching among cameras of different focal segments.
The application of the camera zooming method provided by the embodiment of the application in an actual scene is described below.
Based on the foregoing embodiments, fig. 5 is a schematic diagram of an implementation flow of a camera zooming method according to an embodiment of the present application. As shown in fig. 5, the camera zooming method may include the steps of:
Step 201: designate the zoom magnification intervals to which the different cameras in the device belong;
Step 202: obtain a user zooming instruction;
Step 203: obtain a camera double-opening strategy corresponding to the zooming instruction;
Step 204: based on the camera double-opening strategy, perform digital zooming when the zooming operation does not meet the camera switching condition, and turn on the second camera when the zooming operation meets the camera switching condition;
Step 205: acquire a first image of the first camera and a second image of the second camera;
Step 206: generate a transition animation based on the first image and the second image;
Step 207: switch the first camera to the second camera based on the transition animation until the second image is displayed.
Each camera has a different zoom magnification interval. A specific preview camera (the first camera) is selected according to the magnification input by the user, and a camera double-opening strategy is formulated in advance so that, at a given magnification, the specific other camera corresponding to the zoom end point (the second camera) is opened. When the camera does not need to be switched, the picture acquired by the first camera is cropped and scaled according to the relation between magnification and focal length in a digital zoom manner and sent to the display for preview. When the camera needs to be switched, the picture of the first camera and the picture of the second camera are used to make a transition animation by means of image stitching, fusion, and edge elimination, and finally the preview picture transitions naturally to the picture of the second camera.
Specifically, each camera, from wide angle to telephoto, has a different zoom magnification interval, which can satisfy various shooting requirements. For example, the magnification interval of camera A is [0.6,1], and the magnification interval of camera B is [1,3]. The corresponding preview camera is automatically selected as the first camera according to the magnification input by the user and the magnification intervals, ensuring the best image quality and detail. The system's intelligent algorithm can also evaluate factors such as the current lighting conditions and the distance of the subject to further optimize camera selection and settings, so that the user can obtain a satisfactory shooting effect without manual adjustment.
Meanwhile, a camera double-opening strategy is formulated in advance: at a specific magnification, a specific camera other than the first camera is opened, including the target camera corresponding to the zoom end point. This avoids the image stalling caused by a camera not being opened in time when switching.
When the camera does not need to be switched, the image acquired by the first camera is cropped according to the relation between magnification and focal length in a digital zoom manner, and the cropped image is then interpolated and enlarged; the number of pixels in the cropped part is reduced, and the central area of the image is enlarged for display, just like enlarging an area in image processing software. The digital zoom result is sent to the display for preview.
When the cameras need to be switched, the system uses the picture of the first camera and the picture of the second camera to produce a smooth, natural transition animation through image stitching, fusion, and edge elimination. This not only ensures seamless image switching but also greatly improves the user's visual experience. First, the system analyzes the pictures of the first and second cameras in detail and extracts key feature points from the images. Then the feature points of the two images are aligned through an image stitching algorithm, achieving accurate image superposition. To avoid obvious boundary lines at the stitching seams, the system processes the transition region with edge elimination techniques so that the colors, brightness, and textures of the two pictures are more harmonious and consistent.
Finally, the preview picture transitions naturally to the picture acquired by the second camera.
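As a simplified illustration of this final hand-off, the sketch below cross-fades from the first camera's frame to the second camera's frame. The real pipeline first aligns the two frames as described above; this linear blend is only the last and simplest ingredient, and the helper name and frame count are our own.

```python
import cv2

def transition_frames(src_img, dst_img, num_frames: int = 15):
    """Yield a linear cross-fade from the first camera's frame to the second's.

    Both frames are assumed to be the same size and already registered (as in
    the stitching steps described later); the blend alone cannot hide an FOV jump.
    """
    for k in range(1, num_frames + 1):
        alpha = k / num_frames
        yield cv2.addWeighted(src_img, 1.0 - alpha, dst_img, alpha, 0.0)
```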
In the camera zooming method provided by the embodiment of the application, the second camera to be started can be determined in response to a zooming operation of the first camera; when the zooming operation meets the camera switching condition, the second camera is started, a transition animation is generated based on the first image of the first camera and the second image of the second camera, and the first camera is switched to the second camera based on the transition animation until the second image is displayed. Because the transition animation is generated from the image of the currently started first camera and the image of the second camera to be switched to, problems such as unsmooth zooming caused by the chip's limit on the number of simultaneously started cameras, and the field-of-view differences caused by the physical placement of the cameras, can be overcome, the preview effect can be ensured, and the smoothness of continuous switching among cameras with different focal segments can be improved.
Based on the foregoing embodiments, fig. 6 is a schematic diagram of an implementation flow of a camera zooming method according to an embodiment of the present application. As shown in fig. 6, the camera zooming method includes a camera double-open strategy, and the camera zooming method may include the steps of:
Step 31, acquiring a zoom mode and a first camera;
Step 32, judging whether the zoom mode is the sliding zoom mode;
If yes, the process proceeds to step 33; if no, the zoom mode is the click zoom mode and the process proceeds to step 35.
Step 33, determining a target magnification corresponding to the sliding zoom mode, and acquiring a target camera corresponding to the target magnification;
Step 34, opening at least one transition camera based on the current sliding magnification until the target camera is opened;
Step 35, determining a target magnification corresponding to the click zoom mode, and acquiring a target camera corresponding to the target magnification;
Step 36, skipping at least one transition camera and starting the target camera;
Step 37, generating a transition animation according to the change of the zoom magnification, and switching cameras based on the transition animation.
The specific processing steps are as follows:
The magnification input by the user is acquired, and the source camera corresponding to the currently input magnification (corresponding to the aforementioned first camera, denoted camera i) is determined by combining the zoom magnification intervals of the cameras. The determination of the target camera is discussed in two cases, with a code sketch following them:
In the case of sliding zoom, the camera j whose magnification interval is adjacent to that of the source camera is determined as the target camera (corresponding to the aforementioned second camera), according to whether the current operation is zoom in (magnification from small to large) or zoom out (magnification from large to small); that is, j equals i+1 or i-1;
In the case of click zoom, the click magnification input by the user and the zoom magnification intervals of the cameras are used to determine the target camera corresponding to the currently input click magnification. Click zoom is likewise divided into zoom in (magnification from small to large) and zoom out (magnification from large to small), but the input magnification may not fall in an interval adjacent to that of the source camera, so j equals i+n or i-n (where n = 1, 2, 3, …).
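The two cases can be sketched as follows. The helpers are hypothetical; `intervals` is assumed to be a list of (low, high) magnification intervals, one per camera, ordered from wide angle to telephoto, with i the source camera index.

```python
def target_index_for_slide(i: int, zoom_in: bool) -> int:
    """Sliding zoom only ever targets the adjacent interval: j = i + 1 or i - 1."""
    return i + 1 if zoom_in else i - 1

def target_index_for_click(intervals, magnification: float) -> int:
    """Click zoom may jump several intervals: find the j whose interval covers
    the clicked magnification (j = i + n or i - n for some n >= 1)."""
    for j, (low, high) in enumerate(intervals):
        if low <= magnification <= high:
            return j
    raise ValueError("clicked magnification is outside the supported range")
```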
In the embodiment of the application, the first camera may be the source camera and, correspondingly, the first image may be the source picture; the second camera may be the target camera and, correspondingly, the second image may be the target picture.
In the embodiment of the application, a camera double-open strategy is formulated. Here, the camera double-open strategy means that two cameras are turned on simultaneously in a specific magnification interval.
In both sliding zoom and click zoom, the zoom magnification changes continuously, so a continuous transition of the preview picture can be ensured. The handling in this scheme differs between the two cases, as shown in the sketch after these cases:
During sliding zoom, the transition cameras that need to be opened along the way are selected according to the continuously changing magnification, and each is opened in advance of the switch according to the formulated double-open strategy, until the target camera is switched to.
During click zoom, the target camera is opened directly; that is, the cameras spanning several adjacent magnification intervals in the middle are skipped, implementing a cross-camera strategy.
During sliding zoom, digital zoom processing is performed based on the source camera before a switching point is reached, and animation transition processing is performed when switching to an intermediate transition camera.
During click zoom, transition animation processing is always performed based on the source camera before the target camera switching point is reached, until the target camera is switched to; that is, the several transition cameras in the middle are crossed over.
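A minimal sketch of which cameras the double-open strategy pulls up in each case. The helper and its indices are illustrative, assuming cameras are ordered by magnification interval.

```python
def cameras_to_open(i: int, j: int, sliding: bool) -> list:
    """Cameras the double-open strategy pulls up between source i and target j.

    Sliding zoom opens every intermediate transition camera in turn as the
    magnification crosses its interval, ending with the target; click zoom
    skips the transition cameras and opens the target directly.
    """
    step = 1 if j > i else -1
    if sliding:
        return list(range(i + step, j + step, step))  # transition cameras, then target
    return [j]  # cross-camera strategy: target only
```

For example, with source camera 0 and target camera 3, sliding zoom opens cameras 1, 2, 3 in sequence, while click zoom opens camera 3 alone.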
With the camera double-open strategy provided by the embodiment of the application, different flows can be selected for switching cameras in different zoom modes, so that different scenes are accommodated and the flexibility of the camera zooming method is improved.
Based on the foregoing embodiments, fig. 7 is a schematic diagram showing an implementation flow of a camera zooming method according to an embodiment of the present application. As shown in fig. 7, the camera zooming method includes a process flow of transition images, and the camera zooming method may include the steps of:
Step 41, performing feature point matching based on feature points of the source picture and feature points of the target picture, and determining at least one feature point pair;
Step 42, dividing and locally aligning the source picture and the target picture respectively based on the at least one feature point pair to obtain at least one registration block;
Step 43, performing boundary fusion on the at least one registration block to obtain a transition image.
The specific processing steps are as follows:
1. Feature point matching is performed based on the feature points of the source picture and the feature points of the target picture, and at least one feature point pair is determined.
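One concrete (but not mandated) realization of this matching step uses OpenCV's ORB detector and a brute-force Hamming matcher; the embodiment does not prescribe a particular detector or descriptor.

```python
import cv2

def match_feature_points(src_gray, dst_gray, max_pairs: int = 200):
    """Detect keypoints in both pictures and return matched point pairs.

    ORB is chosen here purely as a freely available example; the pairs are
    sorted by descriptor distance and the best max_pairs are kept.
    """
    orb = cv2.ORB_create()
    kp1, des1 = orb.detectAndCompute(src_gray, None)
    kp2, des2 = orb.detectAndCompute(dst_gray, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
    return [(kp1[m.queryIdx].pt, kp2[m.trainIdx].pt) for m in matches[:max_pairs]]
```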
2. The source picture and the target picture are respectively divided and locally aligned based on the at least one feature point pair to obtain at least one registration block:
Here, because of the influence of parallax, the image alignment result obtained in the first step is only a rough registration, and regions with large parallax may still be misaligned in the result map. Parallax refers to the difference in position of the same scene in the two images caused by the difference in the cameras' shooting angles or the distance between the cameras. To deal with this, a more precise alignment method is used.
To align the images more precisely, each image is divided into several smaller blocks or grids. The blocks may be of fixed size or may be adaptively divided according to the characteristics of the image. The purpose is to handle the parallax problem of each local area more effectively.
Within each divided region, feature points of the image are detected and matched. The quality of feature point matching strongly affects the subsequent alignment precision, so suitable feature point detection and descriptor matching methods need to be selected.
For local alignment, a loss function is set for each divided region. The loss function measures how good the current registration is; for example, it may be defined using a squared error based on feature point bias, a photometric consistency error, or the like.
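One possible formulation of such a loss, combining a feature-point term with a photometric consistency term (the symbols below are introduced here for illustration and are not prescribed by the embodiment):

$$L(\theta) \;=\; \sum_{k} \bigl\| p_k' - T_\theta(p_k) \bigr\|^2 \;+\; \lambda \sum_{x \in \Omega} \bigl( I_t(T_\theta(x)) - I_s(x) \bigr)^2$$

where $T_\theta$ is the local transformation with parameters $\theta$, $(p_k, p_k')$ are the matched feature point pairs in the block, $I_s$ and $I_t$ are the source and target pictures, $\Omega$ is the block's pixel domain, and $\lambda$ weights the photometric term.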
The transformation parameters of each local region are optimized by minimizing the loss function. This is typically an iterative process that can be solved with optimization algorithms such as gradient descent or Newton's method.
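In practice, the minimization is often delegated to a robust fitter rather than hand-written gradient descent. The sketch below (our choice, not the embodiment's) fits a per-block homography with OpenCV's RANSAC solver, which minimizes reprojection error over the block's feature point pairs.

```python
import numpy as np
import cv2

def align_block(src_pts, dst_pts):
    """Fit a local homography for one divided block from its matched points.

    cv2.findHomography with RANSAC minimizes reprojection error robustly;
    at least four point pairs are required. Returns the 3x3 transform and
    the inlier mask.
    """
    src = np.float32(src_pts).reshape(-1, 1, 2)
    dst = np.float32(dst_pts).reshape(-1, 1, 2)
    return cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
```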
3. Boundary fusion is performed on the at least one registration block to obtain the transition image:
The transformation results of the local areas are fused to obtain a fine alignment of the whole image. To keep the alignment results consistent between local areas, techniques such as smoothing and transition region optimization are used to reduce artifacts at the region boundaries.
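A minimal feathering sketch for the boundary fusion step. The helper and its linear ramp are illustrative, assuming H×W×C blocks of equal height sharing `overlap` columns; production pipelines may use multi-band (Laplacian pyramid) blending instead.

```python
import numpy as np

def feather_blend(block_a, block_b, overlap: int):
    """Blend two horizontally adjacent registration blocks with a linear ramp.

    The weights fall from 1 to 0 across the shared `overlap` columns, so color
    and brightness change gradually and the seam artifact is reduced.
    """
    ramp = np.linspace(1.0, 0.0, overlap)[None, :, None]  # shape (1, overlap, 1)
    seam = block_a[:, -overlap:] * ramp + block_b[:, :overlap] * (1.0 - ramp)
    return np.concatenate(
        [block_a[:, :-overlap], seam.astype(block_a.dtype), block_b[:, overlap:]],
        axis=1,
    )
```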
In the processing flow of the transition image provided by the embodiment of the application, the picture of the source camera and the picture of the target camera are analyzed in detail, and the key feature points in the images are extracted. The feature points of the two images are then aligned by an image stitching algorithm, achieving accurate image superposition. Meanwhile, to avoid an obvious boundary line at the stitching position, boundary fusion is applied to the registration blocks, reducing artifacts at the registration block boundaries and improving the accuracy of the transition image.
Based on the above embodiments, a camera zoom device is provided in an embodiment of the present application.
In still another embodiment of the present application, fig. 8 is a schematic diagram of a composition structure of a camera zoom device according to an embodiment of the present application. As shown in fig. 8, a camera zoom apparatus 800 according to an embodiment of the present application may include:
a determining unit 801 for determining a second camera to be turned on in response to a zooming operation of the first camera;
a generating unit 802, configured to turn on the second camera when the zoom operation meets the camera switching condition, and generate a transition animation based on the first image of the first camera and the second image of the second camera;
and a switching unit 803, configured to switch the first camera to the second camera based on the transition animation until the second image is displayed.
In some embodiments, the generating unit 802 is further configured to determine at least one first feature point of the first image and at least one second feature point of the second image, perform feature point matching on the at least one first feature point and the at least one second feature point, determine at least one feature point pair, align the first image and the second image based on the at least one feature point pair, obtain a transition image, and generate a transition animation based on the transition image.
In some embodiments, the generating unit 802 is further configured to divide the first image and the second image respectively based on at least one feature point pair, determine at least one first division block corresponding to the first image and at least one second division block corresponding to the second image, locally align the first division block and the second division block based on a matching relationship between feature points in the first division block and feature points in the second division block to obtain at least one registration block, and perform boundary fusion on the at least one registration block to obtain the transition image.
In some embodiments, the determining unit 801 is further configured to determine a zoom mode corresponding to the zoom operation, and determine the second camera to be turned on based on the zoom mode, where the zoom mode includes a sliding zoom mode or a click zoom mode.
In some embodiments, the determining unit 801 is further configured to determine, when the zoom mode is a sliding zoom mode, a transition camera adjacent to the first camera based on the sliding zoom mode, determine the transition camera as the second camera, and determine, when the zoom mode is a click zoom mode, a target magnification based on the click zoom mode, and determine a target camera corresponding to the target magnification as the second camera.
In some embodiments, when the zoom mode is a sliding zoom mode, the determining unit 801 is further configured to obtain a target camera corresponding to the target magnification, sequentially take at least one transition camera as the first camera when at least one transition camera exists between the first camera and the target camera, and take a transition camera adjacent to the first camera as the second camera based on the current sliding magnification until the target camera is started.
In some embodiments, when the zoom mode is a click zoom mode, the determining unit 801 is further configured to obtain a target camera corresponding to the target magnification, and skip at least one transition camera when at least one transition camera exists between the first camera and the target camera.
In some embodiments, when the zoom operation does not meet the camera switching condition, the camera zoom apparatus 800 may further include an operation unit configured to perform a cropping operation on the first image to obtain a cropped image, perform a zooming operation on the cropped image to obtain a preview image, and display the preview image.
In still another embodiment of the present application, fig. 9 is a schematic diagram of a composition structure of an electronic device according to an embodiment of the present application. As shown in fig. 9, an electronic device 900 according to an embodiment of the present application may include a processor 901, a memory 902, a communication interface 903, a camera module 905, and a bus 904 connecting them. The communication interface 903 is used for receiving and transmitting signals in the process of exchanging information with other external devices; the memory 902 is used for storing a computer program capable of running on the processor 901; and the processor 901 is used for executing the camera zooming method of any one of the preceding embodiments when the computer program runs.
In an embodiment of the present application, the processor 901 may be at least one of an Application Specific Integrated Circuit (ASIC), a Digital Signal Processor (DSP), a Digital Signal Processing Device (DSPD), a Programmable Logic Device (PLD), a Field Programmable Gate Array (FPGA), a Central Processing Unit (CPU), a controller, a microcontroller, and a microprocessor. It will be appreciated that, for different devices, other electronics may be used to implement the above-described processor functions, and embodiments of the present application are not particularly limited in this respect. The electronic device 900 may further comprise a memory 902, which may be connected to the processor 901; the memory 902 is adapted to store executable program code comprising computer operation instructions, and may comprise a high-speed RAM memory and possibly also a non-volatile memory, for example at least two disk memories.
In an embodiment of the application, the bus 904 is used to connect the communication interface 903, the processor 901, and the memory 902, and to enable mutual communication among these components.
In practical applications, the memory 902 may be a volatile memory, such as a Random-Access Memory (RAM), or a non-volatile memory, such as a Read-Only Memory (ROM), a flash memory, a Hard Disk Drive (HDD), or a Solid-State Drive (SSD), or a combination of the above types of memories, and provides instructions and data to the processor 901.
In an embodiment of the present application, the camera module 905 includes at least one camera, where the at least one camera may include, but is not limited to, at least one of a first camera, a second camera, and the like.
In addition, each functional module in the embodiment of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional modules.
If the integrated units are implemented in the form of software functional modules and are not sold or used as independent products, they may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present embodiment may be embodied essentially, or in the part contributing to the related art, in the form of a software product; the software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to perform all or part of the steps of the method of the present embodiment. The storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
An embodiment of the present application provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements a camera zooming method as described above.
Specifically, the computer program or instructions corresponding to a camera zooming method in the present embodiment may be stored on a storage medium such as an optical disc, a hard disk, or a USB flash drive, and when the computer program or instructions corresponding to the camera zooming method in the storage medium are read and executed by an electronic device, the following steps are included:
determining a second camera to be started in response to zooming operation of the first camera;
when the zooming operation meets the camera switching condition, starting the second camera, and generating a transition animation based on a first image of the first camera and a second image of the second camera;
and switching the first camera to the second camera based on the transition animation until the second image is displayed.
The embodiment of the application also provides a computer program product.
In some embodiments, the computer program product may include a computer program or instructions.
In some embodiments, the computer program product may be applied to an electronic device in the embodiments of the present application, and when the computer program or the instructions run on the electronic device, the electronic device executes corresponding processes implemented by the electronic device in the methods in the embodiments of the present application, which are not described herein for brevity.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the apparatus and units described above may refer to corresponding procedures in the foregoing method embodiments, which are not described herein again.
In the several embodiments provided by the present application, it should be understood that the disclosed apparatus and method may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of the units is merely a logical function division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
It should be noted that, in each embodiment of the present application, each functional unit may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present application may be embodied essentially, or in the part contributing to the related art, or in whole or in part, in the form of a software product stored in a storage medium, including several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present application. The storage medium includes a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disk, or other various media capable of storing program code.
It should also be noted that, in the present disclosure, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a(n) ..." does not exclude the presence of other like elements in the process, method, article, or apparatus that comprises the element.
The foregoing embodiment numbers of the present application are merely for the purpose of description, and do not represent the advantages or disadvantages of the embodiments.
The methods disclosed in the method embodiments provided by the application can be arbitrarily combined under the condition of no conflict to obtain a new method embodiment.
The features disclosed in the several product embodiments provided by the application can be combined arbitrarily under the condition of no conflict to obtain new product embodiments.
The features disclosed in the embodiments of the method or the apparatus provided by the application can be arbitrarily combined without conflict to obtain new embodiments of the method or the apparatus.
The foregoing is merely illustrative of the present application, and the present application is not limited thereto, and any person skilled in the art will readily recognize that variations or substitutions are within the scope of the present application.