Detailed Description
Hereinafter, example embodiments according to the present application will be described in detail with reference to the accompanying drawings. It should be understood that the described embodiments are only some embodiments of the present application and not all embodiments of the present application, and that the present application is not limited by the example embodiments described herein.
Summary of the Application
Currently, synthesized images together with corresponding depth images can be generated by 3D (three-dimensional) engine rendering. However, an image synthesized by 3D engine rendering differs considerably from a real captured image, and training a depth estimation neural network model with such images usually requires introducing additional adversarial training to reduce the influence of this difference.
In view of the above technical problems, a basic concept of the present application is to provide a method, an apparatus, and an electronic device for generating an image, which can add different preset object models at different positions of a first image as needed, so that a large number of second images and second depth images can be obtained from the different preset object models; the second images and second depth images are then used as annotation data for training a depth estimation neural network model, which saves the time and effort of training the neural network; the cost is reduced because a large amount of sample data does not need to be collected; in addition, since the second depth image is obtained from the real first image, the possibility of errors in the second depth image is reduced, and additional adversarial training on the annotation data is avoided.
It should be noted that the application scope of the present application is not limited to the field of image processing technology. For example, the technical solution mentioned in the embodiments of the present application may also be applied to other intelligent mobile devices for providing technical support for image processing of the intelligent mobile devices.
Various non-limiting embodiments of the present application will now be described in detail with reference to the accompanying drawings.
Exemplary System
FIG. 1 is a schematic diagram of a scenario of an exemplary system of the present application. As shown in fig. 1, parameter estimation is performed on a first image (the first image may be an RGB image or a grayscale image); performing light source estimation based on the first image and the result of the parameter estimation (or based on the first image, the result of the parameter estimation, and the first depth image); and editing and rendering according to the light source estimation result, the first image, the first depth image and the preset object model to obtain a second image and a second depth image. Specific implementation procedures are described in detail below in the following method and apparatus embodiments.
Exemplary Method
Fig. 2 is a flowchart illustrating a method for generating an image according to an exemplary embodiment of the present application. The method for generating the image can be applied to the technical field of image processing of automobiles and can also be applied to the field of image processing functions of intelligent robots. As shown in fig. 2, a method for generating an image according to an embodiment of the present application includes the following steps:
Step 101, determining reflection information of each pixel point in the first image.
It should be noted that the first image may be an RGB image or a grayscale image, and the first image may be a sample image in a sample library.
The reflection information includes a diffuse reflection parameter and a specular reflection parameter. In this embodiment, the reflection information of each pixel point may refer to the diffuse reflection parameter corresponding to the pixel point. Diffuse reflection refers to the phenomenon that light rays are reflected randomly in all directions by a rough surface, and it indicates how the material of an object reflects illumination. In this embodiment, the diffuse reflection parameter r(x, y) of each pixel point (x, y) in the first image may be determined by the following formula:
wherein r(x, y) represents the diffuse reflection parameter of the pixel point (x, y), i_x represents the gradient of the pixel point (x, y) in the horizontal direction, i_y represents the gradient of the pixel point (x, y) in the vertical direction, T is a preset threshold value with 0 ≤ T ≤ 255, and p is a natural number, generally taking the value 1 or 2.
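Since the equation image for r(x, y) is not reproduced above, the exact formula cannot be restated here. The following is a minimal sketch of one plausible gradient-based formulation, assuming r(x, y) is taken as the p-norm of the horizontal and vertical gradients i_x and i_y with values above the threshold T suppressed; the Sobel gradients, the OpenCV BGR convention and the suppression rule are assumptions for illustration only.

```python
import numpy as np
import cv2  # OpenCV, used here for gradient computation

def diffuse_reflection_map(first_image: np.ndarray, T: float = 30.0, p: int = 2) -> np.ndarray:
    """Illustrative per-pixel diffuse reflection parameter r(x, y).

    Assumption: r(x, y) is the p-norm of the image gradients (i_x, i_y),
    zeroed where it exceeds the preset threshold T (0 <= T <= 255), so that
    strong edges are treated as reflectance changes rather than shading.
    The exact formula of the embodiment is not reproduced in the text above.
    """
    gray = first_image if first_image.ndim == 2 else cv2.cvtColor(first_image, cv2.COLOR_BGR2GRAY)
    gray = gray.astype(np.float32)
    i_x = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)  # gradient in the horizontal direction
    i_y = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)  # gradient in the vertical direction
    r = (np.abs(i_x) ** p + np.abs(i_y) ** p) ** (1.0 / p)  # p-norm, p generally 1 or 2
    r[r > T] = 0.0
    return r
```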
Step 102, determining light source information in a scene where the first image is captured according to the first image, a surface normal map corresponding to the first image, and the reflection information.
In one embodiment, the light source information may include a light source position, a light source intensity, and the like. The light source information in the scene where the first image is captured refers to the light sources present in that scene at the moment the first image is captured. For example: the first image is captured in a room; the scene in the room is photographed by a camera to obtain the first image, and the first image includes a window of the room and a desk lamp that is switched on, so the sunlight passing through the window and the lit desk lamp can be regarded as the light sources described by the light source information.
In an embodiment, the first image may be input into a trained preset normal map extraction neural network to obtain the surface normal map corresponding to the first image. The preset normal map extraction neural network can be obtained by training a convolutional neural network with a large number of sample images.
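A minimal sketch of invoking such a pretrained surface-normal extraction network with PyTorch is shown below; the model object, its expected input/output layout and the normalization step are assumptions, since the embodiment does not specify a concrete network.

```python
import numpy as np
import torch
import torch.nn.functional as F

def extract_surface_normals(first_image: np.ndarray, normal_net: torch.nn.Module) -> np.ndarray:
    """Run a (hypothetical) pretrained normal map extraction network on an
    H x W x 3 RGB image and return an H x W x 3 map of unit surface normals."""
    normal_net.eval()
    x = torch.from_numpy(first_image).float().permute(2, 0, 1).unsqueeze(0) / 255.0  # 1 x 3 x H x W
    with torch.no_grad():
        n = normal_net(x)                 # assumed output: 1 x 3 x H x W normal map
    n = F.normalize(n, dim=1)             # enforce unit-length normals
    return n.squeeze(0).permute(1, 2, 0).cpu().numpy()
```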
Step 103, editing and rendering a preset object model to be added in the first image according to the first image, the surface normal map, the reflection information and the light source information to obtain a second image.
In an embodiment, the preset object model may be a person, an animal, a machine, or the like. The preset object model may be added to the first image according to the actual application, and then edited and rendered to obtain the second image. For example: the first image is captured in a room containing a window and a lit desk lamp; if the preset object model is a three-dimensional model of a cat, the three-dimensional model of the cat can be added below the window in the first image and then edited and rendered to obtain the second image, so that the second image contains the three-dimensional model of the cat.
Step 104, obtaining a second depth image corresponding to the second image according to the first depth image corresponding to the first image and the preset object model.
It should be noted that the first depth image corresponds to the first image, and the first depth image may be a sample depth image in a sample library.
According to the method for generating an image, different preset object models can be added at different positions of the first image as needed, so that a large number of second images and second depth images can be obtained from the different preset object models; the second images and second depth images are then used as annotation data for training a depth estimation neural network model, which saves the time and effort of training the neural network; the cost is reduced because a large amount of sample data does not need to be collected; in addition, since the second depth image is obtained from the real first image, the possibility of errors in the second depth image is reduced, and additional adversarial training on the annotation data is avoided.
An exemplary embodiment of the present application provides another method of generating an image. The embodiment shown in the present application is extended based on the embodiment shown in fig. 2 of the present application, and the differences between the embodiment shown in the present application and the embodiment shown in fig. 2 are mainly described below, and the same parts are not described again. The method for generating the image provided by the embodiment of the application further comprises the following steps:
Determining a surface normal corresponding to each pixel point in the first depth image to obtain the surface normal map corresponding to the first image.
In an embodiment, the surface normal corresponding to each pixel point can be obtained by fitting, in a 3D coordinate system, a plane to the pixel point and a preset number of surrounding pixel points and computing the normal of that plane. The preset number of surrounding pixel points can be selected according to the actual application and is not specifically limited here.
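A sketch of this computation is given below: each pixel of the first depth image is back-projected to 3D with the camera intrinsics, and the plane normal is approximated by the cross product of the two local tangent vectors. The finite-difference neighbourhood and the intrinsics fx, fy, cx, cy are assumptions; a larger preset neighbourhood could be plane-fitted instead.

```python
import numpy as np

def normals_from_depth(depth: np.ndarray, fx: float, fy: float, cx: float, cy: float) -> np.ndarray:
    """Estimate a surface normal for every pixel of the first depth image."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    # Back-project every pixel to 3D camera coordinates.
    X = (u - cx) * depth / fx
    Y = (v - cy) * depth / fy
    P = np.dstack([X, Y, depth])                       # H x W x 3 point map
    # Local tangent vectors along the image axes (finite differences).
    dx = np.gradient(P, axis=1)
    dy = np.gradient(P, axis=0)
    n = np.cross(dx, dy)                               # normal of the locally fitted plane
    n /= np.linalg.norm(n, axis=2, keepdims=True) + 1e-8
    return n
```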
According to the method for generating the image, the surface normal map corresponding to the first image can be directly obtained by using the first depth image, the implementation process is simple, extra resources are not needed, resources and space can be saved, and the implementation speed is improved.
Fig. 3 is a schematic flowchart of determining light source information in a scene where a first image is captured according to the first image, a surface normal map corresponding to the first image, and reflection information according to an exemplary embodiment of the present application. The embodiment shown in fig. 3 of the present application is extended based on the embodiment shown in fig. 2 of the present application, and the differences between the embodiment shown in fig. 3 and the embodiment shown in fig. 2 are emphasized below, and the descriptions of the same parts are omitted.
As shown in fig. 3, in the method for generating an image provided in the embodiment of the present application, the light source information includes a light source position and a light source intensity, and determining the light source information in the scene where the first image is captured according to the first image, the surface normal map corresponding to the first image and the reflection information (i.e., step 102) includes the following steps:
Step 102a, performing image segmentation on the first image to obtain a plurality of image sub-regions.
The first image is segmented to obtain a plurality of image sub-regions (sets of pixels, also referred to as super-pixels). A super-pixel is a small region formed by a series of pixel points that are adjacent in position and similar in characteristics such as color, brightness and texture. Most of these small regions retain information useful for further image segmentation and generally do not destroy the boundary information of objects in the image.
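As a minimal sketch, the segmentation can be performed with the SLIC super-pixel algorithm from scikit-image; SLIC and the segment count are assumptions, since the embodiment does not name a particular segmentation algorithm.

```python
import numpy as np
from skimage.segmentation import slic

def segment_into_subregions(first_image: np.ndarray, n_segments: int = 400) -> np.ndarray:
    """Split the first image into super-pixels (image sub-regions).

    Returns an H x W label map; pixels sharing a label form one sub-region of
    adjacent pixels with similar color, brightness and texture."""
    return slic(first_image, n_segments=n_segments, compactness=10, start_label=0)
```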
Step 102b, determining a feature vector of each image sub-region by using the surface normal map and the reflection information.
It should be noted that determining the feature vector of each image sub-region by using the surface normal map and the reflection information may be implemented in any feasible manner according to the actual application, and is not specifically limited here.
In this embodiment, the reflection information is the diffuse reflection parameter, and the feature vector E_j of each image sub-region j is determined by using the surface normal map and the reflection information as follows:

wherein E_j(n) represents the value of the feature vector of the image sub-region j calculated by the n-th operator, S_j represents the area range of the image sub-region j, I(x, y) represents the superposition of the surface normal value and the diffuse reflection parameter value corresponding to the pixel point (x, y), F_n(x, y) represents the n operators, n takes the value 17 and comprises 9 texture template operators, 6 edge operators in different directions and 2 color operators, and k takes the values 2 and 4, representing an energy feature when k is 2 and a peak feature when k is 4.

The feature vectors of the four image sub-regions adjacent to the image sub-region j, and the feature vectors at two scales, are then calculated by the above formula, and the calculated feature vectors are concatenated to construct a feature vector of dimension 17 × 2 × 5 × 2 = 340, where, from left to right, 17 represents the 17 operators, 2 represents the two values of k (2 and 4), 5 represents the 5 image sub-regions, and 2 represents the 2 scales. It should be noted that if the image sub-region j is located at a corner and does not have four adjacent image sub-regions, the feature vector corresponding to each missing adjacent image sub-region is replaced with 0. The two scales are typically the original scale of the image sub-region j and a scale smaller than the original scale (typically 50% of the scale is selected).
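A condensed sketch of this feature computation is shown below. The concrete 17-operator bank (9 texture template operators, 6 oriented edge operators, 2 color operators) is not listed in the text, so the `filters` argument is left as an assumed input; the formula in the docstring and the neighbour/scale concatenation summarized in the trailing comment are stated as assumptions consistent with the description above.

```python
import numpy as np
from scipy.ndimage import convolve

def region_features(I: np.ndarray, labels: np.ndarray, filters: list) -> np.ndarray:
    """Assumed form: E_j(n) = sum over S_j of |I(x, y) * F_n(x, y)|^k for k in {2, 4}.

    I       : H x W map superposing surface-normal and diffuse-reflection values
    labels  : H x W super-pixel label map (sub-region index j for each pixel)
    filters : list of 17 operator kernels F_n (assumed, not specified here)
    Returns : n_regions x (17 * 2) array of energy (k=2) and peak (k=4) features.
    """
    n_regions = int(labels.max()) + 1
    feats = np.zeros((n_regions, len(filters) * 2))
    for n, F_n in enumerate(filters):
        resp = convolve(I, F_n, mode="nearest")        # I(x, y) * F_n(x, y)
        for col, k in enumerate((2, 4)):
            e = np.abs(resp) ** k
            # Sum the filter response over each sub-region S_j.
            feats[:, n * 2 + col] = np.bincount(labels.ravel(), weights=e.ravel(),
                                                minlength=n_regions)
    return feats

# Concatenating these features for a sub-region, its four neighbours (zero-padded
# at image corners) and a second, smaller scale gives 17 x 2 x 5 x 2 = 340 dimensions.
```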
Step 102c, determining the light source position in the first image according to the feature vector of each image sub-region and a preset light source binary classification neural network.
It should be noted that the feature vector of each image sub-region (also referred to as a super-pixel) is used as the input of the preset light source binary classification neural network, and whether the image sub-region is a light source is taken as the output; if an image sub-region is determined to be a light source, the position of that image sub-region is the position of a light source. Determining the light source position in the first image means determining the pixel coordinates of each light source in the first image; the pixel coordinates of the l-th light source are denoted (x_l, y_l), where l is a natural number.
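A minimal sketch of such a light source binary classification network in PyTorch is shown below; the two-layer architecture and the hidden size are assumptions, since the embodiment only requires a binary classifier over the 340-dimensional super-pixel feature vectors.

```python
import torch
import torch.nn as nn

class LightSourceClassifier(nn.Module):
    """Binary classifier: is an image sub-region (super-pixel) a light source?"""

    def __init__(self, in_dim: int = 340, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),  # single logit: light source / not a light source
        )

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        return self.net(features)

# Usage sketch: sub-regions whose predicted probability exceeds 0.5 are treated as
# light sources, and their pixel coordinates (x_l, y_l) give the light source positions.
# probs = torch.sigmoid(LightSourceClassifier()(feature_batch))
```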
Step 102d, determining the intensity of the light source in the first image according to the first image and the position of the light source in the first image.
It should be noted that, determining the intensity of the light source in the first image according to the first image and the position of the light source in the first image may be implemented in any feasible manner according to the actual application condition, and this is not particularly limited.
In the embodiment of the present application, determining the intensity of the light source in the first image is implemented by using the following formula:
wherein L_l represents the intensity of the l-th light source, pixels represents the pixels in the first image, I_l represents the pixel value of the pixel point (x_l, y_l) in the first image, and R_l(L) represents the pixel value of the pixel point (x_l, y_l) rendered under the action of the l-th light source with intensity L. Several candidate values of the intensity of the l-th light source can be estimated, and the candidate value that minimizes the above expression among the estimated values is taken as the light source intensity L_l.
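A sketch of the intensity search described above: several candidate intensities are tried and the one that minimizes the difference between the captured first image and the re-rendered result is kept. The `render_under_light` callable and the candidate grid are assumptions, since the exact expression is not reproduced in the text.

```python
import numpy as np
from typing import Callable

def estimate_light_intensity(first_image: np.ndarray,
                             render_under_light: Callable[[float], np.ndarray],
                             candidates=np.linspace(0.1, 10.0, 50)) -> float:
    """Pick the candidate intensity L_l that best reproduces the observed pixels.

    render_under_light(L) is assumed to return the scene re-rendered with the
    l-th light source set to intensity L; the returned L_l minimizes the total
    absolute difference to the captured first image."""
    best_L, best_err = None, np.inf
    for L in candidates:
        rendered = render_under_light(L)
        err = np.abs(first_image.astype(np.float64) - rendered.astype(np.float64)).sum()
        if err < best_err:
            best_L, best_err = L, err
    return best_L
```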
According to the method for generating the image, the light source position and the light source intensity in the first image can be obtained, so that the second image and the second depth image generated according to the first image are more real and effective.
Fig. 4 is a flowchart illustrating editing and rendering a preset object model to be added in a first image according to the first image, a surface normal map, reflection information, and light source information to obtain a second image according to an exemplary embodiment of the present application. The embodiment shown in fig. 4 of the present application is extended based on the embodiment shown in fig. 3 of the present application, and the differences between the embodiment shown in fig. 4 and the embodiment shown in fig. 3 are emphasized below, and the descriptions of the same parts are omitted.
As shown in fig. 4, in the method for generating an image according to the embodiment of the present application, editing and rendering a preset object model to be added in a first image according to the first image, a surface normal map, reflection information, and light source information to obtain a second image (i.e., step 103), including:
and 103a, limiting the placing position of the preset object model through the surface normal map.
In an embodiment, the placement position of the preset object model may be constrained by the surface normal map, so as to avoid the preset object model being placed outside the boundary of the first image.
Step 103b, camera parameters of the first image are determined.
It should be noted that the camera parameters include camera intrinsic parameters and camera extrinsic parameters. The intrinsic parameters are parameters related to the characteristics of the camera itself, such as the focal length and pixel size of the camera. The extrinsic parameters are parameters of the camera in the world coordinate system, such as the position and rotation direction of the camera.
Step 103c, determining the pixel coordinates of the preset object model according to the camera parameters of the first image and the three-dimensional coordinates of the preset object model.
The three-dimensional coordinates (i.e., three-dimensional Cartesian coordinates (x, y, z)) express a point in a three-dimensional Cartesian coordinate system, where x, y and z are the coordinate values along the x, y and z axes, which share a common origin and are orthogonal to each other. The pixel coordinates (x, y) give the location of a pixel in the image.
Step 103d, editing and rendering the first image and the preset object model according to the pixel coordinates of the preset object model, the reflection information, the light source position and the light source intensity, to obtain the second image.
It should be noted that editing and rendering the first image and the preset object model means using the preset object model to replace the content at the corresponding position in the first image, so as to obtain the second image.
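A minimal sketch of this compositing step is shown below; the rendered object image and its coverage mask are assumed to come from the rendering engine, which is itself responsible for shading the model with the estimated light source position and intensity.

```python
import numpy as np

def composite_second_image(first_image: np.ndarray,
                           rendered_object: np.ndarray,
                           object_mask: np.ndarray) -> np.ndarray:
    """Replace the pixels of the first image covered by the rendered preset object model.

    object_mask is an H x W boolean array that is True where the rendered
    object model covers the first image; those pixels take the rendered values."""
    second_image = first_image.copy()
    second_image[object_mask] = rendered_object[object_mask]
    return second_image
```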
According to the method for generating the image, the placing position of the preset object model can be limited through the surface normal map, so that the preset object model can be prevented from exceeding the boundary of the first image, and the generated second image is more real and effective.
Fig. 5 is a schematic flowchart of a process of obtaining a second depth image corresponding to a second image according to a first depth image corresponding to a first image and a preset object model according to an exemplary embodiment of the present application. The embodiment shown in fig. 5 of the present application is extended on the basis of the embodiment shown in fig. 4 of the present application, and the differences between the embodiment shown in fig. 5 and the embodiment shown in fig. 4 are emphasized below, and the descriptions of the same parts are omitted.
As shown in fig. 5, in the method for generating an image according to the embodiment of the present application, obtaining a second depth image corresponding to a second image according to a first depth image corresponding to a first image and a preset object model (i.e. step 104), includes:
and 104a, obtaining the depth value of each pixel point in the preset object model according to the three-dimensional coordinates of the preset object model.
It should be noted that the z-coordinate value of the three-dimensional coordinates (x, y, z) of each point of the preset object model may be used as the depth value of the corresponding pixel point of the preset object model.
Step 104b, obtaining the second depth image according to the first depth image and the depth value of each pixel point of the preset object model.
It should be noted that, for the part of the second depth image that is the same as the first depth image, the depth value of each pixel point is the depth value of the corresponding pixel point in the first depth image; for the part of the second depth image occupied by the preset object model, the depth value of each pixel point is the depth value of the corresponding pixel point of the preset object model.
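A minimal sketch of assembling the second depth image follows directly from this rule; the per-pixel model depth map and the coverage mask are assumed to come from the projection of the preset object model described above.

```python
import numpy as np

def composite_second_depth(first_depth: np.ndarray,
                           model_depth: np.ndarray,
                           object_mask: np.ndarray) -> np.ndarray:
    """Second depth image: the model depth where the preset object model is placed,
    and the first depth image everywhere else."""
    return np.where(object_mask, model_depth, first_depth)
```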
According to the method for generating the image, the depth value of each pixel point of the preset object model can be obtained according to the three-dimensional coordinates of the preset object model, and the second depth image can be obtained according to the first depth image and the depth values of the pixel points of the preset object model, so that the implementation is simple and fast and the data are real and effective.
Fig. 6 is a schematic flowchart of determining pixel coordinates of a preset object model according to camera parameters of a first image and three-dimensional coordinates of the preset object model according to an exemplary embodiment of the present application. The embodiment shown in fig. 6 of the present application is extended based on the embodiment shown in fig. 4 of the present application, and the differences between the embodiment shown in fig. 6 and the embodiment shown in fig. 4 are emphasized below, and the descriptions of the same parts are omitted.
As shown in fig. 6, in the method for generating an image according to the embodiment of the present application, determining pixel coordinates of a preset object model according to camera parameters of a first image and three-dimensional coordinates of the preset object model (i.e. step 103c) includes:
Step 103c1, setting a reference pixel point of the preset object model.
It should be noted that the pixel point at the center of the preset object model may be set as the reference pixel point (x_1, y_1).
Step 103c2, setting the pixel coordinate and the depth value of the reference pixel point.
It should be noted that the coordinates of the preset object model in the pixel coordinate system may be changed by changing the position of the preset object model in the first image, so as to set the pixel coordinates of the reference pixel point. Changing the position of the preset object model in the first image can be realized by dragging and the like. The depth value d of the reference pixel point (x_1, y_1) is set according to the practical application, with 0 < d ≤ D(x_1, y_1), wherein D(x_1, y_1) represents the depth value of the pixel point in the first depth image corresponding to the reference pixel point (x_1, y_1).
Step 103c3, calculating the three-dimensional coordinates of the reference pixel point by using a preset three-dimensional coordinate calculation formula according to the camera parameters of the first image and the pixel coordinates and depth value of the reference pixel point.
It should be noted that the preset three-dimensional coordinate calculation formula may be selected according to an actual application condition, and is not limited thereto.
The preset three-dimensional coordinate calculation formula in this embodiment is:
W(x_2, y_2, z_2) = D(x_1, y_1) · K^(-1) · [x_1, y_1, 1]^T

wherein W(x_2, y_2, z_2) represents the three-dimensional coordinates of the reference pixel point (x_1, y_1), K denotes the camera intrinsic parameter matrix, and D(x_1, y_1) represents the depth value of the pixel point in the first depth image corresponding to the reference pixel point (x_1, y_1).
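A sketch of this back-projection with a pinhole intrinsic matrix K is shown below; the concrete values of K are an assumption for illustration.

```python
import numpy as np

def backproject_reference_pixel(x1: float, y1: float, depth: float, K: np.ndarray) -> np.ndarray:
    """W(x_2, y_2, z_2) = D(x_1, y_1) * K^(-1) * [x_1, y_1, 1]^T."""
    pixel_h = np.array([x1, y1, 1.0])
    return depth * (np.linalg.inv(K) @ pixel_h)

# Example with an assumed intrinsic matrix (focal lengths fx, fy; principal point cx, cy):
# K = np.array([[fx, 0.0, cx], [0.0, fy, cy], [0.0, 0.0, 1.0]])
```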
Step 103c4, calculating the pixel coordinates of each pixel point of the preset object model by using a preset pixel coordinate calculation formula according to the camera parameters of the first image, the three-dimensional coordinates of the reference pixel point, the three-dimensional coordinates of the preset object model, and the relative position between the reference pixel point and each pixel point of the preset object model.
It should be noted that the preset pixel coordinate calculation formula may be selected according to the actual application, and is not limited here.
The preset pixel coordinate calculation formula in this embodiment is:
wherein (x_t, y_t) represents the pixel coordinates of the point (x_t, y_t, z_t) of the preset object model, (x_2, y_2, z_2) represents the three-dimensional coordinates of the reference pixel point (x_1, y_1), Δx_t denotes the relative position (also called the deviation, which may take the value x_t − x_2) between x_t and x_2, Δy_t denotes the relative position (also called the deviation, which may take the value y_t − y_2) between y_t and y_2, Δz_t denotes the relative position (also called the deviation, which may take the value z_t − z_2) between z_t and z_2, and K denotes the camera intrinsic parameter matrix.
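The equation image for the preset pixel coordinate calculation formula is not reproduced above; under the assumption implied by the wherein clause, namely that each model point lies at W(x_2, y_2, z_2) + (Δx_t, Δy_t, Δz_t) in camera coordinates and is projected through K, a hedged sketch is:

```python
import numpy as np

def project_model_points(W_ref: np.ndarray, offsets: np.ndarray, K: np.ndarray) -> np.ndarray:
    """Assumed pinhole projection of the preset object model points.

    W_ref   : (3,)  three-dimensional coordinates of the reference pixel point
    offsets : (N, 3) relative positions (dx_t, dy_t, dz_t) of the model points
    K       : (3, 3) camera intrinsic parameter matrix
    Returns : (N, 2) pixel coordinates (x_t, y_t)."""
    points = W_ref[None, :] + offsets      # N x 3 camera-space points
    proj = (K @ points.T).T                # N x 3 homogeneous pixel coordinates
    return proj[:, :2] / proj[:, 2:3]      # divide by depth z_t
```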
According to the method for generating the image, the pixel coordinates of the preset object model can be obtained, and the second image can be conveniently generated subsequently.
Fig. 7 is a flowchart illustrating a method for generating an image according to another exemplary embodiment of the present application. The embodiment shown in fig. 7 of the present application is extended based on the embodiments shown in fig. 2 to 6 of the present application, and the differences between the embodiment shown in fig. 7 and the embodiments shown in fig. 2 to 6 are emphasized below, and the descriptions of the same parts are omitted.
As shown in fig. 7, in the method for generating an image according to the embodiment of the present application, before editing and rendering a preset object model to be added in a first image according to the first image, a surface normal map corresponding to the first image, reflection information, and light source information to obtain a second image (i.e., step 103), the method further includes:
Step 105, adding the preset object model to the first image.
It should be noted that, the preset object model may be added to the corresponding position in the first image according to the specific content of the first image and the specific content of the preset object model. The preset object model is a 3D model and can be a person, an animal, a plant, a machine and the like. According to actual needs, a large number of object models can be constructed and added to the first image to generate a large number of second images and second depth images.
According to the method for generating the image, the preset object model is added into the first image, the second image and the second depth image can be generated, and a large amount of sample data does not need to be collected, so that time and energy can be saved, and cost can be reduced.
Exemplary Devices
Fig. 8 is a schematic structural diagram of an apparatus for generating an image according to an exemplary embodiment of the present application. The device for generating the image can be applied to the field of image processing of automobiles and can also be applied to the field of image processing functions of intelligent robots. As shown in fig. 8, an apparatus for generating an image according to an embodiment of the present application includes:
a reflection information determining module 201, configured to determine reflection information of each pixel point in the first image;

the light source determining module 202 is configured to determine light source information in a scene where the first image is captured according to the first image, a surface normal map corresponding to the first image, and the reflection information;

the second image acquisition module 203 is configured to edit and render a preset object model to be added in the first image according to the first image, the surface normal map, the reflection information and the light source information to obtain a second image;

and the second depth image obtaining module 204 is configured to obtain a second depth image corresponding to the second image according to the first depth image corresponding to the first image and the preset object model.
An exemplary embodiment of the present application provides a schematic structural diagram of the reflection information determination module 201 in an apparatus for generating an image. The embodiment shown in the present application is extended based on the embodiment shown in fig. 8 of the present application, and the differences between the embodiment shown in the present application and the embodiment shown in fig. 8 are mainly described below, and the descriptions of the same parts are omitted.
In the apparatus for generating an image according to the embodiment of the present application, the reflection information determining module 201 is further configured to determine a surface normal corresponding to each pixel point in the first depth image, so as to obtain the surface normal map corresponding to the first image.
Fig. 9 is a schematic structural diagram of the light source determining module 202 in the apparatus for generating an image according to an exemplary embodiment of the present application. The embodiment shown in fig. 9 of the present application is extended based on the embodiment shown in fig. 8 of the present application, and the differences between the embodiment shown in fig. 9 and the embodiment shown in fig. 8 are emphasized below, and the descriptions of the same parts are omitted.
As shown in fig. 9, in the apparatus for generating an image according to an embodiment of the present application, the light source information includes a light source position and a light source intensity, and the light source determining module 202 includes:

an image segmentation unit 202a, configured to perform image segmentation on a first image to obtain a plurality of image sub-regions;

a feature vector determination unit 202b for determining a feature vector for each image sub-region using the surface normal map and the reflection information;

the light source position determining unit 202c is configured to determine a light source position in the first image according to the feature vector of each image sub-region and a preset light source binary classification neural network;

a light source intensity determining unit 202d for determining the light source intensity in the first image according to the first image and the light source position in the first image.
Fig. 10 is a schematic structural diagram of the second image obtaining module 203 in the apparatus for generating an image according to an exemplary embodiment of the present application. The embodiment shown in fig. 10 of the present application is extended based on the embodiment shown in fig. 9 of the present application, and the differences between the embodiment shown in fig. 10 and the embodiment shown in fig. 9 are emphasized below, and the descriptions of the same parts are omitted.
As shown in fig. 10, in the apparatus for generating an image according to the embodiment of the present application, the second image obtaining module 203 includes:

a position limiting unit 203a for limiting the placement position of the preset object model by the surface normal map;

a camera parameter determination unit 203b for determining camera parameters of the first image;
a pixel coordinate determination unit 203c, configured to determine a pixel coordinate of the preset object model according to the camera parameter of the first image and the three-dimensional coordinate of the preset object model;
the second image determining unit 203d is configured to edit and render the first image and the preset object model according to the pixel coordinates, the reflection information, the light source position, and the light source intensity of the preset object model, so as to obtain a second image.
Fig. 11 is a schematic structural diagram of the second depth image obtaining module 204 in the apparatus for generating an image according to an exemplary embodiment of the present application. The embodiment shown in fig. 11 of the present application is extended based on the embodiment shown in fig. 10 of the present application, and the differences between the embodiment shown in fig. 11 and the embodiment shown in fig. 10 are emphasized below, and the descriptions of the same parts are omitted.
As shown in fig. 11, in the apparatus for generating an image according to the embodiment of the present application, the second depth image obtaining module 204 includes:

the depth value determining unit 204a is configured to obtain a depth value of each pixel point in the preset object model according to the three-dimensional coordinates of the preset object model;

the second depth image determining unit 204b is configured to obtain a second depth image according to the first depth image and the depth value of each pixel point in the preset object model.
Fig. 12 is a schematic structural diagram of the pixel coordinate determination unit 203c in the image generation apparatus according to an exemplary embodiment of the present application. The embodiment shown in fig. 12 of the present application is extended based on the embodiment shown in fig. 10 of the present application, and the differences between the embodiment shown in fig. 12 and the embodiment shown in fig. 10 are emphasized below, and the descriptions of the same parts are omitted.
As shown in fig. 12, in the apparatus for generating an image according to the embodiment of the present application, the pixel coordinate determination unit 203c includes:
a reference pixel point setting subunit 203c1, configured to set a reference pixel point of a preset object model;
a data setting subunit 203c2 configured to set pixel coordinates and depth values of the reference pixel points;
a three-dimensional coordinate calculation subunit 203c3, configured to calculate, according to the camera parameter of the first image, the pixel coordinate and the depth value of the reference pixel, the three-dimensional coordinate of the reference pixel by using a preset three-dimensional coordinate calculation formula;
the pixel coordinate calculating subunit 203c4 is configured to calculate, according to the camera parameter of the first image, the three-dimensional coordinate of the reference pixel point, the three-dimensional coordinate of the preset object model, and the relative position between the reference pixel point and each pixel point in the preset object model, the pixel coordinate of each pixel point in the preset object model by using a preset pixel coordinate calculation formula.
Fig. 13 is a schematic structural diagram of an apparatus for generating an image according to another exemplary embodiment of the present application. The embodiment shown in fig. 13 of the present application is extended based on the embodiments shown in fig. 8 to 12 of the present application, and the differences between the embodiment shown in fig. 13 and the embodiments shown in fig. 8 to 12 are emphasized below, and the descriptions of the same parts are omitted.
As shown in fig. 13, the apparatus for generating an image according to an embodiment of the present application further includes:
an adding module 205, configured to add a preset object model in the first image.
It should be understood that fig. 8 to 13 provide the reflection information determining module 201, the light source determining module 202, the second image acquisition module 203, the second depth image acquisition module 204, and the adding module 205 in the apparatus for generating an image. For the operations and functions of the image segmentation unit 202a, the feature vector determination unit 202b, the light source position determination unit 202c, and the light source intensity determination unit 202d included in the light source determination module 202; the position limiting unit 203a, the camera parameter determination unit 203b, the pixel coordinate determination unit 203c, and the second image determination unit 203d included in the second image acquisition module 203; the depth value determination unit 204a and the second depth image determination unit 204b included in the second depth image acquisition module 204; and the reference pixel point setting subunit 203c1, the data setting subunit 203c2, the three-dimensional coordinate calculation subunit 203c3, and the pixel coordinate calculation subunit 203c4 included in the pixel coordinate determination unit 203c, reference may be made to the method for generating an image provided in fig. 1 to 7 described above, and details are not repeated here to avoid repetition.
Exemplary Electronic Device
FIG. 14 illustrates a block diagram of an electronic device in accordance with an embodiment of the present application.
As shown in fig. 14, the electronic device 11 includes one or more processors 11a and a memory 11b.

The processor 11a may be a Central Processing Unit (CPU) or other form of processing unit having data processing capabilities and/or instruction execution capabilities, and may control other components in the electronic device 11 to perform desired functions.

The memory 11b may include one or more computer program products that may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. The volatile memory may include, for example, Random Access Memory (RAM), cache memory (cache), and/or the like. The non-volatile memory may include, for example, Read Only Memory (ROM), a hard disk, flash memory, and the like. One or more computer program instructions may be stored on the computer-readable storage medium and executed by the processor 11a to implement the method of generating an image of the various embodiments of the present application described above and/or other desired functions. Various contents such as an input signal, a signal component, a noise component, etc. may also be stored in the computer-readable storage medium.

In one example, the electronic device 11 may further include: an input device 11c and an output device 11d, which are interconnected by a bus system and/or other form of connection mechanism (not shown).

For example, the input device 11c may be a camera or a microphone, a microphone array, or the like as described above, for capturing an input signal of an image or a sound source. When the electronic device is a stand-alone device, the input device 11c may be a communication network connector for receiving the acquired input signals from the neural network processor.

Further, the input device 11c may include, for example, a keyboard, a mouse, and the like.

The output device 11d can output various information, including the determined output voltage, output current information, and the like, to the outside. The output device 11d may include, for example, a display, speakers, a printer, and a communication network and its connected remote output devices, among others.

Of course, for the sake of simplicity, only some of the components related to the present application in the electronic device 11 are shown in fig. 14, and components such as a bus, an input/output interface, and the like are omitted. In addition, the electronic device 11 may include any other suitable components, depending on the particular application.
Exemplary Computer Program Product and Computer-Readable Storage Medium
In addition to the above-described methods and apparatus, embodiments of the present application may also be a computer program product comprising computer program instructions that, when executed by a processor, cause the processor to perform the steps in the method for generating an image according to various embodiments of the present application described in the "Exemplary Method" section of this specification above.
The computer program product may be written with program code for performing the operations of embodiments of the present application in any combination of one or more programming languages, including object oriented programming languages such as Java and C++, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server.
Furthermore, embodiments of the present application may also be a computer-readable storage medium having stored thereon computer program instructions that, when executed by a processor, cause the processor to perform the steps in the method for generating an image according to various embodiments of the present application described in the "Exemplary Method" section above in this specification.
The computer-readable storage medium may take any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may include, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The foregoing describes the general principles of the present application in conjunction with specific embodiments, however, it is noted that the advantages, effects, etc. mentioned in the present application are merely examples and are not limiting, and they should not be considered essential to the various embodiments of the present application. Furthermore, the foregoing disclosure of specific details is for the purpose of illustration and description and is not intended to be limiting, since the foregoing disclosure is not intended to be exhaustive or to limit the disclosure to the precise details disclosed.
The block diagrams of devices, apparatuses, systems referred to in this application are only given as illustrative examples and are not intended to require or imply that the connections, arrangements, configurations, etc. must be made in the manner shown in the block diagrams. These devices, apparatuses, systems may be connected, arranged, configured in any manner, as will be appreciated by those skilled in the art. Words such as "including," "comprising," "having," and the like are open-ended words that mean "including, but not limited to," and are used interchangeably therewith. The word "or" as used herein means, and is used interchangeably with, the term "and/or," unless the context clearly dictates otherwise. The phrase "such as" is used herein to mean, and is used interchangeably with, the phrase "such as but not limited to".
It should also be noted that in the devices, apparatuses, and methods of the present application, the components or steps may be decomposed and/or recombined. These decompositions and/or recombinations are to be considered as equivalents of the present application.
The previous description of the disclosed aspects is provided to enable any person skilled in the art to make or use the present application. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects without departing from the scope of the application. Thus, the present application is not intended to be limited to the aspects shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
The foregoing description has been presented for purposes of illustration and description. Furthermore, the description is not intended to limit embodiments of the application to the form disclosed herein. While a number of example aspects and embodiments have been discussed above, those of skill in the art will recognize certain variations, modifications, alterations, additions and sub-combinations thereof.