Detailed Description
In order to make the objects and advantages of the embodiments of the present invention easier to understand, the technical solutions of the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings. It is apparent that the described embodiments are only some embodiments of the present invention, not all of them; the following detailed description of the embodiments of the present invention with the accompanying drawings does not limit the scope of the claimed invention, but merely represents selected embodiments of the present invention. All other embodiments obtained by those skilled in the art based on the embodiments of the present invention without making any inventive effort fall within the scope of protection of the present invention.
It should be noted that, provided no conflict arises, the technical features of the embodiments of the present invention described below may be combined with each other, and all such combinations fall within the protection scope of the present invention. In addition, although functional blocks are divided in an apparatus or structural diagram and a logical order is shown in a flowchart, in some cases the steps shown or described may be performed with a block division different from that of the apparatus, or in an order different from that of the flowchart. Furthermore, the terms "first," "second," "third," and similar expressions used herein do not limit the data or the execution order; they merely facilitate description and distinguish identical or similar items that have substantially the same function and effect, and do not indicate or imply relative importance or a number of technical features.
Unless defined otherwise, technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. The terminology used in the description is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. It should be understood that the term "and/or" as used in this specification includes any and all combinations of one or more of the associated listed items.
In the semiconductor manufacturing industry, automated optical inspection (Automated Optical Inspection, AOI) equipment is often required to detect and analyze problems with the appearance, quality, and performance of products. The optical machine and the camera are the optical components of the AOI equipment, and most optical imaging modules of current AOI equipment adopt a single top camera to inspect an object to be detected on a bottom track. However, some scenes cannot be covered by the top camera alone: the field of view (FOV) of a single top camera is limited, so when the area of the detection object is large, side cameras need to be added to shoot from the side and acquire information that the top main camera cannot capture.
In order that the optical imaging module of the AOI device can cover as large a detection area as possible, designers often set the main camera and the side cameras at different heights, thereby saving space in the transverse direction. Also, since the side cameras must not block the view of the main camera, they need to be obliquely arranged outside the optical path of the main camera. These two points result in the coordinate system of the main camera view being completely decoupled from the coordinate systems of the respective side camera views. As a consequence, at a later stage in the detection link, the position of a defect detected in the main camera cannot be related to the position of the same defect detected in a side camera. For example, the user may want to check whether an element seen in the main camera view has a cold solder joint or other welding defect; multiple elements are visible in a side camera, but the user cannot tell which of them corresponds to the element seen in the main camera view. Because the main camera view is not related to each side camera view, the relative positions of the main camera and each side camera cannot be determined, detection deviation easily occurs, and the user experience is poor.
In view of this, the embodiment of the present invention provides a multi-camera view fusion method: a main camera and a plurality of side cameras capture the same calibration plate to obtain a main camera image and a plurality of side camera images; the main camera image and the plurality of side camera images are preprocessed, and the effective dots and the dot information of the effective dots in the preprocessed main camera image and side camera images are identified; the positioning points in the main camera image and the plurality of side camera images are identified based on those effective dots; the real resolutions of the main camera and of the plurality of side cameras are calculated; a first relative position and a plurality of second relative positions are calculated from these real resolutions; the mapping relation between the FOV of the main camera and the FOVs of the plurality of side cameras is calculated from the first relative position and the plurality of second relative positions; and view fusion between the main camera and each side camera is performed based on the mapping relation. In this way, view fusion between the main camera and each side camera is realized, defects detected in the side camera views can be clearly related to positions in the main camera view, and the effective field-of-view detection range is increased.
Referring to fig. 1, fig. 1 schematically illustrates an application scenario of a multi-camera view fusion method according to some embodiments of the present invention.
As shown in fig. 1, the application scenario includes an electronic device 100. It will be appreciated that, to implement multi-camera view fusion, the electronic device 100 may also need to cooperate with other software and hardware, including, for example, a camera, image processing software, a display device, and a storage device; those skilled in the art may add, delete or change the required software and hardware according to actual needs and have them cooperate with the electronic device 100 to implement multi-camera view fusion. It should be understood that these cooperating hardware and software may be configured on the electronic device 100 according to actual needs, or may exist independently of the electronic device 100.
It should be understood that in the application scenario shown in fig. 1, the electronic device 100 is a desktop computer, but it does not impose any limitation on the structure, type, and number of electronic devices in other application scenarios. For example, in other embodiments, the electronic device may also be a notebook, tablet, or other suitable type of device, or may also be a server, such as a server deployed in the cloud.
For example, referring to fig. 2a, fig. 2a illustrates the positional relationship of the main camera and the other four side cameras in some embodiments of the present invention. Fig. 2a shows a calibration plate 200, a main camera 300, and four side cameras, namely a first side camera 301, a second side camera 302, a third side camera 303, and a fourth side camera 304. The first side camera 301, the second side camera 302, the third side camera 303, and the fourth side camera 304 are disposed below the main camera 300 and located outside the optical path of the main camera 300, so as not to interfere with the main camera 300 photographing the calibration plate 200, the element to be detected, or other objects.
Referring to fig. 3, in some embodiments, the calibration plate 200 includes a calibration pattern array 210, wherein the calibration pattern array 210 includes a positioning point 211 and calibration points 212. The positioning point 211 is disposed at the center of the calibration pattern array 210, so as to determine the relative positions of the center of the main camera's view and the center of each side camera's view, and thereby perform view fusion of the main camera and each side camera. It should be understood that fig. 3 only schematically illustrates the positions of the positioning point 211 and the calibration points 212 of the calibration pattern array 210 in the calibration plate 200; it does not limit the positions, shapes, arrangements, or numbers of the positioning points and calibration points in other embodiments, as long as the main camera and the side cameras can capture the positioning point 211.
In performing the multi-camera view fusion, the main camera and the plurality of side cameras each photograph the same calibration plate to obtain a main camera image and a plurality of side camera images. For example, the calibration plate 200 may be photographed using the main camera 300, the first side camera 301, the second side camera 302, the third side camera 303, and the fourth side camera 304 shown in fig. 2a, so as to obtain a main camera image and four side camera images. Referring to fig. 4a to 4e, fig. 4a to 4e show the main camera image and the four side camera images obtained by photographing with the main camera 300, the first side camera 301, the second side camera 302, the third side camera 303, and the fourth side camera 304, respectively. The main camera image shown in fig. 4a is marked with the camera field-of-view center, shown as a gray line in fig. 4a.
It will be appreciated that the camera view in the main camera image is not distorted, because the main camera is positioned vertically above the target location where the component to be detected is placed, such as the calibration plate 200 shown in fig. 2a, whereas the camera views in the four side camera images suffer from depth-of-field problems, because the four side cameras are not positioned vertically above that target location. Therefore, each of the four side cameras is additionally fitted with a lens that redirects its optical axis. With such a lens mounted on a side camera, the direction of the optical axis of the lens can be changed so that the optical axis of the obliquely arranged side camera is perpendicular to the object surface; the entire captured side image can then be brought into focus without defocus, the depth-of-field problem caused by the camera inclination is solved, and the detection effect is not affected.
Specifically, the electronic device 100 serves as the device providing computing and control capabilities. After the main camera image and the plurality of side camera images are captured by the main camera and the plurality of side cameras, the electronic device 100 preprocesses the main camera image and the plurality of side camera images, identifies the effective dots in the preprocessed main camera image and side camera images, identifies the positioning points in the main camera image and the plurality of side camera images based on those effective dots, calculates the real resolutions of the main camera and of the plurality of side cameras, and calculates a first relative position and a plurality of second relative positions from these real resolutions, wherein the first relative position is the relative position between the positioning point of the main camera image and the FOV center of the main camera, and each second relative position is the relative position between the positioning point of a side camera image and the FOV center of that side camera. Finally, the electronic device 100 calculates the mapping relation between the FOV of the main camera and the FOVs of the plurality of different side cameras based on the first relative position and the plurality of different second relative positions, and performs view fusion between the main camera and the plurality of side cameras based on the mapping relation, so that the views of the main camera and each side camera are fused, defects detected in the side camera views can be clearly related to positions in the main camera view, and the field-of-view detection range is increased.
In order to facilitate understanding of the multi-camera view fusion method provided by the embodiment of the present invention, first, an electronic device provided by the embodiment of the present invention is described in detail.
Referring to fig. 5, fig. 5 schematically illustrates a structural diagram of an electronic device according to some embodiments of the present invention.
As shown in fig. 5, the electronic device 100 includes at least one processor 110 and a memory 120 that are communicatively coupled; one processor 110 is taken as an example in fig. 5. The various components in the electronic device 100 are coupled together by a bus system 130, which is used to enable connection and communication between these components. It will be readily appreciated that the bus system 130 may include a power bus, a control bus, a status signal bus, and the like, in addition to the data bus; for clarity and conciseness, the various buses are all labeled as the bus system 130 in fig. 5. It will be appreciated that the configuration shown in the embodiment of fig. 5 is merely illustrative and does not limit the configuration of the electronic device in any way. For example, the electronic device may also include more or fewer components than shown in fig. 5, or have a different configuration from that shown in fig. 5.
Specifically, the processor 110 is configured to provide computing and control capabilities to control the electronic device 100 to perform corresponding tasks, for example, to control the electronic device 100 to perform any of the multi-camera view fusion methods provided in the embodiments of the present invention, or to perform the steps in any possible implementation of any of those methods. Those skilled in the art will appreciate that the processor 110 may be a general-purpose processor, including a central processing unit (Central Processing Unit, CPU), a network processor (Network Processor, NP), etc.; or it may be a digital signal processor (Digital Signal Processor, DSP), an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (Field-Programmable Gate Array, FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components.
The memory 120, as a non-transitory computer-readable storage medium, may be used to store non-transitory software programs, non-transitory computer-executable programs, instructions, and modules, for example those corresponding to the multi-camera view fusion method in the embodiments of the present invention. In some embodiments, the memory 120 may include a program storage area, which may store an operating system and the applications required for at least one function, and a data storage area, which may store data created from the use of the processor 110, etc. The processor 110 executes the various functional applications and data processing of the electronic device 100 by running the non-transitory software programs, instructions and modules stored in the memory 120, so as to implement any of the multi-camera view fusion methods provided by the embodiments of the present invention, or to perform the steps in any possible implementation of any of those methods. In some embodiments, the memory 120 may include high-speed random access memory and may also include non-transitory memory, such as at least one disk storage device, flash memory device, or other non-transitory solid-state storage device. In some embodiments, the memory 120 may also include memory located remotely from the processor 110, which may be connected to the processor 110 through a communication network. It is understood that examples of the above communication networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks, and combinations thereof.
It will be appreciated from the foregoing that any of the multi-camera view fusion methods provided by the embodiments of the present invention may be implemented by any suitable type of electronic device having certain computing and control capabilities, for example the electronic device 100 described above. In some possible implementations, any of the multi-camera view fusion methods provided by the embodiments of the present invention may be implemented by a processor executing computer program instructions stored in a memory.
The multi-camera view fusion method provided by the embodiment of the invention will be described in detail below in connection with exemplary applications and implementations of the electronic device provided by the embodiment of the invention.
Referring to fig. 6, fig. 6 schematically illustrates a flowchart of a multi-camera view fusion method according to some embodiments of the present invention.
It will be appreciated by those skilled in the art that the multi-camera view fusion method provided in the embodiments of the present invention may be applied to the electronic device (e.g., the electronic device 100) described above. Specifically, the execution subject of the multi-camera view fusion method is one or at least two processors of the electronic device.
As shown in fig. 6, the multi-camera view fusion method includes, but is not limited to, the following steps S100-S700:
S100, shooting the same calibration plate with the main camera and the plurality of side cameras to obtain a main camera image and a plurality of side camera images.
In the embodiment of the invention, the calibration plate is provided with a positioning block and a plurality of calibration blocks, the positioning block and the calibration blocks form a calibration pattern array, and the positioning block is positioned at the center of the calibration pattern array. The shape of the calibration block can be circular, rectangular or square, the shape of the positioning block can be annular, diamond-shaped, and the like, and the positioning block is different from the calibration block in shape. It is easy to understand that the distribution shapes of the positioning blocks and the calibration blocks (namely, the distribution shapes of the calibration pattern arrays) on the calibration plate can be rectangular, circular or square and other regular shapes, and also can be diamond, oval or triangular and other irregular shapes.
The calibration plate is fixed at the target position on a stable support to ensure that it does not shake or deform during shooting. The calibration plate should be of moderate size and lie within the field of view of the main camera and of the side cameras at the same time, and a reasonable illumination intensity should be set to avoid reflections and shadows, thereby improving the image definition and the identification accuracy of the positioning point and the calibration points.
Specifically, the main camera and the plurality of side cameras are fixedly mounted at predetermined positions, ensuring that the respective cameras do not move during photographing. After receiving the shooting instructions, the main camera and the side cameras photograph the calibration plate arranged at the target position to acquire a main camera image and a plurality of side camera images; in this way, shooting is performed by the main camera and the side cameras based on the same calibration plate. The main camera image and the plurality of side camera images contain circular or approximately circular shadow areas (i.e., calibration points) formed by the calibration blocks and an annular or approximately annular shadow area (i.e., the positioning point) formed by the positioning block.
S200, preprocessing the main camera image, identifying effective dots in the main camera image and dot information of the effective dots after preprocessing, and calculating the real resolution of the main camera.
In some embodiments, preprocessing the main camera image means removing the color information from the main camera image and keeping only the brightness information, so that the main camera image is converted into a gray image in which each pixel has only one gray value; Gaussian filtering is then used to smooth the gray image and remove noise; finally, binarization is performed on the denoised gray image to convert it into a pure black-and-white image (that is, a binary image containing only black and white), yielding the preprocessed main camera image. Referring to fig. 7a, fig. 7a schematically illustrates the preprocessed main camera image obtained by preprocessing the main camera image shown in fig. 4a.
In some embodiments, a circle detection algorithm (e.g., a Hough transform algorithm) is used to detect the dots in the preprocessed main camera image and extract the identified detected dots. The Hough transform algorithm is an image processing algorithm and is used for identifying dots in an image and returning the center coordinates and the radius of the dots. In some embodiments, any other suitable algorithm may be used to detect dots in the preprocessed main camera image, which the embodiments of the present invention do not limit in any way.
In some embodiments, for each extracted dot, whether the dot is valid is determined according to preset rules: if so, the dot is determined to be an effective dot and retained; if not, the dot is determined to be an invalid dot and is removed or otherwise processed. The preset rules include that the radius of the dot is within a preset range, that the center position of the dot conforms to the layout of the calibration pattern array, and the like; of course, a person skilled in the art may add, modify or delete preset rules according to actual needs, and the embodiment of the present invention does not impose any limitation on this.
In some embodiments, after the effective dots are screened from the extracted dots, the dot information of the effective dots is obtained by using a circle detection algorithm (such as the Hough transform algorithm). The dot information comprises the centroid and the radius of each effective dot, where the centroid of an effective dot is its center coordinate; since the Hough transform algorithm returns the center coordinates and the radius of each effective dot, the dot information of the effective dots can be obtained directly from it.
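As an aid to understanding, the following is a minimal sketch, in Python with OpenCV, of how such dot detection and dot-information extraction could be performed with the Hough transform; the parameter values and the radius range used as the preset rule are illustrative assumptions rather than values prescribed by the embodiment.

```python
# Hedged sketch: Hough-transform dot detection on the preprocessed main camera
# image (a single-channel 8-bit array). All numeric parameters are assumptions.
import cv2
import numpy as np

def detect_valid_dots_hough(gray, min_radius=5, max_radius=50):
    # cv2.HoughCircles returns, for each detected circle, its center
    # coordinates (x, y) and radius r, i.e., the dot information described above.
    circles = cv2.HoughCircles(
        gray, cv2.HOUGH_GRADIENT,
        dp=1,          # accumulator resolution equal to the image resolution
        minDist=20,    # minimum distance between circle centers
        param1=100,    # upper Canny edge threshold
        param2=15,     # accumulator threshold; lower values detect more circles
        minRadius=min_radius, maxRadius=max_radius,
    )
    valid_dots = []
    if circles is not None:
        for x, y, r in np.round(circles[0]).astype(int):
            # Example preset rule: keep only dots whose radius lies in the range.
            if min_radius <= r <= max_radius:
                valid_dots.append({"center": (int(x), int(y)), "radius": int(r)})
    return valid_dots
```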
In some embodiments, a true resolution of the primary camera is calculated from dot information of the valid dots, wherein the true resolution of the primary camera includes an X-axis resolution and a Y-axis resolution of the primary camera. In the embodiment of the invention, the dot information of the effective dot comprises the actual diameter of the effective dot, wherein the round calibration block on the calibration plate has a known actual size in an actual physical space, the actual size comprises the actual diameter of the round calibration block, and the actual diameter of the round calibration block is the actual diameter of the effective dot.
Illustratively, the circumscribed rectangles of all the effective dots are obtained, and the widths (in units of pixels) of the circumscribed rectangles of all the effective dots are summed to obtain a sum of the widths of the circumscribed rectangles. And calculating the total number of all the effective dots, obtaining the diameter of the dot in the X direction based on the sum of the widths of the circumscribed rectangles and the total number of the effective dots, and calculating and obtaining the X-axis resolution of the main camera according to the diameter of the dot in the X direction and the actual diameter of the effective dots. And summing the heights (taking pixels as units) of the external rectangles of all the effective dots to obtain the sum of the heights of the external rectangles, obtaining the diameter of the dots in the Y direction based on the sum of the heights of the external rectangles and the total number of the effective dots, and calculating and obtaining the Y-axis resolution of the main camera according to the diameter of the dots in the Y direction and the actual diameter of the effective dots, so as to obtain the actual resolution of the main camera.
Illustratively, in some embodiments, the primary camera image is preprocessed, specifically including, but not limited to, the following steps S210-S220:
S210, converting the input main camera image into a single-channel gray scale image.
Specifically, the main camera image input to the electronic device is preprocessed, that is, an image processing library (such as OpenCV) is used to remove color information from the main camera image, so that the number of channels of the main camera image is converted from three channels (such as RGB channels) to a single channel, and a single-channel gray scale map is obtained, where each pixel in the single-channel gray scale map has only one gray scale value.
S220, performing threshold segmentation on the converted main camera image to convert it into black and white.
In the embodiment of the invention, a threshold segmentation algorithm is adopted to perform threshold segmentation on the converted main camera image (namely the single-channel gray image), so that the main camera image is converted into the two colors black and white; that is, binarization is performed on the converted main camera image, and the single-channel gray image is converted into a pure black-and-white image (namely a binary image containing only black and white), yielding the preprocessed main camera image in the form of a binary image.
It is understood that the threshold segmentation algorithm includes, but is not limited to, a binary thresholding algorithm, the OTSU algorithm, and the TOZERO algorithm. In some embodiments, Gaussian blur is used to remove noise points and smooth the image, so that the features become more obvious and a higher-quality preprocessed main camera image is obtained, thereby reducing interference with feature extraction.
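The preprocessing of steps S210-S220 could, for example, be realized with OpenCV as in the minimal sketch below; the blur kernel size and the choice of Otsu thresholding are illustrative assumptions (a fixed-threshold THRESH_BINARY or THRESH_TOZERO mode are alternatives, as noted above).

```python
# Hedged preprocessing sketch corresponding to steps S210-S220.
import cv2

def preprocess(bgr_image):
    # S210: remove color information, converting three channels to a single-channel gray image.
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    # Gaussian blur to smooth the gray image and suppress noise.
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)
    # S220: threshold segmentation to obtain a pure black-and-white binary image.
    _, binary = cv2.threshold(blurred, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return binary
```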
In some embodiments, the dot information identifying the valid dots and valid dots in the main camera image after preprocessing includes, but is not limited to, the following steps S230-S240:
S230, performing connected-domain analysis on the preprocessed main camera image, finding the median of the areas of all the connected domains, and setting an area difference threshold according to the dot-identification requirements.
S240, traversing all the connected domains, calculating the four-corner coordinates of the circumscribed rectangle of each connected domain, and judging all the connected domains based on the area difference threshold and the four-corner coordinates, so as to identify all the effective dots and the dot information of the effective dots in the main camera image.
In the embodiment of the invention, after the main camera image is preprocessed, connected-domain analysis is performed on the preprocessed main camera image using a connected-domain analysis algorithm, all connected domains in the preprocessed main camera image are detected and identified, and all the identified connected domains are traversed to obtain the connected-domain information. The connected-domain information includes the total number of connected domains and, for each connected domain, its centroid coordinates, its area, and the upper-left corner coordinates, width and height of its circumscribed rectangle. It is understood that the connected-domain analysis algorithm includes, but is not limited to, the Spaghetti algorithm, the SAUF algorithm, and the BBDT algorithm.
In some embodiments, the area of a connected domain is the total number of pixels in the connected domain, and this total is calculated using the cv2.contourArea() function in the OpenCV library or another similar method, thereby obtaining the area of the connected domain. The areas of all the connected domains are then arranged in ascending order to obtain an area sequence, and the area value at the middle position of the sequence is selected as the area median; if there are two area values at the middle position, the area median is taken as the average of those two values. In this way, the median of the areas of all the connected domains is obtained.
In the embodiment of the invention, the area difference threshold value is set according to the requirements of the identification dots (such as the number of the dots, the size of the dots, the median of the area and the like), wherein the area difference threshold value is used for identifying and judging whether the identified connected domain is qualified or not. In some embodiments, the area difference threshold may be set according to a preset multiple of the median of the areas of the connected domains for distinguishing between valid and invalid connected domains. For example, the area difference threshold is 1.2 times, 1.5 times or other suitable times of the median of the area, and those skilled in the art can set the area difference threshold according to actual requirements, which is not limited in any way in the embodiments of the present invention.
In some embodiments, the centroid coordinate of a connected domain is its geometric center, and the moments of the connected domain can be calculated using the cv2.moments() function in the OpenCV library to obtain the centroid coordinates of the connected domain.
In some embodiments, the circumscribed rectangle of a connected domain is defined by the upper-left corner coordinates of the rectangle together with its width and height. The coordinates and size (including the width and the height) of the circumscribed rectangle of a connected domain can be obtained using the cv2.boundingRect() function of the OpenCV library, thereby obtaining the circumscribed rectangle of the connected domain.
In the embodiment of the invention, the four-corner coordinates of the circumscribed rectangle comprise the left upper corner coordinate, the right upper corner coordinate, the left lower corner coordinate and the right lower corner coordinate of the circumscribed rectangle.
Specifically, after traversing all the identified connected domains to obtain the upper-left corner coordinates, width and height of the circumscribed rectangle of each connected domain, the upper-right corner, lower-left corner and lower-right corner coordinates of the circumscribed rectangle of each connected domain are calculated from its upper-left corner coordinates, width and height, so as to obtain the four-corner coordinates of the circumscribed rectangle of each connected domain. It will be appreciated that the upper-right corner of the circumscribed rectangle has the same ordinate as the upper-left corner, so when calculating the upper-right corner coordinates, the abscissa of the upper-left corner is added to the width of the circumscribed rectangle to obtain the abscissa of the upper-right corner, thereby obtaining the upper-right corner coordinates. Similarly, the lower-left corner has the same abscissa as the upper-left corner, so when calculating the lower-left corner coordinates, the ordinate of the lower-left corner is obtained by offsetting the ordinate of the upper-left corner by the height of the circumscribed rectangle, thereby obtaining the lower-left corner coordinates. Similarly, after the upper-right corner and/or the lower-left corner coordinates are calculated, the lower-right corner has the same abscissa as the upper-right corner, so the ordinate of the lower-right corner is obtained by offsetting the ordinate of the upper-right corner by the height of the circumscribed rectangle; or, since the lower-right corner has the same ordinate as the lower-left corner, the abscissa of the lower-right corner is obtained by adding the width of the circumscribed rectangle to the abscissa of the lower-left corner, thereby obtaining the lower-right corner coordinates.
Then, the upper-left, upper-right, lower-left and lower-right corner coordinates of the circumscribed rectangle of each connected domain are compared with the edges of the main camera image. If any one of these four corner coordinates exceeds an edge of the main camera image or lies on an edge of the main camera image, the connected domain is an incomplete connected domain; an incomplete connected domain is an invalid dot and is filtered out.
After the incomplete connected domains are filtered out, each of the remaining complete connected domains is filtered according to the area difference threshold to obtain the qualified connected domains, where the qualified connected domains are the effective dots and the connected-domain information of the qualified connected domains is the dot information of the effective dots.
Specifically, for each complete connected domain, the difference between its area and the area median is computed to obtain an area difference value, and the area difference value is compared with the area difference threshold. If the area difference value is smaller than or equal to the area difference threshold, the connected domain is a qualified connected domain (i.e., an effective dot); if the area difference value is greater than the area difference threshold, the connected domain is an unqualified connected domain (i.e., an invalid dot). The connected-domain information of the qualified connected domains obtained by this filtering is taken as the dot information of the effective dots, so that all the effective dots and the dot information of the effective dots in the main camera image are identified and obtained.
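The following is a hedged sketch of steps S230-S240 using OpenCV's connected-component analysis. OpenCV treats white (non-zero) pixels as the foreground, so the binary image passed in is assumed to have been inverted with cv2.bitwise_not() if the dots are black; the 1.5x multiple used for the area difference threshold is an illustrative assumption.

```python
# Hedged sketch: identify effective dots via connected-domain analysis,
# edge filtering of incomplete domains, and area-median filtering.
import cv2
import numpy as np

def find_valid_dots(dots_white):
    img_h, img_w = dots_white.shape[:2]
    num, labels, stats, centroids = cv2.connectedComponentsWithStats(dots_white, connectivity=8)
    if num <= 1:
        return []  # no foreground connected domains found

    # stats[i] = (left, top, width, height, area); label 0 is the background.
    areas = stats[1:, cv2.CC_STAT_AREA]
    area_median = float(np.median(areas))
    area_diff_threshold = 1.5 * area_median   # preset multiple of the area median (assumption)

    valid_dots = []
    for i in range(1, num):
        left, top, w, h, area = stats[i]
        right, bottom = left + w, top + h     # remaining corners of the circumscribed rectangle
        # Incomplete connected domain: the circumscribed rectangle touches or exceeds the image edge.
        if left <= 0 or top <= 0 or right >= img_w or bottom >= img_h:
            continue
        # Filter by the difference between the domain area and the area median.
        if abs(float(area) - area_median) > area_diff_threshold:
            continue
        valid_dots.append({
            "centroid": tuple(centroids[i]),  # geometric center of the connected domain
            "bbox": (int(left), int(top), int(w), int(h)),
        })
    return valid_dots
```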
Illustratively, in some embodiments, the true resolution of the primary camera is calculated, including, but not limited to, the following steps S250-S280:
S250, summing the widths of the circumscribed rectangles of all the effective dots to obtain the sum of the circumscribed widths.
S260, acquiring the X-direction dot diameter based on the sum of the circumscribed widths and the total number of effective dots, and calculating the X-axis resolution of the main camera from the X-direction dot diameter and the actual diameter.
S270, summing the heights of the circumscribed rectangles of all the effective dots to obtain the sum of the circumscribed heights.
S280, acquiring the Y-direction dot diameter based on the sum of the circumscribed heights and the total number of effective dots, and calculating the Y-axis resolution of the main camera from the Y-direction dot diameter and the actual diameter.
In the embodiment of the invention, the dot information comprises the actual diameter of the effective dot, the width and the height of the circumscribed rectangle, and the actual resolution of the main camera comprises the X-axis resolution and the Y-axis resolution of the main camera.
In some embodiments, the real resolution of the main camera is calculated from the dot information. That is, the widths (in pixels) of the circumscribed rectangles of all the effective dots are summed to obtain the sum of the circumscribed widths. The total number of effective dots is counted, the sum of the circumscribed widths is divided by the total number of effective dots to obtain the X-direction dot diameter (namely the diameter of an effective dot in the X-axis direction, in pixels), and the actual diameter of the effective dots is divided by the X-direction dot diameter to obtain the X-axis resolution of the main camera.
Similarly, the heights (in pixels) of the circumscribed rectangles of all the effective dots are summed to obtain the sum of the circumscribed heights. The sum of the circumscribed heights is divided by the total number of effective dots to obtain the Y-direction dot diameter (namely the diameter of an effective dot in the Y-axis direction, in pixels), and the actual diameter of the effective dots is divided by the Y-direction dot diameter to obtain the Y-axis resolution of the main camera, thereby obtaining the real resolution of the main camera. In this way, a pixel distance multiplied by the resolution yields a distance in physical-world units.
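A minimal sketch of steps S250-S280 is shown below, reusing the bounding-box widths and heights of the effective dots returned by the earlier connected-domain sketch. The resolution is expressed here as physical length per pixel (actual dot diameter divided by the average dot diameter in pixels), so that multiplying a pixel distance by the resolution yields a physical-world distance as used in steps S440-S450; the actual diameter value is an assumed calibration-plate parameter.

```python
# Hedged sketch: estimate the X-axis and Y-axis resolution of the camera.
def camera_resolution(valid_dots, actual_diameter_mm=1.0):
    total = len(valid_dots)
    sum_width = sum(d["bbox"][2] for d in valid_dots)    # sum of circumscribed-rectangle widths (pixels)
    sum_height = sum(d["bbox"][3] for d in valid_dots)   # sum of circumscribed-rectangle heights (pixels)

    dot_diameter_x = sum_width / total                   # X-direction dot diameter (pixels)
    dot_diameter_y = sum_height / total                  # Y-direction dot diameter (pixels)

    res_x = actual_diameter_mm / dot_diameter_x          # X-axis resolution (mm per pixel)
    res_y = actual_diameter_mm / dot_diameter_y          # Y-axis resolution (mm per pixel)
    return res_x, res_y
```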
S300, identifying positioning points of the main camera image based on the effective dots in the main camera image.
In the embodiment of the invention, the main camera image contains the positioning point and the calibration points, and the qualified positioning point and calibration points (namely the effective dots) are obtained after the unqualified ones are removed. As can be seen from the foregoing description, the positioning point and the calibration points differ in shape: the positioning point is annular and the calibration points are circular, so the positioning point and the calibration points also differ in color and shape in the preprocessed main camera image. For example, referring to fig. 7a, in the preprocessed main camera image shown in fig. 7a, the positioning point appears as a white circle and the calibration points appear as black dots.
Specifically, after the effective dots in the main camera image are identified, the positioning point of the main camera image is identified according to the position, shape and color of each effective dot in the image coordinate system, and the X-axis and Y-axis coordinates of the positioning point of the main camera image are acquired.
Illustratively, in some embodiments, the positioning point of the main camera image is identified based on the effective dots in the main camera image, including, but not limited to, the following steps S310-S350:
S310, setting the foreground of the preprocessed main camera image to white, and performing connected-domain analysis on the main camera image to obtain the upper-left corner coordinates of the circumscribed rectangle of each connected domain as well as the width and the height of the circumscribed rectangle.
S320, calculating the upper-right corner, lower-right corner and lower-left corner coordinates of the circumscribed rectangle of the first connected domain from the upper-left corner coordinates, width and height of its circumscribed rectangle.
S330, judging whether any one of the upper-left, lower-left, upper-right and lower-right corner coordinates of the circumscribed rectangle of the first connected domain touches the edge of the main camera image.
S340, when the judgment result is yes, indicating that the second connected domain is the positioning point.
S350, when the judgment result is no, indicating that the first connected domain is the positioning point.
In the embodiment of the invention, the background and the positioning point in the preprocessed main camera image are white, and the calibration points are black dots. If the adopted connected-domain analysis algorithm regards black as the foreground, the foreground of the preprocessed main camera image needs to be set to white; that is, the preprocessed main camera image is inverse-color processed so that the calibration points become white while the background and the positioning point become black. If the adopted connected-domain analysis algorithm regards white as the foreground, the preprocessed main camera image does not need inverse-color processing. The connected-domain analysis algorithm adopted by the embodiment of the invention regards black as the foreground, so the preprocessed main camera image is inverse-color processed, converting the calibration points to white and the background and the positioning point to black; at this point, only two connected domains remain to be analyzed in the main camera image, namely the positioning point and the background. Referring to fig. 7b, fig. 7b shows the main camera image obtained by preprocessing and inverse-color processing the main camera image shown in fig. 4a; the foreground of the main camera image shown in fig. 7b is white.
Specifically, a connected-domain analysis algorithm is used to analyze the preprocessed and inverse-color-processed main camera image, so as to obtain the upper-left corner coordinates of the circumscribed rectangle of each connected domain as well as the width and the height of the circumscribed rectangle.
It can be understood that the main camera image after preprocessing and inverse-color processing has only two connected domains: one is the positioning point and the other is the background area. Therefore, the positioning point can be identified and determined from the connected-domain information of these two connected domains; specifically, the upper-right corner, lower-right corner and lower-left corner coordinates of the circumscribed rectangle of the first connected domain are calculated using the upper-left corner coordinates, width and height of its circumscribed rectangle.
After the upper-left, lower-left, upper-right and lower-right corner coordinates of the circumscribed rectangle of the first connected domain are obtained by calculation, they are compared with the edges of the main camera image to judge whether any one of them touches an edge of the main camera image. If any one of the four corner coordinates of the circumscribed rectangle of the first connected domain touches the edge of the main camera image, that is, if the judgment result is yes, the first connected domain is not the positioning point and the second connected domain is the positioning point, so the positioning point of the main camera image is identified and determined. If none of the four corner coordinates of the circumscribed rectangle of the first connected domain touches the edge of the main camera image, that is, if the judgment result is no, the first connected domain is the positioning point and the second connected domain is not, so the positioning point of the main camera image is identified and determined, and the X-axis and Y-axis coordinates of the positioning point of the main camera image are acquired.
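A hedged sketch of this two-connected-domain logic (steps S310-S350) is given below. OpenCV's connected-component routine treats white (non-zero) pixels as the foreground, so the preprocessed image, in which the background and the positioning point are white, can be analyzed directly; if the analysis routine used instead treats black as the foreground, invert the image first with cv2.bitwise_not(), as described above.

```python
# Hedged sketch: of the white connected domains, the one whose circumscribed
# rectangle does not touch the image edge is taken as the positioning point.
import cv2

def find_positioning_point(binary):
    img_h, img_w = binary.shape[:2]
    num, labels, stats, centroids = cv2.connectedComponentsWithStats(binary, connectivity=8)

    for i in range(1, num):                              # label 0 corresponds to the black pixels
        left, top, w, h = stats[i][:4]
        touches_edge = (left <= 0 or top <= 0 or
                        left + w >= img_w or top + h >= img_h)
        if not touches_edge:
            return tuple(centroids[i])                   # (X, Y) coordinates of the positioning point
    return None                                          # no positioning point found
```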
S400, calculating and acquiring a first relative position according to the real resolution of the main camera.
In this step, the first relative position is the relative position between the positioning point of the main camera image and the FOV center of the main camera. The embodiment of the present invention uses the X-axis relative distance and the Y-axis relative distance in physical-world units to represent the first relative position; that is, the first relative position includes the X-axis relative distance and the Y-axis relative distance, in physical-world units, of the positioning point of the main camera image relative to the FOV center of the main camera.
In the embodiment of the invention, before the main camera is used for shooting the main camera image, the size (comprising the width and the height) of the picture shot by the main camera is set, so that after the main camera shoots and obtains the main camera image, the width and the height of the main camera image can be known.
Specifically, after the main camera is used for shooting to obtain a main camera image, the size of the main camera image is read to obtain the width and the height of the main camera image. Then, based on the width and height Of the main camera image, the main camera FOV (Field Of View) center coordinates including the X-axis coordinates and the Y-axis coordinates are calculated. And calculating the X-axis relative distance of the locating point relative to the pixel unit of the center of the FOV of the main camera and the Y-axis relative distance of the pixel unit according to the X-axis coordinate and the Y-axis coordinate of the locating point of the image of the main camera and the X-axis coordinate and the Y-axis coordinate of the center coordinate of the FOV of the main camera. And calculating the X-axis relative distance of the locating point relative to the physical world unit of the center of the FOV of the main camera according to the X-axis relative distance of the pixel unit and the X-axis resolution of the main camera, calculating the Y-axis relative distance of the locating point relative to the physical world unit of the center of the FOV of the main camera according to the Y-axis relative distance of the pixel unit and the Y-axis resolution of the main camera, and combining the X-axis relative distance of the locating point relative to the physical world unit of the center of the FOV of the main camera and the Y-axis relative distance of the physical world unit to obtain the relative position (namely the first relative position) of the locating point of the main camera image and the center of the FOV of the main camera.
It will be understood that the X-axis relative distance in pixel units refers to the X-axis distance of two pixels (i.e., the anchor point and the center of the FOV of the main camera) in the pixel coordinate system, and the Y-axis relative distance in pixel units refers to the Y-axis distance of two pixels (i.e., the anchor point and the center of the FOV of the main camera) in the pixel coordinate system.
Wherein, the X-axis relative distance of the physical world unit refers to the X-axis distance of two points (namely, the positioning point and the center of the FOV of the main camera) under the world coordinate system, and the Y-axis relative distance of the physical world unit refers to the Y-axis distance of two points (namely, the positioning point and the center of the FOV of the main camera) under the world coordinate system. Wherein the physical world units are metric length units such as millimeters, centimeters, micrometers, and the like.
In some embodiments, the first relative position is obtained according to the real resolution calculation of the main camera, including, but not limited to, the following steps S410-S460:
S410, acquiring the center coordinates of the FOV of the main camera according to the width and the height of the main camera image.
In the embodiment of the invention, the width of the main camera image is divided by 2 to obtain the X-axis coordinate of the center of the FOV of the main camera, the height of the main camera image is divided by 2 to obtain the Y-axis coordinate of the center of the FOV of the main camera, and the X-axis coordinate and the Y-axis coordinate of the center of the FOV of the main camera are combined to obtain the center coordinate of the FOV of the main camera.
S420, acquiring the X-axis relative distance of the locating point relative to the pixel unit of the center of the FOV according to the X-axis coordinate of the locating point and the X-axis coordinate of the center coordinate of the FOV of the main camera.
S430, acquiring the Y-axis relative distance of the locating point relative to the pixel unit of the center of the FOV according to the Y-axis coordinate of the locating point and the Y-axis coordinate of the center coordinate of the FOV of the main camera.
Illustratively, subtracting the X-axis coordinate of the FOV center coordinates of the main camera from the X-axis coordinate of the positioning point of the main camera image yields the X-axis relative distance, in pixel units, of the positioning point of the main camera image relative to the FOV center of the main camera. Subtracting the Y-axis coordinate of the FOV center coordinates of the main camera from the Y-axis coordinate of the positioning point of the main camera image yields the Y-axis relative distance, in pixel units, of the positioning point of the main camera image relative to the FOV center of the main camera.
S440, acquiring the X-axis relative distance of the locating point relative to the physical world unit of the center of the FOV according to the X-axis relative distance of the pixel unit and the X-axis resolution of the main camera.
S450, acquiring the Y-axis relative distance of the locating point relative to the physical world unit of the center of the FOV according to the Y-axis relative distance of the pixel unit and the Y-axis resolution of the main camera.
In the embodiment of the invention, the X-axis relative distance of the locating point of the main camera image relative to the pixel unit in the center of the FOV of the main camera is multiplied by the X-axis resolution of the main camera to obtain the X-axis relative distance of the locating point of the main camera image relative to the physical world unit in the center of the FOV of the main camera. And multiplying the Y-axis relative distance of the locating point of the main camera image relative to the pixel unit at the center of the FOV of the main camera by the Y-axis resolution of the main camera to obtain the Y-axis relative distance of the locating point of the main camera image relative to the physical world unit at the center of the FOV of the main camera.
S460, acquiring a first relative position based on the X-axis relative distance of the physical world unit and the Y-axis relative distance of the physical world unit.
In the embodiment of the invention, the X-axis relative distance of the positioning point of the main camera image relative to the physical world unit of the FOV center of the main camera and the Y-axis relative distance of the physical world unit are combined to obtain the relative position of the positioning point of the main camera image and the FOV center of the main camera, namely the first relative position.
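Steps S410-S460 could be realized as in the short sketch below; the function and variable names are illustrative, and res_x/res_y are the X-/Y-axis resolutions in physical length per pixel computed earlier.

```python
# Hedged sketch: compute the first relative position (positioning point of the
# main camera image relative to the main camera FOV center, in physical units).
def first_relative_position(image_width, image_height, anchor_x, anchor_y, res_x, res_y):
    # S410: FOV center from the image size.
    fov_cx = image_width / 2.0
    fov_cy = image_height / 2.0

    # S420-S430: relative distances in pixel units.
    dx_px = anchor_x - fov_cx
    dy_px = anchor_y - fov_cy

    # S440-S450: convert to physical-world units (e.g., millimeters).
    dx_mm = dx_px * res_x
    dy_mm = dy_px * res_y

    # S460: the first relative position.
    return dx_mm, dy_mm
```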
S500, setting a processing sequence of the side camera images according to requirements, processing each side camera image according to the main camera image processing steps based on the processing sequence, acquiring positioning points of the side camera images and acquiring a second relative position.
It will be appreciated that, because the camera view in the side camera images is distorted, the camera view needs to be corrected before each side camera image is processed according to the main camera image processing steps based on the processing order. The embodiment of the invention corrects the camera view in the side camera images by adopting the camera view correction method described in the inventor's earlier granted patent published as ZL202411764896.0, so as to obtain view-corrected side camera images; then, each view-corrected side camera image is processed according to the main camera image processing steps based on the processing sequence. Referring to fig. 8a to 8d, fig. 8a to 8d show the side camera images obtained after the camera views of the side camera images shown in fig. 4b to 4e are corrected, respectively. The side camera images shown in fig. 8a to 8d are marked with the camera field-of-view center, which is the position where the colored lines intersect (i.e., the orange line in fig. 8a, the red line in fig. 8b, the blue line in fig. 8c, and the green line in fig. 8d).
In the embodiment of the present invention, the processing sequence of the side camera images may be set according to actual requirements; for example, the processing sequence may be set as fig. 4b, then fig. 4c, then fig. 4d, then fig. 4e. Of course, any other suitable processing sequence may be set, and the embodiment of the present invention does not impose any limitation on this.
Illustratively, the second relative position is a relative position between a positioning point of the side camera image and a center of the FOV of the side camera, and the embodiment of the present invention uses an X-axis relative distance and a Y-axis relative distance of the physical world unit to characterize the second relative position, where the second relative position includes the X-axis relative distance and the Y-axis relative distance (both are physical world units) of the positioning point of the side camera image relative to the center of the FOV of the side camera.
Each side camera image is processed according to the main camera image processing steps based on the processing sequence, and the positioning point of each side camera image and a second relative position are acquired. The side camera image is preprocessed, the effective dots and the dot information of the effective dots in the preprocessed side camera image are identified, and the real resolution of the side camera is calculated; these steps are performed with reference to step S200 and its refinement steps. The positioning point of the side camera image is identified based on the effective dots of the side camera image, with reference to step S300 and its refinement steps. The second relative position is calculated from the real resolution of the side camera, with reference to step S400 and its refinement steps. Finally, each side camera image yields a corresponding second relative position, so that a plurality of different second relative positions are obtained.
It should be noted that the preprocessed side camera images obtained by preprocessing the side camera images are similar to the preprocessed main camera image; similarly, the inverse-color-processed side camera images obtained by inverse-color processing the preprocessed side camera images are similar to the inverse-color-processed main camera image. For brevity, the embodiment of the present invention does not show the preprocessed side camera images or the inverse-color-processed side camera images.
S600, calculating and obtaining the mapping relationship between the main camera FOV and the plurality of different side camera FOVs based on the first relative position and the plurality of different second relative positions.
In particular, the plurality of different second relative positions includes the relative positions of the anchor point of each side camera image and the center of the FOV of each side camera. The mapping relationship is a transformation relationship for converting/mapping a certain point in the side camera image to a position in the main camera image, and may be represented in the form of a projection matrix, a homography matrix, an affine transformation matrix, or the like.
Specifically, the relative orientation of the main camera and each side camera is obtained according to the first relative position and each second relative position; then, according to that relative orientation, the X-axis relative distance and the Y-axis relative distance (both in physical world units) between the main camera and each side camera are calculated; and the mapping relationship between the main camera FOV and each side camera FOV is then calculated from these distances, so that the mapping relationships between the main camera FOV and the plurality of different side camera FOVs are obtained. The mapping relationship includes the X-axis relative distance and the Y-axis relative distance of the center of the side camera FOV relative to the center of the main camera FOV.
Illustratively, in some embodiments, the mapping relationship between the main camera FOV and the plurality of different side camera FOVs is calculated based on the first relative position and the plurality of different second relative positions, specifically including, but not limited to, the following steps S610-S630:
And S610, comparing the first relative position with the second relative position of the candidate side camera to acquire the relative orientation of the main camera and the candidate side camera.
In this embodiment, the candidate side camera is any one of the plurality of side cameras; each candidate side camera corresponds to one second relative position, and the main camera corresponds to the first relative position.
Specifically, the first relative position is compared with the second relative position corresponding to the candidate side camera, so as to obtain the relative orientation of the main camera and the candidate side camera, where the relative orientation includes the candidate side camera being on the right side, the left side, the upper side, or the lower side of the main camera.
It will be appreciated that the schematic diagram of fig. 2a shows the positional relationship between the main camera and the other four side cameras as viewed from the side, and the positional relationship shown in fig. 2a is not the relative orientation between the main camera and the candidate side camera in this embodiment. In this embodiment, the relative orientation of the main camera and a candidate side camera is the relative orientation of that side camera with respect to the main camera in a top view. Referring to fig. 2b, fig. 2b schematically illustrates the relative orientation of each side camera with respect to the main camera in a top view: in fig. 2b, the first side camera 301 is located on the upper side of the main camera 300, the second side camera 302 is located on the left side of the main camera 300, the third side camera 303 is located on the lower side of the main camera 300, and the fourth side camera 304 is located on the right side of the main camera 300. It should be noted that the relative orientations shown in fig. 2b do not impose any limitation on the relative orientations of the main camera and the candidate side cameras in other embodiments.
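Purely for illustration, the top-view layout of fig. 2b could be recorded as a simple lookup table, as in the sketch below; the camera identifiers and the use of a fixed table are assumptions made for this sketch only, and in practice the orientation may instead be obtained by comparing the first and second relative positions as described in step S610.

```python
# Assumed identifiers for the layout of fig. 2b; not part of the claimed method.
SIDE_CAMERA_ORIENTATION = {
    "side_camera_1": "upper",  # first side camera 301, above the main camera 300
    "side_camera_2": "left",   # second side camera 302, to the left
    "side_camera_3": "lower",  # third side camera 303, below
    "side_camera_4": "right",  # fourth side camera 304, to the right
}
```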
S620, calculating the X-axis relative distance and the Y-axis relative distance, in physical world units, between the main camera and the candidate side camera according to the relative orientation, so as to establish the mapping relationship between the main camera FOV and the candidate side camera FOV.
Specifically, if the candidate side camera is located on the right side of the main camera, the X-axis relative distance (in physical world units) of the positioning point of the main camera image relative to the center of the main camera FOV, taken from the first relative position, is subtracted from the Y-axis relative distance (in physical world units) of the positioning point of the side camera image relative to the center of the side camera FOV, taken from the second relative position corresponding to the candidate side camera; the result is the X-axis relative distance of the center of the candidate side camera FOV relative to the center of the main camera FOV, that is, the X-axis relative distance, in physical world units, between the main camera and the candidate side camera. The Y-axis relative distance of the positioning point of the main camera image relative to the center of the main camera FOV, taken from the first relative position, is added to the X-axis relative distance of the positioning point of the side camera image relative to the center of the side camera FOV, taken from the corresponding second relative position; the result is the Y-axis relative distance of the center of the candidate side camera FOV relative to the center of the main camera FOV, that is, the Y-axis relative distance, in physical world units, between the main camera and the candidate side camera.
If the candidate side camera is located on the left side of the main camera, the X-axis relative distance of the positioning point of the main camera image relative to the center of the main camera FOV, taken from the first relative position, is added to the Y-axis relative distance of the positioning point of the side camera image relative to the center of the side camera FOV, taken from the corresponding second relative position, to obtain the X-axis relative distance of the center of the candidate side camera FOV relative to the center of the main camera FOV, that is, the X-axis relative distance, in physical world units, between the main camera and the candidate side camera. The Y-axis relative distance of the positioning point of the main camera image relative to the center of the main camera FOV, taken from the first relative position, is subtracted from the X-axis relative distance of the positioning point of the side camera image relative to the center of the side camera FOV, taken from the corresponding second relative position, to obtain the Y-axis relative distance of the center of the candidate side camera FOV relative to the center of the main camera FOV, that is, the Y-axis relative distance, in physical world units, between the main camera and the candidate side camera.
If the candidate side camera is located on the upper side of the main camera, the X-axis relative distance of the positioning point of the main camera image relative to the center of the main camera FOV, taken from the first relative position, is added to the X-axis relative distance of the positioning point of the side camera image relative to the center of the side camera FOV, taken from the corresponding second relative position, to obtain the X-axis relative distance of the center of the candidate side camera FOV relative to the center of the main camera FOV, that is, the X-axis relative distance, in physical world units, between the main camera and the candidate side camera. Likewise, the Y-axis relative distance of the positioning point of the main camera image relative to the center of the main camera FOV is added to the Y-axis relative distance of the positioning point of the side camera image relative to the center of the side camera FOV, to obtain the Y-axis relative distance of the center of the candidate side camera FOV relative to the center of the main camera FOV, that is, the Y-axis relative distance, in physical world units, between the main camera and the candidate side camera.
If the candidate side camera is located on the lower side of the main camera, the X-axis relative distance of the positioning point of the main camera image relative to the center of the main camera FOV, taken from the first relative position, is subtracted from the X-axis relative distance of the positioning point of the side camera image relative to the center of the side camera FOV, taken from the corresponding second relative position, to obtain the X-axis relative distance of the center of the candidate side camera FOV relative to the center of the main camera FOV, that is, the X-axis relative distance, in physical world units, between the main camera and the candidate side camera. Likewise, the Y-axis relative distance of the positioning point of the main camera image relative to the center of the main camera FOV is subtracted from the Y-axis relative distance of the positioning point of the side camera image relative to the center of the side camera FOV, to obtain the Y-axis relative distance of the center of the candidate side camera FOV relative to the center of the main camera FOV, that is, the Y-axis relative distance, in physical world units, between the main camera and the candidate side camera.
The X-axis relative distance and the Y-axis relative distance (in physical world units) of the center of the candidate side camera FOV relative to the center of the main camera FOV are combined to establish the mapping relationship between the main camera FOV and the candidate side camera FOV, and this mapping relationship is used to 're-project' the side camera image content into the main camera image so as to achieve field-of-view fusion.
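The four orientation cases above can be summarized in a minimal sketch; the function and variable names are illustrative assumptions, while the arithmetic follows the text of step S620. first_rel is the (X, Y) relative distance, in physical world units, of the main camera image positioning point with respect to the main camera FOV center, and second_rel is the corresponding pair for the candidate side camera.

```python
# A sketch of step S620: returns the (X, Y) relative distance, in physical world
# units, of the candidate side camera FOV center with respect to the main camera
# FOV center, according to the relative orientation of the candidate side camera.
def fov_center_offset(first_rel, second_rel, orientation):
    main_x, main_y = first_rel
    side_x, side_y = second_rel
    if orientation == "right":
        return side_y - main_x, main_y + side_x
    if orientation == "left":
        return main_x + side_y, side_x - main_y
    if orientation == "upper":
        return main_x + side_x, main_y + side_y
    if orientation == "lower":
        return side_x - main_x, side_y - main_y
    raise ValueError(f"unknown orientation: {orientation}")
```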
And S630, sequentially establishing the mapping relation between the FOV of the main camera and the FOV of each side camera according to the processing sequence.
Specifically, according to the processing sequence of the side camera images, the mapping relation between the FOV of the main camera and the FOV of each side camera is sequentially established, and the mapping relation between the FOV of the main camera and the FOVs of a plurality of different side cameras is obtained.
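As a short sketch of step S630, the mapping relationship can be built for each side camera in the configured processing order; the loop below reuses the illustrative SIDE_CAMERA_ORIENTATION table and fov_center_offset helper from the sketches above, both of which are assumptions rather than part of the claimed method.

```python
# A sketch of step S630: establish the mapping relation for every side camera
# in the processing order. second_rels maps each side camera identifier to its
# second relative position, in physical world units.
def build_all_mappings(first_rel, second_rels, processing_order):
    mappings = {}
    for cam_id in processing_order:
        orientation = SIDE_CAMERA_ORIENTATION[cam_id]
        mappings[cam_id] = fov_center_offset(first_rel, second_rels[cam_id], orientation)
    return mappings
```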
And S700, performing visual field fusion of the main camera and the side camera based on the mapping relation.
Specifically, after the mapping relationship between the main camera FOV and each side camera FOV is obtained, for each side camera, all pixel coordinates in the side camera image captured by that side camera are converted into the main camera coordinate system according to the corresponding mapping relationship, and the mapped image content is inserted at the corresponding position in the main camera image. Finally, the side camera images captured by all side cameras are fused into the main camera image to obtain a fused image, thereby achieving the field-of-view fusion of the main camera and each side camera. Referring to fig. 9a and 9b, fig. 9a shows a schematic view before the main camera image shown in fig. 4a and the side camera images shown in fig. 8a to 8d are fused, and fig. 9b shows a schematic view after they are fused. Fig. 9a and 9b each mark the camera view centers, where the camera view centers are the intersections of the gray lines and the colored lines shown in fig. 9a and 9b.
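A possible realization of this fusion step is sketched below under simplifying assumptions: the side camera images are already view-corrected, they share the main camera's true resolution, and the mapping for each side camera is the (X, Y) offset, in physical world units, of its FOV center relative to the main camera FOV center. All names are illustrative; a production implementation would also handle resolution differences and clip placements that fall outside the canvas.

```python
import numpy as np

def fuse_views(main_img, side_imgs, mappings, main_resolution_mm_per_px):
    """Paste each view-corrected side camera image into a canvas that uses the
    main camera coordinate system, at its mapped FOV-center position."""
    # Pad the canvas generously so every mapped side image fits around the main image.
    pad = max(main_img.shape[:2])
    canvas = np.zeros((main_img.shape[0] + 2 * pad, main_img.shape[1] + 2 * pad,
                       *main_img.shape[2:]), dtype=main_img.dtype)
    canvas[pad:pad + main_img.shape[0], pad:pad + main_img.shape[1]] = main_img
    main_cy = pad + main_img.shape[0] / 2.0
    main_cx = pad + main_img.shape[1] / 2.0
    for cam_id, side_img in side_imgs.items():
        dx_mm, dy_mm = mappings[cam_id]
        # Convert the physical offset of the side FOV center into main-camera pixels.
        cx = main_cx + dx_mm / main_resolution_mm_per_px
        cy = main_cy + dy_mm / main_resolution_mm_per_px
        h, w = side_img.shape[:2]
        top, left = int(round(cy - h / 2.0)), int(round(cx - w / 2.0))
        canvas[top:top + h, left:left + w] = side_img
    return canvas
```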
Illustratively, in some embodiments, the multi-camera view fusion method further includes, but is not limited to, the following steps S701-S702:
S701, comparing the widths and heights of the plurality of side camera images to find the minimum width and the minimum height among all the side camera images.
Illustratively, the width and height of each side camera image are obtained. The widths of the side camera images are compared to find the minimum width among all the side camera images, and the heights of the side camera images are compared to find the minimum height among all the side camera images.
S702, cropping the width of each side camera image to the minimum width and cropping the height to the minimum height.
Specifically, after the minimum width and the minimum height are obtained, the width of each side camera image is center-cropped to the minimum width and the height of each side camera image is center-cropped to the minimum height, so that cropped side camera images are obtained. Each cropped side camera image is used as a side camera image to be fused, and the side camera images to be fused are fused with the main camera image based on the mapping relationship; that is, the field-of-view fusion of the main camera and the side cameras is performed according to the mapping relationship to obtain the fused image.
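Steps S701 and S702 can be sketched as a simple center crop of every side camera image to the smallest width and height found among them, so that the images to be fused share a common size; the helper name and the dictionary keyed by camera identifier are illustrative assumptions.

```python
# A sketch of steps S701-S702: center-crop every side camera image to the
# minimum width and minimum height among all side camera images.
def center_crop_to_common_size(side_imgs):
    min_w = min(img.shape[1] for img in side_imgs.values())
    min_h = min(img.shape[0] for img in side_imgs.values())
    cropped = {}
    for cam_id, img in side_imgs.items():
        top = (img.shape[0] - min_h) // 2
        left = (img.shape[1] - min_w) // 2
        cropped[cam_id] = img[top:top + min_h, left:left + min_w]
    return cropped
```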
In summary, in the multi-camera view fusion method provided by the embodiment of the invention, the same calibration plate is photographed by the main camera and the plurality of side cameras to obtain a main camera image and a plurality of side camera images; the main camera image and the side camera images are preprocessed; the effective dots and the dot information of the effective dots are identified in the preprocessed main camera image and side camera images; the positioning points in the main camera image and in each side camera image are identified based on the effective dots; the true resolutions of the main camera and of the side cameras are calculated; the first relative position and the plurality of second relative positions are calculated from these true resolutions; the mapping relationships between the main camera FOV and the plurality of side camera FOVs are calculated from the first relative position and the second relative positions; and the field-of-view fusion of the main camera and each side camera is performed based on the mapping relationships. In this way, the field-of-view fusion of the main camera and the side cameras is achieved, the effective detection range is enlarged, and the detection experience is improved.
An embodiment of the present invention provides a computer-readable storage medium storing computer program instructions executable by a processor, where the computer program instructions, when executed by the processor, cause a computer to perform any one of the multi-camera view fusion methods provided by the embodiments of the present invention, or to perform the steps in any implementation manner of any one of the multi-camera view fusion methods provided by the embodiments of the present invention.
It will be appreciated by those skilled in the art that the embodiments provided in the present invention are merely illustrative, that the written order of the steps in the methods of the embodiments does not imply a strict order of execution, and that it should not be construed as limiting the implementation procedure; the steps may be reordered, combined, and pruned according to actual needs, and modules or sub-modules, units, sub-units, etc. in the apparatus or system of the embodiments may be combined, divided, and pruned according to actual needs. For example, the division of the units is only one kind of logical function division, and other division manners may be adopted in actual implementation. As another example, multiple units or components may be combined or integrated into another apparatus, or some features may be omitted or not performed.
From the above description of embodiments, it will be apparent to those skilled in the art that the embodiments may be implemented by means of software plus a general purpose hardware platform, but may also be implemented by hardware. Those skilled in the art will appreciate that all or part of the processes implementing the methods of the embodiments described above may be implemented by a computer program, which may be stored in a computer readable storage medium, and which when executed may include the processes of the embodiments of the methods described above. The storage medium may be a magnetic disk, an optical disk, a Read-Only Memory (ROM), a random access Memory (Random Access Memory, RAM), or the like.
It should be noted that the foregoing embodiments are provided to illustrate the technical concept and features of the present invention and to enable those skilled in the art to understand and implement the present invention; they are not intended to limit the scope of the claims. Those skilled in the art will understand that the technical solutions described in the embodiments may still be modified, or some of their technical features may be replaced with equivalents. Such modifications or substitutions do not depart from the spirit of the embodiments of the invention and are to be regarded as equivalent changes and modifications based on the embodiments of the invention, which are intended to be covered by the claims.