CN114782502B - A multi-spectral multi-sensor collaborative processing method and device, and storage medium - Google Patents

A multi-spectral multi-sensor collaborative processing method and device, and storage medium

Info

Publication number
CN114782502B
Authority
CN
China
Prior art keywords
image
channel
images
tracking
channels
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210676732.7A
Other languages
Chinese (zh)
Other versions
CN114782502A (zh)
Inventor
周迪
陈书界
王勋
张鹏国
徐爱华
王威杰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Uniview Technologies Co Ltd
Original Assignee
Zhejiang Uniview Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Uniview Technologies Co Ltd
Priority to CN202210676732.7A
Publication of CN114782502A
Application granted
Publication of CN114782502B
Priority to PCT/CN2022/139450 (published as WO2023240963A1)
Priority to EP22946629.7A (published as EP4542488A1)
Legal status: Active (current)
Anticipated expiration


Abstract

A multi-spectral multi-sensor collaborative processing method and device and a storage medium are provided. The method comprises: acquiring images of multiple channels, where the image of each channel is a monochrome image and the images of different channels have different colors; performing target detection, recognition and tracking according to the monochrome image, generating a tracking frame for the target, and saving the information of the tracking frame; and registering and fusing the images of the channels to generate a fused image, and superimposing the saved information of the tracking frame onto the fused image. In the solution provided by the embodiments, acquiring multi-channel images reduces color distortion, and tracking the target on a monochrome image enables fast tracking, avoiding tracking failures caused by the delay of registration and fusion.

Description

Translated from Chinese
A multi-spectral multi-sensor collaborative processing method and device, and storage medium

Technical Field

The present disclosure relates to image processing technology, and in particular to a multi-spectral multi-sensor collaborative processing method and device, and a storage medium.

Background

A camera with a single image sensor images in the Bayer format: each pixel of the sensor captures only one of the three RGB components, and the other two components are taken from surrounding pixels, which causes color distortion in the acquired image. In one technical solution, three image sensors are used to capture the R, G and B components separately, and the three images are then registered and fused to generate a visible-light color image. However, registration and fusion take time and increase the imaging delay, so fast-moving targets such as vehicles are easily lost during tracking.

Summary

Embodiments of the present application provide a multi-spectral multi-sensor collaborative processing method and device, and a storage medium, which can realize target tracking.

An embodiment of the present application provides a multi-spectral multi-sensor collaborative processing method, including:

acquiring images of multiple channels, where the image of each channel is a monochrome image and the images of different channels have different colors;

performing target detection, recognition and tracking according to the monochrome image, generating a tracking frame for the target, and saving the information of the tracking frame; and

registering and fusing the images of the multiple channels to generate a fused image, and superimposing the saved information of the tracking frame onto the fused image.

In an exemplary embodiment, superimposing the saved information of the tracking frame onto the fused image includes:

superimposing the information of the most recently saved tracking frame onto the fused image.

In an exemplary embodiment, the channels include a red channel, and performing target detection, recognition and tracking according to the monochrome image includes:

when the current ambient light intensity is less than a preset light intensity threshold, selecting the near-infrared image of the red channel for target detection, recognition and tracking.

In an exemplary embodiment, performing target detection, recognition and tracking according to the monochrome image includes:

when the current ambient light intensity is greater than or equal to the preset light intensity threshold, acquiring the sum of the pixel values of the image of each channel; and when there is a first channel whose image pixel-value sum is greater than that of every other channel, and the difference between the pixel-value sum of the first channel's image and that of at least one other channel's image is greater than a preset value, selecting the monochrome image(s) of one or more channels other than the first channel for target detection, recognition and tracking.

In an exemplary embodiment, performing target detection, recognition and tracking according to the monochrome image includes:

when the current ambient light intensity is greater than or equal to the preset light intensity threshold, acquiring the sum of the pixel values of the image of each channel; and when, for any first channel and second channel, the difference between the pixel-value sums of their images is less than the preset value, selecting the monochrome images of all channels for target detection, recognition and tracking.

In an exemplary embodiment, the method further includes: when the monochrome images of multiple channels are selected for target detection, recognition and tracking, taking the union of the targets detected in the monochrome images of those channels as the targets of the fused image;

and superimposing the saved information of the tracking frame onto the fused image includes:

superimposing the saved tracking-frame information of the targets detected in the monochrome images of those channels onto the fused image.

In an exemplary embodiment, registering and fusing the images of the multiple channels includes:

selecting, from the images of the multiple channels, the image of one channel as the reference image, and registering the image of each remaining channel, as the image to be registered, with the reference image as follows:

choosing as the registered image the f1 for which

$$\sum_{d\in\{x,y\}}\;\sum_{(x,y)\in\Omega}\left|\partial_d\big(f_1(x,y)-f_2(x,y)\big)\right|$$

is smallest and the overlap region of f1 and f2 is non-empty, where ∂_d denotes the derivative operator along direction d, d ∈ {x, y}; f1 is the image obtained by applying a coordinate transformation to the image to be registered; f2 is the reference image; Ω is the overlap region of f1 and f2; and (x, y) are the two-dimensional spatial coordinates; or,

choosing as the registered image the f1 for which the normalized criterion

$$\mathrm{NTG}(f_1,f_2)=\frac{\sum_{d}\sum_{(x,y)\in\Omega}\left|\partial_d\big(f_1(x,y)-f_2(x,y)\big)\right|}{\sum_{d}\sum_{(x,y)\in\Omega}\left|\partial_d f_1(x,y)\right|+\sum_{d}\sum_{(x,y)\in\Omega}\left|\partial_d f_2(x,y)\right|}$$

is smallest, with the same notation as above; and

fusing the registered image with the reference image.

An embodiment of the present disclosure provides a multi-spectral multi-sensor collaborative processing device, including a memory and a processor, where the memory stores a program that, when read and executed by the processor, implements the multi-spectral multi-sensor collaborative processing method of any of the above embodiments.

In an exemplary embodiment, the multi-spectral multi-sensor collaborative processing device further includes a lens, a beam-splitting prism and multiple sensors, where:

the lens is configured to receive external light and transmit it to the beam-splitting prism;

the beam-splitting prism is configured to split the incident light into multiple monochromatic beams, which are respectively incident on the multiple sensors, with each monochromatic beam incident on one sensor; and

the sensors are configured to convert the incident light into electrical signals and output them to the processor.

An embodiment of the present disclosure provides a computer-readable storage medium storing one or more programs, which can be executed by one or more processors to implement the multi-spectral multi-sensor collaborative processing method of any of the above embodiments.

Compared with the related art, the present application includes a multi-spectral multi-sensor collaborative processing method and device, and a storage medium. The method includes: acquiring images of multiple channels, where the image of each channel is a monochrome image and the images of different channels have different colors; performing target detection, recognition and tracking according to the monochrome image, generating a tracking frame for the target, and saving the information of the tracking frame; and registering and fusing the images of the multiple channels to generate a fused image, and superimposing the saved information of the tracking frame onto the fused image. In the solution provided by this embodiment, acquiring multi-channel images reduces color distortion, and tracking the target on a monochrome image enables fast tracking, avoiding tracking failures caused by the registration-and-fusion delay.

Additional features and advantages of the application will be set forth in the description that follows and will in part be apparent from the description, or may be learned by practicing the application. Other advantages of the application can be realized and obtained through the solutions described in the specification and the drawings.

Brief Description of the Drawings

The accompanying drawings are provided to aid understanding of the technical solution of the present application and constitute a part of the specification. Together with the embodiments of the present application, they serve to explain the technical solution and do not limit it.

FIG. 1 is a schematic diagram of a camera optical subsystem provided by an exemplary embodiment;

FIG. 2 is a flowchart of a multi-spectral multi-sensor collaborative processing method provided by an embodiment of the present disclosure;

FIG. 3 is a schematic diagram of the inconsistency of local brightness in a multispectral image, provided by an exemplary implementation;

FIG. 4 is a block diagram of a multi-spectral multi-sensor collaborative processing device provided by an exemplary implementation.

Detailed Description

The present application describes multiple embodiments, but the description is illustrative rather than restrictive, and it will be apparent to those of ordinary skill in the art that many more embodiments and implementations fall within the scope of the embodiments described herein. Although many possible combinations of features are shown in the drawings and discussed in the detailed description, many other combinations of the disclosed features are possible. Unless specifically restricted, any feature or element of any embodiment may be used in combination with, or in place of, any other feature or element of any other embodiment.

The present application includes and contemplates combinations with features and elements known to those of ordinary skill in the art. The embodiments, features and elements already disclosed in this application may also be combined with any conventional features or elements to form a unique inventive solution defined by the claims. Any feature or element of any embodiment may also be combined with features or elements from other inventive solutions to form another unique inventive solution defined by the claims. It should therefore be understood that any of the features shown and/or discussed in this application may be implemented alone or in any suitable combination. Accordingly, the embodiments are not limited except in accordance with the appended claims and their equivalents. Furthermore, various modifications and changes may be made within the protection scope of the appended claims.

Furthermore, in describing representative embodiments, the specification may have presented a method and/or process as a particular sequence of steps. However, to the extent that the method or process does not depend on the particular order of the steps described herein, it should not be limited to that particular order. As those of ordinary skill in the art will appreciate, other orders of steps are also possible. Therefore, the particular order of the steps set forth in the specification should not be construed as a limitation on the claims. In addition, claims directed to the method and/or process should not be limited to performing their steps in the order written; those skilled in the art can readily understand that these orders may be varied while remaining within the spirit and scope of the embodiments of the present application.

Using three image sensors to capture the R, G and B images separately and then performing registration and fusion currently has two problems. First, registration and fusion take time and increase the imaging delay, so fast-moving targets such as vehicles are easily lost during tracking. Second, the registration itself uses SIFT (Scale-Invariant Feature Transform) or SURF (Speeded Up Robust Features) techniques, which are not very robust; unreliable registration easily leads to a blurred final fused image.

In the embodiments of the present disclosure, multiple image sensors are used to capture images, and target tracking is performed on monochrome images, so losing the target can be avoided.

Traditional cameras use the Bayer format for image acquisition: each pixel captures only one of the three RGB components, and the other two components are interpolated from nearby pixels, so the obtained image is not faithful. To achieve true color reproduction, in the implementation of this disclosure an optical subsystem with RGB three-color separation is used: a beam-splitting prism separates the light into the three RGB primary colors, which are then captured by three sensors, each forming its own image, and the three images are finally combined into a true color image.

FIG. 1 is a schematic diagram of the optical subsystem of a camera provided by an exemplary embodiment. As shown in FIG. 1, the optical subsystem of the camera provided by this embodiment includes a lens 1, an IR filter 2, an ND (neutral density) filter 3, a beam-splitting prism 4 and multiple sensors 5, where each sensor 5 captures the light of one color component. The IR filter 2 includes an infrared cut filter that can filter out infrared light: when the light intensity is high (for example, in daytime), the infrared cut filter is active and filters out infrared light; when the light intensity is low (for example, at night), the infrared cut filter is inactive and infrared light passes through, so a near-infrared image can be generated. The ND filter 3 attenuates the incident light to reduce the exposure, and the camera can control whether the ND filter 3 is used. The beam-splitting prism 4 may include three prisms, each emitting light of one color. For example, the exit faces of the three prisms carry an R-pass coating, a G-pass coating and a B-pass coating respectively, which implement band-pass filtering and yield red light (R-pass coating), green light (G-pass coating) and blue light (B-pass coating); the light emitted from the three exit faces enters the three sensors respectively. The coatings can filter out some clutter, give each channel a better waveform, and suppress clutter signals and abnormal signals at special angles, making the color reproduction more faithful. However, the embodiments of the present disclosure are not limited to this, and the coatings may be omitted.

In an exemplary embodiment, an ultra-high-definition camera may carry three 2/3-inch 4K CMOS (Complementary Metal-Oxide-Semiconductor) sensors to implement three-way RGB light splitting. This is only an example; sensors of other sizes and pixel counts may be used.

In an exemplary embodiment, the camera may also include a near-infrared fill-light device. When the light intensity is less than a preset light intensity threshold, the near-infrared fill light can be turned on to enable the acquisition of near-infrared images; when the light intensity is greater than or equal to the preset light intensity threshold, the near-infrared fill light can be turned off.

The above camera system is only an example; the embodiments of the present disclosure are not limited to it. Other camera systems capable of multi-channel image acquisition may be used, and the camera system may, for example, include other filter components.

FIG. 2 is a flowchart of a multi-spectral multi-sensor collaborative processing method provided by an embodiment of the present disclosure. As shown in FIG. 2, the method includes:

Step 201: acquiring images of multiple channels, where the image of each channel is a monochrome image and the images of different channels have different colors;

Step 202: performing target detection, recognition and tracking according to the monochrome image, generating a tracking frame for the target, and saving the information of the tracking frame;

Step 203: registering and fusing the images of the multiple channels to generate a fused image, and superimposing the saved information of the tracking frame onto the fused image.

In the solution provided by this embodiment, acquiring multi-channel images reduces color distortion, and tracking the target on a monochrome image enables fast tracking, avoiding tracking failures caused by the registration-and-fusion delay.
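To make the relationship between the three steps concrete, the following is a minimal sketch of one possible arrangement. The helper names (`capture_channels`, `detect_and_track`, `register_and_fuse`, `draw_box`, `display`) are hypothetical placeholders for the camera driver, tracker, fusion routine and display, none of which are specified by this embodiment.

```python
# Minimal sketch of steps 201-203; all helper functions are hypothetical placeholders.
latest_box = None   # most recently saved tracking-frame information (step 202)

def tracking_loop(capture_channels, detect_and_track):
    """Steps 201 and 202: capture the channel images and track on one monochrome image."""
    global latest_box
    while True:
        channels = capture_channels()           # e.g. {'R': img_r, 'G': img_g, 'B': img_b}
        box = detect_and_track(channels['G'])   # channel choice is discussed later in the text
        if box is not None:
            latest_box = box                    # save the tracking-frame information

def fusion_loop(capture_channels, register_and_fuse, draw_box, display):
    """Step 203: register and fuse all channels, then overlay the saved tracking frame."""
    while True:
        channels = capture_channels()
        fused = register_and_fuse(channels)     # slower than the tracking loop
        if latest_box is not None:
            fused = draw_box(fused, latest_box)
        display(fused)
```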

In an exemplary embodiment, the target includes but is not limited to a vehicle; it may also be a pedestrian, an animal, an item being conveyed on an assembly line, and so on. The category of the target should not be taken as a limitation of the present application.

In an exemplary embodiment, the multiple channels may include a first channel, a second channel and a third channel; the image of the first channel may be a red image, the image of the second channel may be a green image, and the image of the third channel may be a blue image.

In another exemplary embodiment, the image of the first channel may be a green image, the image of the second channel may be a blue image, and the image of the third channel may be a red image; this is not a limitation.

In an exemplary implementation, superimposing the saved information of the tracking frame onto the fused image includes:

superimposing the information of the most recently saved tracking frame onto the fused image. The solution provided by this embodiment can track the target in real time and display the result in the fused image. However, the embodiments of the present disclosure are not limited to this; the tracking frame displayed in the fused image may instead be the tracking frame corresponding to the moment of the fused image. Since the delay of the monochrome image is small while registration and fusion introduce some delay, the moment corresponding to the most recent tracking frame obtained from the monochrome image is usually later than the moment corresponding to the fused image.

In an exemplary embodiment, target detection, recognition and tracking can be carried out on the monochrome image; if a target meeting the requirements is found, the target is zoomed in on and tracked, and a tracking frame is generated and superimposed on the target. The target is at least partly located within the tracking frame; for example, the target may lie entirely within the tracking frame.

In an exemplary embodiment, the information of the tracking frame may include the coordinates of its upper-left and lower-right corners, but is not limited to this; it may instead be the coordinates of the upper-left corner together with the length and width of the tracking frame, or the coordinates of the center point together with the length and width of the tracking frame, and so on. The tracking frame may be rectangular or have another shape. Since the acquisition delay of a monochrome image is very small, real-time tracking of the target can be achieved. As the target moves, the position and size of the generated tracking frame are updated to follow it. The coordinates of the upper-left and lower-right corners of the tracking frame are saved in memory in real time; if the position or size of the tracking frame changes, the saved information changes synchronously. Only the tracking-frame information of the latest moment may be saved, or the tracking-frame information over a period of time may be saved.

Each time a frame has been fused, the most recently saved tracking-frame coordinates can be extracted and the tracking frame superimposed on the fused color image, presenting the user with a target-tracking effect in the color video image, that is, a tracking frame that follows the moving target in real time.
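A minimal sketch of this overlay step follows, assuming OpenCV is available for drawing and that the tracking frame is stored as the upper-left and lower-right corner coordinates described above; neither assumption is mandated by the embodiment.

```python
import cv2  # assumed only for drawing; the embodiment does not prescribe a library

def overlay_tracking_frame(fused_bgr, box, color=(0, 255, 0), thickness=2):
    """Draw the most recently saved tracking frame onto a fused colour frame.

    `box` is assumed to be (x1, y1, x2, y2), i.e. upper-left and lower-right
    corner coordinates; other encodings (corner plus width/height, centre plus
    size) would be converted to corners first."""
    x1, y1, x2, y2 = (int(v) for v in box)
    cv2.rectangle(fused_bgr, (x1, y1), (x2, y2), color, thickness)
    return fused_bgr
```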

Those skilled in the art know, or should know, that in the above image fusion process the images of all three channels may be fused, or the images of any two channels may be fused, to suit different requirements.

The generation of the tracking-frame coordinates from the monochrome image and the superimposition of the tracking frame onto the fused image need not be synchronized, and a one-to-one correspondence is not required. That is, the generation of the tracking-frame coordinates in step 202 and the superimposition of the target tracking frame on the fused image in step 203 are independent, with no ordering relationship between them; the tracking-frame coordinates are fetched from memory as needed, at the generation rate of the fused images, and the corresponding tracking frame is drawn in the fused image.

In an exemplary embodiment, the channels include a red channel, and performing target detection, recognition and tracking according to the monochrome image includes:

when the current ambient light intensity is less than the preset light intensity threshold, selecting the near-infrared image of the red channel for target detection, recognition and tracking. When the current ambient light intensity is less than the preset light intensity threshold, the camera's near-infrared fill light is turned on. Near-infrared light forms a near-infrared image through the red channel, so a clear near-infrared image is generated. The solution provided by this embodiment enables target tracking when the ambient light intensity is below the preset light intensity threshold.

In an exemplary embodiment, the ambient light intensity may be determined by a light intensity sensor.

In an exemplary embodiment, whether the current ambient light intensity is less than the preset light intensity threshold may be determined from whether the near-infrared fill-light device (near-infrared fill light) on the camera is turned on: when the near-infrared fill light is on, the ambient light intensity is judged to be less than the preset light intensity threshold; when it is off, the ambient light intensity is judged to be greater than the preset light intensity threshold.
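A minimal sketch of this low-light rule is given below; the dictionary key 'R' and the function name are illustrative assumptions, not part of the original text.

```python
def select_low_light_channel(channels, nir_fill_light_on):
    """When the near-infrared fill light is on, the ambient light intensity is
    treated as below the preset threshold and the red channel (which then carries
    the near-infrared image) is used for tracking."""
    if nir_fill_light_on:
        return channels['R']
    return None   # bright scene: fall through to the pixel-sum based selection below
```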

In an exemplary embodiment, one or more monochrome images may be selected for target detection, recognition and tracking according to the overall hue of the scene. Performing target detection, recognition and tracking according to the monochrome image includes:

when the current ambient light intensity is greater than or equal to the preset light intensity threshold, acquiring the sum of the pixel values of the image of each channel; and when there is a first channel whose image pixel-value sum is greater than that of every other channel, and the difference between the pixel-value sum of the first channel's image and that of at least one other channel's image is greater than a preset value, selecting the monochrome image(s) of one or more channels other than the first channel for target detection, recognition and tracking. Here, the pixel-value sum of a channel's image is the sum of all pixel values of that image.

Taking three channels (a first channel, a second channel and a third channel) as an example: when the pixel-value sum of the first channel's image is greater than that of the second channel's image and greater than that of the third channel's image, and at least one of the difference between the first and second channels' pixel-value sums and the difference between the first and third channels' pixel-value sums is greater than the preset value, the monochrome image of at least one of the second channel and the third channel is selected for target detection, recognition and tracking.

In the solution provided by this embodiment, if the pixel-value sum of the first channel's image is greater than that of every other channel's image and the difference exceeds the preset value, the color of the first channel's image is the overall hue of the scene. Selecting an image of a color different from the overall hue for target tracking makes the foreground target stand out and avoids the difficulty of distinguishing target from background when both share the overall hue.
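Before the concrete hue examples below, here is a minimal NumPy sketch of this pixel-sum rule; the dictionary layout of `channels` and the parameter name `preset_value` are assumptions made for illustration.

```python
import numpy as np

def select_bright_scene_channels(channels, preset_value):
    """Pixel-sum based channel selection sketch for the bright-light case above.

    `channels` is assumed to be a dict such as {'R': img_r, 'G': img_g, 'B': img_b}
    of single-channel arrays, and `preset_value` the preset value against which
    the sums are compared."""
    sums = {name: float(np.sum(img)) for name, img in channels.items()}
    first = max(sums, key=sums.get)                  # candidate "first channel"
    others = [name for name in sums if name != first]
    dominant = all(sums[first] > sums[o] for o in others) and \
               any(sums[first] - sums[o] > preset_value for o in others)
    if dominant:
        # The first channel's colour is the overall hue of the scene, so tracking
        # uses one or more of the remaining channels instead.
        return others
    # Otherwise (colours roughly balanced) all channels can be used for tracking.
    return list(sums)
```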

This is illustrated below with examples.

In an exemplary embodiment, if the overall hue is red (for example, autumn scenery full of maple leaves), the green-component image (the image of the green channel) or the blue-component image (the image of the blue channel) may be chosen at random for target tracking, rather than the red-component image (the image of the red channel). In this way, the green or blue component of the background is weakened, the foreground target is highlighted (the background being mainly red), and the difficulty of distinguishing a red target from a red background is avoided.

If the overall hue is green (for example, a mountain scene), the red- or blue-component image may be chosen at random for target tracking instead of the green-component image, avoiding the difficulty of distinguishing target from background when both are green.

If the overall hue is blue (for example, sky or seascape), the red- or green-component image may be chosen at random for target tracking instead of the blue-component image, avoiding the difficulty of distinguishing target from background when both are blue.

The above embodiments use the monochrome image of only one channel for target tracking. In another embodiment, the monochrome images of multiple channels may be used. For example, if the overall hue is red (for example, autumn scenery full of maple leaves), the green-component image and the blue-component image are both used for target tracking, in case the target is not clearly distinguished from the background in one of the channel images. The solution provided by this embodiment increases reliability and avoids interference caused by the foreground target and the background scene having similar colors.

In an exemplary embodiment, when the current ambient light intensity is greater than or equal to the preset light intensity threshold, the sum of the pixel values of the image of each channel is acquired, and when there is a first channel whose image pixel-value sum is greater than that of every other channel:

when the difference between the pixel-value sum of the first channel's image and that of every other channel's image is greater than a first preset value, the monochrome image of one channel other than the first channel is selected for target detection, recognition and tracking;

when the difference between the pixel-value sum of the first channel's image and that of at least one other channel's image is less than or equal to the first preset value and greater than a second preset value, the monochrome images of all channels other than the first channel are selected for target detection, recognition and tracking, where the first preset value is greater than the second preset value. That is, in this embodiment, when the pixel-value sum of the first channel's image differs greatly from those of the other channels, a single monochrome image is used for target tracking; when the difference is smaller, multiple monochrome images are used.
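A short sketch of this two-threshold refinement follows, building on the pixel sums computed in the previous sketch; which single channel is chosen in the first case is not specified by the embodiment, so the sketch simply takes the first one.

```python
def select_by_two_thresholds(sums, first, first_preset, second_preset):
    """Two-threshold refinement; `first` is the channel whose pixel-value sum
    exceeds every other channel's, and first_preset > second_preset."""
    others = [c for c in sums if c != first]
    diffs = {c: sums[first] - sums[c] for c in others}
    if all(d > first_preset for d in diffs.values()):
        return [others[0]]   # large gap to every other channel: one non-dominant channel
    if any(second_preset < d <= first_preset for d in diffs.values()):
        return others        # smaller gap: all channels except the dominant one
    return list(sums)        # otherwise fall back to using all channels
```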

In an exemplary embodiment, performing target detection, recognition and tracking according to the monochrome image includes:

when the current ambient light intensity is greater than or equal to the preset light intensity threshold, acquiring the sum of the pixel values of the image of each channel, and when, for any first channel and second channel, the difference between the pixel-value sums of their images is less than the preset value, selecting the monochrome images of all channels for target detection, recognition and tracking. Taking three color channels as an example: when the difference between the pixel-value sums of the red and green channel images is less than the preset value, the difference between the pixel-value sums of the red and blue channel images is less than the preset value, and the difference between the pixel-value sums of the blue and green channel images is less than the preset value, the monochrome images of the red, green and blue channels are all selected for target detection, recognition and tracking. In the solution provided by this embodiment, when the scene colors are relatively balanced, the monochrome images of all channels are used for target tracking.

In an exemplary embodiment, when the monochrome images of multiple channels are selected for target detection, recognition and tracking, the union of the targets detected in the monochrome images of those channels is taken as the targets of the fused image;

and superimposing the saved information of the tracking frame onto the fused image includes:

superimposing the saved tracking-frame information of the targets detected in the monochrome images of those channels onto the fused image.

For example, when target tracking is performed on the monochrome images of two channels, the union of the target sets detected in the two channel images can be taken as the total target set, and the tracking frames of all targets in the total target set are superimposed on the fused image. When target tracking is performed on the monochrome images of three channels, the union of the target sets detected in the three channel images can be taken as the total target set, and the tracking frames of all targets in the total target set are superimposed on the fused image.
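A minimal sketch of forming the total target set is given below, assuming each channel's tracker returns (target_id, box) pairs and that identifiers are comparable across channels; both assumptions are made for illustration only.

```python
def merge_channel_targets(per_channel_detections):
    """Union of the target sets detected on the selected monochrome channels.

    `per_channel_detections` maps a channel name to a list of (target_id, box)
    pairs; a target seen in several channels is kept once in this sketch."""
    total_targets = {}
    for detections in per_channel_detections.values():
        for target_id, box in detections:
            total_targets.setdefault(target_id, box)   # first occurrence wins
    return total_targets   # the boxes of every target here are overlaid on the fused image
```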

The image registration task aims to find the coordinate correspondence between two images by maximizing a similarity measure (or minimizing a distance measure) between them. Mature feature-based image registration techniques assume, to some extent, that the local brightness of the images has consistent response characteristics, so that local features can be detected and matched. In multispectral images, the local brightness of different channel images is not consistent, so it is difficult to register them accurately with feature-based techniques.

The inherent differences in local brightness between images of different wavebands pose a great challenge to multispectral image registration. FIG. 3 is a schematic diagram of the inconsistency of local brightness in a multispectral image. Panels (a) to (c) of FIG. 3 show how strongly local brightness and contrast vary between waveband images of a multispectral image: (a) is the multispectral image represented in RGB; (b) is the 560 nm waveband image; (c) is the 700 nm waveband image; (d) is the joint histogram of the brightness of the 560 nm and 700 nm images. In panel (d), any point (ic, ib) represents the number of pixel pairs for which gray level ic appears in the 700 nm image and gray level ib appears in the 560 nm image; lighter colors correspond to larger counts. No clear functional mapping is visible in the joint histogram, that is, the local brightness of the different channel images is not consistent. Because the spectral responses of region 32 and region 31 on the ceramic cat's face differ markedly in the 560 nm waveband but are similar in the 700 nm waveband, region 31 is clearly visible in the 560 nm channel image but hard to discern in the 700 nm channel image; the contrast between region 31 and region 32 is obvious in the 560 nm image and almost invisible in the 700 nm image. It is therefore difficult to register such images accurately with feature-based registration techniques.

In the embodiments of the present disclosure, registration can instead be performed according to the gradient of the difference image.

The difference image is the difference between the reference channel image and the coordinate-transformed version of the channel image to be registered. As a direct consequence of the gradient being sparsest at registration, the sum of the absolute values of the difference-image gradient when the images are registered is smaller than when they are not. Let f0 denote the reference channel image, (x, y) the two-dimensional spatial coordinates, and f_r and f_u the registered and unregistered channel images respectively. The above result can then be expressed as

$$\sum_{(x,y)\in\Omega_1}\left|\partial_d\big(f_r(x,y)-f_0(x,y)\big)\right|\;\le\;\sum_{(x,y)\in\Omega_2}\left|\partial_d\big(f_u(x,y)-f_0(x,y)\big)\right|,$$

where the operator ∂_d denotes the image derivative along direction d. The regions Ω1 and Ω2 denote the effective computation regions: Ω1 is the overlap region of f_r and f0, and Ω2 is the overlap region of f_u and f0. If u0, u_r and u_u denote the vectorized images f0, f_r and f_u, and the difference vectors over the respective overlap regions are written as e_r = u_r − u0 and e_u = u_u − u0, the above inequality can be expressed with the ℓ1 norm as

$$\left\|\partial_d e_r\right\|_1\;\le\;\left\|\partial_d e_u\right\|_1.$$

Summing both sides of the inequality over all directions d gives

$$\sum_{d}\left\|\partial_d e_r\right\|_1\;\le\;\sum_{d}\left\|\partial_d e_u\right\|_1.$$

In the expressions above, the ℓ1 norm naturally acts as a measure of the sparsity of the difference-image gradient. The sum of the absolute values of an image's gradient along each direction (the x and y directions in this scheme) is called the anisotropic total variation. Since this scheme is derived from the gradient distribution, the quantity

$$T(f_1,f_2)=\sum_{d\in\{x,y\}}\;\sum_{(x,y)\in\Omega}\left|\partial_d\big(f_1(x,y)-f_2(x,y)\big)\right|$$

can be called the total gradient of the difference image. The expressions above show that the total gradient of the difference image of registered multispectral images is always smaller than that of unregistered ones, so multispectral image registration can be characterized by minimizing the total gradient of the difference image. Therefore, the f1 for which T(f1, f2) is smallest can be taken as the registered image. In an exemplary implementation, registering and fusing the images of the multiple channels includes:

selecting, from the images of the multiple channels, the image of one channel as the reference image, and registering the image of each remaining channel, as the image to be registered, with the reference image as follows:

choosing as the registered image the f1 for which

$$T(f_1,f_2)=\sum_{d\in\{x,y\}}\;\sum_{(x,y)\in\Omega}\left|\partial_d\big(f_1(x,y)-f_2(x,y)\big)\right|$$

is smallest and the overlap region of f1 and f2 is non-empty, where f1 is the image obtained by applying a coordinate transformation to the image to be registered, f2 is the reference image, Ω is the overlap region of f1 and f2, and (x, y) are the two-dimensional spatial coordinates; and

fusing the registered image with the reference image.

The coordinate transformation applied to the image to be registered may include rotation, affine, scaling, translation and other transformations.
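A minimal NumPy sketch of the total-gradient criterion above follows, assuming both images have already been cropped to their overlap region Ω; the function name is an illustrative assumption.

```python
import numpy as np

def total_gradient_of_difference(f1, f2):
    """Sum, over the x and y directions, of the absolute gradient of (f1 - f2).

    f1 is the coordinate-transformed image to be registered and f2 the reference
    image, both restricted to the overlap region."""
    d = f1.astype(np.float64) - f2.astype(np.float64)
    gx = np.diff(d, axis=1)   # derivative along the x (horizontal) direction
    gy = np.diff(d, axis=0)   # derivative along the y (vertical) direction
    return np.abs(gx).sum() + np.abs(gy).sum()
```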

Compared with feature-based registration schemes, the registration method provided by this embodiment is more robust and achieves a better registration result.

In the computation of the total gradient of the difference image, the spatial summation is carried out over the overlap region of the images, so the measure is sensitive to changes in the overlap region. The value of T(f1, f2) becomes smaller as the overlap region shrinks, and drops to zero when the two images do not overlap at all. This means that directly minimizing the total gradient of the difference image yields a solution set that contains misregistration results with zero overlap. To avoid falling into such wrong solutions, the total gradient of the difference image can be normalized. To avoid zero-overlap solutions, minimizing the measure should require not only that the image content be registered but also that the overlap region provide as much image information as possible, that is, that the image energy in the overlap region be large. The normalized total gradient (NTG) of the difference image can therefore be defined as

$$\mathrm{NTG}(f_1,f_2)=\frac{\sum_{d}\sum_{(x,y)\in\Omega}\left|\partial_d\big(f_1(x,y)-f_2(x,y)\big)\right|}{\sum_{d}\sum_{(x,y)\in\Omega}\left|\partial_d f_1(x,y)\right|+\sum_{d}\sum_{(x,y)\in\Omega}\left|\partial_d f_2(x,y)\right|}.$$

In the right-hand side of this expression, the numerator is the total gradient of the difference image and the denominator is the total image energy that the overlap region can provide. Minimizing the value of the NTG requires that the images be registered while keeping the overlap region as large as possible, effectively preventing the registration result from falling into the zero-overlap error.
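A minimal NumPy sketch of the NTG as defined above is given below, again assuming that both inputs are already restricted to the overlap region; the function names are assumptions for illustration.

```python
import numpy as np

def total_gradient(img):
    # Sum of the absolute derivatives along the x and y directions.
    g = img.astype(np.float64)
    return np.abs(np.diff(g, axis=1)).sum() + np.abs(np.diff(g, axis=0)).sum()

def normalized_total_gradient(f1, f2):
    """Numerator: total gradient of the difference image; denominator: the image
    energy (total gradients of f1 and f2) that the overlap region provides."""
    num = total_gradient(f1.astype(np.float64) - f2.astype(np.float64))
    den = total_gradient(f1) + total_gradient(f2)
    return num / max(den, 1e-12)   # guard against a flat or empty overlap
```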

Taking the NTG as the measure, the registration problem for multispectral images can be converted into an NTG minimization problem, and the registration can be achieved with standard methods for solving minimization problems. That is, in an exemplary embodiment, registering and fusing the images of the multiple channels includes:

selecting, from the images of the multiple channels, the image of one channel as the reference image, and registering the image of each remaining channel, as the image to be registered, with the reference image as follows:

choosing as the registered image the f1 for which

$$\mathrm{NTG}(f_1,f_2)=\frac{\sum_{d}\sum_{(x,y)\in\Omega}\left|\partial_d\big(f_1(x,y)-f_2(x,y)\big)\right|}{\sum_{d}\sum_{(x,y)\in\Omega}\left|\partial_d f_1(x,y)\right|+\sum_{d}\sum_{(x,y)\in\Omega}\left|\partial_d f_2(x,y)\right|}$$

is smallest, where f1 is the image obtained by applying a coordinate transformation to the image to be registered, f2 is the reference image, and Ω is the overlap region of f1 and f2; and

fusing the registered image with the reference image.
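As a toy illustration of the minimization, the sketch below searches integer translations only (the embodiment also allows rotation, affine and scaling transformations) and reuses `normalized_total_gradient` from the previous sketch; it is not intended as an efficient solver.

```python
import numpy as np

def register_by_ntg(moving, reference, max_shift=16):
    """Brute-force NTG minimization over integer translations of `moving`.

    Both images are assumed to be single-channel arrays of the same size."""
    h, w = reference.shape
    best_shift, best_ntg = (0, 0), np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            # Overlap region of the shifted moving image and the reference image.
            ref_part = reference[max(0, dy):min(h, h + dy), max(0, dx):min(w, w + dx)]
            mov_part = moving[max(0, -dy):min(h, h - dy), max(0, -dx):min(w, w - dx)]
            if ref_part.size == 0:
                continue
            ntg = normalized_total_gradient(mov_part, ref_part)
            if ntg < best_ntg:
                best_ntg, best_shift = ntg, (dy, dx)
    return best_shift, best_ntg   # the aligned image is then fused with the reference
```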

As shown in FIG. 4, an embodiment of the present disclosure provides a multi-spectral multi-sensor collaborative processing device 40, including a memory 410 and a processor 420. The memory 410 stores a program that, when read and executed by the processor 420, implements the multi-spectral multi-sensor collaborative processing method of any of the above embodiments.

In an exemplary embodiment, referring to FIG. 1, the multi-spectral multi-sensor collaborative processing device may further include a lens 1, a beam-splitting prism 4 and multiple sensors 5, where:

the lens 1 is configured to receive external light and transmit it to the beam-splitting prism 4;

the beam-splitting prism 4 is configured to split the incident light into multiple monochromatic beams, which are respectively incident on the multiple sensors 5, with each monochromatic beam incident on one sensor 5; and

the sensors 5 are configured to convert the incident light into electrical signals and output them to the processor 420. The processor 420 generates the image of one channel from the signal of each sensor 5, and generates the images of multiple channels from the signals of the multiple sensors 5.

In an exemplary embodiment, as shown in FIG. 1, the multi-spectral multi-sensor collaborative processing device 40 may further include at least one of an IR filter 2 and an ND filter 3 arranged between the lens 1 and the beam-splitting prism 4.

An embodiment of the present disclosure provides a computer-readable storage medium storing one or more programs, which can be executed by one or more processors to implement the multi-spectral multi-sensor collaborative processing method of any of the above embodiments.

Those of ordinary skill in the art will understand that all or some of the steps of the methods disclosed above, and the functional modules/units of the systems and devices, may be implemented as software, firmware, hardware, or an appropriate combination thereof. In a hardware implementation, the division between the functional modules/units mentioned above does not necessarily correspond to the division of physical components; for example, one physical component may have multiple functions, or one function or step may be performed by several physical components in cooperation. Some or all of the components may be implemented as software executed by a processor, such as a digital signal processor or microprocessor, as hardware, or as an integrated circuit such as an application-specific integrated circuit. Such software may be distributed on computer-readable media, which may include computer storage media (or non-transitory media) and communication media (or transitory media). As is well known to those of ordinary skill in the art, the term computer storage media includes volatile and non-volatile, removable and non-removable media implemented in any method or technology for the storage of information such as computer-readable instructions, data structures, program modules or other data. Computer storage media include, but are not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disk (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to store the desired information and that can be accessed by a computer. In addition, as is well known to those of ordinary skill in the art, communication media typically embody computer-readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism, and may include any information delivery media.

Claims (10)

Translated from Chinese
1. A multi-spectral multi-sensor collaborative processing method, comprising acquiring images of a plurality of channels, wherein: the plurality of channels are channels formed by splitting incident light into multiple monochromatic beams; the image of each channel is a monochrome image obtained by capturing one monochromatic beam, and the images of different channels have different colors; the method further comprises: performing target detection, recognition and tracking according to the monochrome image, generating a tracking frame for the target, and saving information of the tracking frame; and registering and fusing the images of the plurality of channels to generate a fused image, and superimposing the saved information of the tracking frame onto the fused image.

2. The multi-spectral multi-sensor collaborative processing method according to claim 1, wherein superimposing the saved information of the tracking frame onto the fused image comprises: superimposing the latest saved tracking frame information onto the fused image.

3. The multi-spectral multi-sensor collaborative processing method according to claim 1, wherein the channels comprise a red channel, and performing target detection, recognition and tracking according to the monochrome image comprises: when the current ambient light intensity is less than a preset light intensity threshold, selecting the near-infrared image of the red channel for target detection, recognition and tracking.

4. The multi-spectral multi-sensor collaborative processing method according to claim 1, wherein performing target detection, recognition and tracking according to the monochrome image comprises: when the current ambient light intensity is greater than or equal to a preset light intensity threshold, acquiring the sum of the pixel values of the image of each channel; and when there is a first channel whose image pixel-value sum is greater than that of every other channel, and the difference between the pixel-value sum of the first channel's image and that of at least one other channel's image is greater than a preset value, selecting the monochrome images of one or more channels other than the first channel for target detection, recognition and tracking.

5. The multi-spectral multi-sensor collaborative processing method according to claim 1, wherein performing target detection, recognition and tracking according to the monochrome image comprises: when the current ambient light intensity is greater than or equal to a preset light intensity threshold, acquiring the sum of the pixel values of the image of each channel; and when, for any first channel and second channel, the difference between the pixel-value sums of the two channels' images is less than a preset value, selecting the monochrome images of all channels for target detection, recognition and tracking.
6. The multi-spectral multi-sensor collaborative processing method according to claim 4 or 5, further comprising: when the monochrome images of multiple channels are selected for target detection, recognition and tracking, taking the union of the targets detected in the monochrome images of the multiple channels as the targets of the fused image; and superimposing the saved information of the tracking frame onto the fused image comprises: superimposing the saved tracking frame information of the targets detected in the monochrome images of the multiple channels onto the fused image.

7. The multi-spectral multi-sensor collaborative processing method according to any one of claims 1 to 5, wherein registering and fusing the images of the plurality of channels comprises: for the images of the plurality of channels, selecting the image of one channel as a reference image, and registering the image of each remaining channel, as an image to be registered, with the reference image as follows:

selecting, as the registered image, the image f that minimizes the normalized total gradient

NTG(f, g) = \frac{\sum_{(x,y)\in\Omega}\big(|\partial_x (f-g)| + |\partial_y (f-g)|\big)}{\sum_{(x,y)\in\Omega}\big(|\partial_x f| + |\partial_y f|\big) + \sum_{(x,y)\in\Omega}\big(|\partial_x g| + |\partial_y g|\big)}

and for which the overlapping region of f and g is non-zero, where \partial_x and \partial_y denote the derivative operators along the x and y directions, f is the image obtained by applying the coordinate transformation to the image to be registered, g is the reference image, \Omega is the overlapping region of f and g, and (x, y) are two-dimensional spatial coordinates;

or, selecting, as the registered image, the f that minimizes the normalized total gradient of f and g over their overlapping region \Omega;

and fusing the registered image with the reference image.

8. A multi-spectral multi-sensor collaborative processing device, comprising a memory and a processor, wherein the memory stores a program which, when read and executed by the processor, implements the multi-spectral multi-sensor collaborative processing method according to any one of claims 1 to 7.

9. The multi-spectral multi-sensor collaborative processing device according to claim 8, further comprising a lens, a dichroic prism and a plurality of sensors, wherein: the lens is configured to receive external light and transmit it to the dichroic prism; the dichroic prism is configured to split the incident light into multiple monochromatic beams, which are respectively incident on the plurality of sensors, each monochromatic beam being incident on one sensor; and the sensor is configured to convert the incident light into an electrical signal and output it to the processor.

10. A computer-readable storage medium storing one or more programs, the one or more programs being executable by one or more processors to implement the multi-spectral multi-sensor collaborative processing method according to any one of claims 1 to 7.
CN202210676732.7A | 2022-06-16 | 2022-06-16 | A multi-spectral multi-sensor collaborative processing method and device, and storage medium | Active | CN114782502B (en)

Priority Applications (3)

Application Number | Priority Date | Filing Date | Title
CN202210676732.7A | 2022-06-16 | 2022-06-16 | A multi-spectral multi-sensor collaborative processing method and device, and storage medium
PCT/CN2022/139450 (WO2023240963A1) | 2022-06-16 | 2022-12-16 | Multispectral multi-sensor synergistic processing method and apparatus, and storage medium
EP22946629.7A (EP4542488A1) | 2022-06-16 | 2022-12-16 | Multispectral multi-sensor synergistic processing method and apparatus, and storage medium

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN202210676732.7A | 2022-06-16 | 2022-06-16 | A multi-spectral multi-sensor collaborative processing method and device, and storage medium

Publications (2)

Publication Number | Publication Date
CN114782502A (en) | 2022-07-22
CN114782502B (en) | 2022-11-04

Family

ID=82421217

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN202210676732.7A | Active | CN114782502B (en) | 2022-06-16 | 2022-06-16 | A multi-spectral multi-sensor collaborative processing method and device, and storage medium

Country Status (3)

Country | Link
EP (1) | EP4542488A1 (en)
CN (1) | CN114782502B (en)
WO (1) | WO2023240963A1 (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN114782502B (en)* | 2022-06-16 | 2022-11-04 | 浙江宇视科技有限公司 | A multi-spectral multi-sensor collaborative processing method and device, and storage medium
CN119445340B (en)* | 2025-01-13 | 2025-03-18 | 湖南工商大学 | Multispectral small target reasoning acceleration method and multispectral small target reasoning acceleration system based on segmentation calculation
CN120259390B (en)* | 2025-06-03 | 2025-08-19 | 绍兴颂明医疗科技有限公司 | Automatic registration and spatial alignment method, system, electronic device and storage medium based on multi-channel fluorescence images

Citations (2)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN108419062A (en)* | 2017-02-10 | 2018-08-17 | 杭州海康威视数字技术股份有限公司 | Image fusion device and image fusion method
CN111681171A (en)* | 2020-06-15 | 2020-09-18 | 中国人民解放军军事科学院国防工程研究院 | Full-color and multi-spectral image high-fidelity fusion method and device based on block matching

Family Cites Families (12)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
US4897739A (en)*1986-04-071990-01-30Canon Kabushiki KaishaMulti-channel recording apparatus using a plurality of heads in turn
CN101252677B (en)*2007-10-192010-05-19西安交通大学 A Target Tracking Method Based on Multispectral Image Sensor
CN103500341A (en)*2013-09-162014-01-08安徽工程大学Recognition device used for road signboard
CN109196518B (en)*2018-08-232022-06-07合刃科技(深圳)有限公司Gesture recognition method and device based on hyperspectral imaging
US10956704B2 (en)*2018-11-072021-03-23Advanced New Technologies Co., Ltd.Neural networks for biometric recognition
CN109740563B (en)*2019-01-142021-02-12湖南众智君赢科技有限公司Moving object detection method for video monitoring
CN110827314B (en)*2019-09-272020-10-23深圳云天励飞技术有限公司Single-target tracking method and related equipment
CN112308883A (en)*2020-11-262021-02-02哈尔滨工程大学 A multi-vessel fusion tracking method based on visible light and infrared images
CN113658216B (en)*2021-06-242024-07-19北京理工大学Remote sensing target tracking method based on multistage self-adaptive KCF and electronic equipment
CN114419741B (en)*2022-03-152022-07-19深圳市一心视觉科技有限公司Living body detection method, living body detection device, electronic apparatus, and storage medium
CN114511595B (en)*2022-04-192022-08-23浙江宇视科技有限公司Multi-mode cooperation and fusion target tracking method, device, system and medium
CN114782502B (en)*2022-06-162022-11-04浙江宇视科技有限公司 A multi-spectral multi-sensor collaborative processing method and device, and storage medium

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN108419062A (en)* | 2017-02-10 | 2018-08-17 | 杭州海康威视数字技术股份有限公司 | Image fusion device and image fusion method
CN111681171A (en)* | 2020-06-15 | 2020-09-18 | 中国人民解放军军事科学院国防工程研究院 | Full-color and multi-spectral image high-fidelity fusion method and device based on block matching

Also Published As

Publication number | Publication date
WO2023240963A1 (en) | 2023-12-21
EP4542488A1 (en) | 2025-04-23
CN114782502A (en) | 2022-07-22

Similar Documents

Publication | Publication Date | Title
CN114782502B (en) A multi-spectral multi-sensor collaborative processing method and device, and storage medium
US11546576B2 (en)Systems and methods for dynamic calibration of array cameras
US8619128B2 (en)Systems and methods for an imaging system using multiple image sensors
US9898856B2 (en)Systems and methods for depth-assisted perspective distortion correction
JP7024736B2 (en) Image processing equipment, image processing method, and program
US20130077825A1 (en)Image processing apparatus
CN107360354B (en)Photographing method, photographing device, mobile terminal and computer-readable storage medium
CN112689850A (en)Image processing method, image processing apparatus, image forming apparatus, removable carrier, and storage medium
CN112241668A (en)Image processing method, device and equipment
Martinez et al.Kinect Unleashed: Getting Control over High Resolution Depth Maps.
CN112241935A (en)Image processing method, device and equipment and storage medium
KR20140026078A (en)Apparatus and method for extracting object
JP6825299B2 (en) Information processing equipment, information processing methods and programs
US9761275B2 (en)System and method for spatiotemporal image fusion and integration
KR101718309B1 (en)The method of auto stitching and panoramic image genertation using color histogram
CN115714919A (en)Method for camera control, image signal processor and apparatus
JP2018055591A (en) Information processing apparatus, information processing method, and program
CN112907643A (en)Target detection method and device
Gustafsson et al.Spectral cube reconstruction for a high resolution hyperspectral camera based on a linear variable filter
CN115343699B (en)Target detection method, device, medium and terminal for multi-sensor information fusion
US20240420285A1 (en)Real-time blind registration of disparate video image streams
US20240420348A1 (en)Calibration for real-time blind registration of disparate video image streams
JP7450668B2 (en) Facial recognition methods, devices, systems, electronic devices and readable storage media
JP3783881B2 (en) Vehicle detection method and vehicle detection apparatus using the same
WO2022074554A1 (en)Method of generating an aligned monochrome image and related computer program

Legal Events

Date | Code | Title | Description
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant
