CN109360179A - Image fusion method, device and readable storage medium - Google Patents

Info

Publication number
CN109360179A
Authority
CN
China
Prior art keywords
image
map
fusion
pixel
structural similarity
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201811214128.2A
Other languages
Chinese (zh)
Other versions
CN109360179B (en)
Inventor
程永翔
刘坤
于晟焘
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Maritime University
Original Assignee
Shanghai Maritime University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Maritime University
Priority to CN201811214128.2A
Publication of CN109360179A
Application granted
Publication of CN109360179B
Legal status: Active
Anticipated expiration


Abstract

The invention discloses an image fusion method, a device and a readable storage medium, applied to the technical field of image processing. The image fusion method includes: obtaining a registered first image and second image; inputting them into a trained convolutional neural network, which classifies them and outputs a first score map and a second score map; comparing corresponding pixels of the first score map and the second score map to obtain a binary map; obtaining a first fusion image; calculating a first structural similarity map and a second structural similarity map; obtaining a difference map between the first structural similarity map and the second structural similarity map; and obtaining a second fusion image based on the difference map, the first image and the second image. By applying embodiments of the invention, a fusion image of infrared and visible light images is obtained through a dual-channel convolutional neural network. As a deep learning algorithm, the convolutional neural network selects image features automatically, remedies the one-sidedness of hand-crafted feature extraction, and avoids the defects of existing infrared and visible light image fusion methods.

Description

Image fusion method, device and readable storage medium
Technical field
The present invention relates to the technical field of image fusion, and in particular to an image fusion method, a device and a readable storage medium.
Background technique
An infrared sensor is sensitive to the infrared thermal characteristics of the target area; it can work around the clock and overcome poor illumination to find targets, but its images often lack rich detail and have blurred backgrounds. Visible light images contain richer texture features and detail, but their imaging conditions place high demands on illumination. If the complementary information of the infrared image and the visible light image is effectively fused, the resulting fusion image is richer in information and more robust, laying a good foundation for subsequent image segmentation, detection and recognition. Infrared and visible light image fusion technology is therefore widely used in the military and security monitoring fields.
Image fusion is divided into pixel level, feature level and decision level. Pixel-level fusion preserves the most basic and abundant image information. Image fusion methods based on multi-scale transform (MST) and sparse representation (SR) are the most common pixel-level approaches, but their image feature extractors must be designed by hand and run inefficiently; moreover, the single hand-crafted feature they extract does not apply well to all kinds of complex image environments and is prone to misjudgment in regions of uniform gray level.
Summary of the invention
Embodiments of the present invention aim to provide an image fusion method, a device and a readable storage medium that obtain a fusion image of infrared and visible light images through a dual-channel convolutional neural network. As a deep learning algorithm, the convolutional neural network selects image features automatically, remedies the one-sidedness of hand-crafted feature extraction, and avoids the defects of existing infrared and visible light image fusion methods. The specific technical solution is as follows:
To achieve the above objectives, an embodiment of the invention provides an image fusion method, comprising:
registering an infrared image with a visible light image to obtain a registered first image and second image, wherein the first image is a partial image of the infrared image and the second image is a partial image of the visible light image;
inputting the first image and the second image into a trained convolutional neural network, which classifies them and outputs a first score map and a second score map;
comparing corresponding pixels of the first score map and the second score map to obtain a binary map;
obtaining a first fusion image based on the binary map, the first image and the second image;
calculating a first structural similarity map between the first image and the first fusion image, and a second structural similarity map between the second image and the first fusion image;
obtaining a difference map between the first structural similarity map and the second structural similarity map;
obtaining a second fusion image based on the difference map, the first image and the second image.
In one implementation, the step of comparing corresponding pixels of the first score map and the second score map to obtain a binary map comprises:
for a first pixel on the first score map, judging whether its value is greater than that of a second pixel, wherein the first pixel is any pixel on the first score map and the second pixel is the pixel on the second score map corresponding to the first pixel;
if so, setting the value of a third pixel in the binary map to 1; otherwise setting it to 0, wherein the third pixel is the pixel in the binary map at the position corresponding to the first pixel.
In one implementation, the first fusion image is expressed as:
F1(x, y) = D1(x, y)·A(x, y) + (1 − D1(x, y))·B(x, y)
where D1 is the binary map, A is the first image, B is the second image, F1 is the first fusion image, and x, y are the coordinates of a pixel.
In one implementation, the step of obtaining the difference map between the first structural similarity map and the second structural similarity map comprises:
obtaining the difference between the first structural similarity map and the second structural similarity map;
taking the absolute value of the difference as the difference map between the first structural similarity map and the second structural similarity map.
In one implementation, the step of obtaining the second fusion image based on the difference map, the first image and the second image comprises:
removing the regions of the difference map that are unrelated to the target, based on the target region, to obtain a target feature extraction image;
obtaining the second fusion image according to the target feature extraction image, the first image and the second image.
In one implementation, the second fusion image is expressed as:
F2(x, y) = D2(x, y)·A(x, y) + (1 − D2(x, y))·B(x, y)
where D2 is the target feature extraction image, A is the first image, B is the second image, x, y are the coordinates of a pixel, and F2 is the second fusion image.
In other words, the binary map serves as a decision map and a weighted fusion rule produces the initial fusion image; finally, SSIM is used to extract a saliency map of the target region, which is fused again to obtain the final fusion image.
In one implementation, the training of the convolutional neural network comprises:
extracting a first number of 32×32 original images from a first image set, and adding a second number of visible light images from a second image set;
converting the original images and the visible light images into grayscale images and cutting these grayscale images into 16×16 sub-blocks, which form the high-resolution image set;
applying Gaussian blur to the first number of original images in the first image set, adding a second number of infrared images from the second image set, and cutting the first number of original images and the second number of infrared images into 16×16 sub-blocks, which form the blurred image set;
training the convolutional neural network structure on the blurred image set and the high-resolution image set thus produced.
In one implementation, the convolutional neural network is a dual-channel network. Each channel consists of a five-layer convolutional neural network comprising three convolutional layers, one max-pooling layer and one fully connected layer; the final output layer is a softmax classifier.
In addition, an embodiment of the invention provides an image fusion device, comprising:
a registration module, configured to register an infrared image with a visible light image to obtain a registered first image and second image, wherein the first image is a partial image of the infrared image and the second image is a partial image of the visible light image;
a classification module, configured to input the first image and the second image into a trained convolutional neural network, which classifies them and outputs a first score map and a second score map;
a comparison module, configured to compare corresponding pixels of the first score map and the second score map to obtain a binary map;
a first fusion module, configured to obtain a first fusion image based on the binary map, the first image and the second image;
a calculation module, configured to calculate a first structural similarity map between the first image and the first fusion image, and a second structural similarity map between the second image and the first fusion image;
an obtaining module, configured to obtain a difference map between the first structural similarity map and the second structural similarity map;
a second fusion module, configured to obtain a second fusion image based on the difference map, the first image and the second image.
An embodiment of the invention further provides a readable storage medium on which a computer program is stored; when the program is executed by a processor, the steps of any of the above image fusion methods are implemented.
With the image fusion method, device and readable storage medium provided by embodiments of the invention, a fusion image of infrared and visible light images is obtained through a convolutional neural network that selects image features automatically, remedying the one-sidedness of hand-crafted feature extraction and avoiding the defects of existing infrared and visible light image fusion methods. Because binary segmentation does not divide the target region from the background with complete accuracy, shadows can appear in the later fusion image; a salient target region map is therefore obtained from the difference between the structural similarities of the infrared and visible source images to the initial fusion image, and a second fusion step improves the fusion quality. The saliency-based fusion keeps the highlighted target region intact and improves the visual quality of the fusion image, so that it better serves subsequent image understanding and recognition.
Detailed description of the invention
Fig. 1 is a schematic flowchart of the image fusion method provided by an embodiment of the present invention;
Fig. 2 is a first effect diagram provided by an embodiment of the present invention;
Fig. 3 is a second effect diagram provided by an embodiment of the present invention;
Fig. 4 is a third effect diagram provided by an embodiment of the present invention;
Fig. 5 is a fourth effect diagram provided by an embodiment of the present invention;
Fig. 6 is a fifth effect diagram provided by an embodiment of the present invention;
Fig. 7 is a sixth effect diagram provided by an embodiment of the present invention;
Fig. 8 is a seventh effect diagram provided by an embodiment of the present invention;
Fig. 9 is an eighth effect diagram provided by an embodiment of the present invention;
Fig. 10 is a ninth effect diagram provided by an embodiment of the present invention;
Fig. 11 is a tenth effect diagram provided by an embodiment of the present invention.
Specific embodiment
The technical solutions in the embodiments of the present invention will now be described clearly and completely with reference to the drawings. The described embodiments are only some, not all, of the embodiments of the invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.
It should be noted that, in image processing, the thermal radiation of the target in an infrared image is strong, and its gray levels differ greatly from, or are even opposite to, those of the visible light image. The infrared background has low gray levels and no obvious thermal contrast; compared with the visible light image it lacks spectral information, though it still contains detail. Therefore, only by preserving as much of the original images' information as possible during fusion can the fusion effect be further improved.
Referring to Fig. 1, an embodiment of the invention provides an image fusion method comprising the following steps:
S101, registering an infrared image with a visible light image to obtain a registered first image and second image, wherein the first image is a partial image of the infrared image and the second image is a partial image of the visible light image.
It should be noted that geometric registration refers to the operation of geometrically transforming images (data) of the same area acquired at different times, in different wavebands, or by different remote sensing systems, so that corresponding image points coincide completely in position and orientation. The specific registration process is prior art and is not repeated here.
It will be understood that the sliding window is a commonly used image processing tool; its size may be 3×3, 5×5, 16×16 and so on, and the embodiment of the invention does not specifically limit it here.
Illustratively, taking the first image as an example, a 16×16 sliding window may start from the first pixel in the upper-left corner as its first central pixel and then move step by step. In this way every pixel of the first image has the chance to be a central pixel, and likewise for the second image, so the structural similarity between any central pixel of the first image and the corresponding central pixel of the second image can be computed on this principle.
The sliding window is defined with size 16×16 and stride 1. Sliding it from left to right and top to bottom over the registered infrared and visible light images yields the infrared sub-block image VA as the first image, as shown in Fig. 2, and the visible light sub-block image VB as the second image, as shown in Fig. 3.
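As an illustration only, this sliding-window decomposition can be sketched in a few lines of NumPy; the function name and the stacking of patches are choices made here, not part of the patent:

```python
import numpy as np

def extract_patches(image, win=16, stride=1):
    """Slide a win x win window over a grayscale image from the top-left
    corner, left to right and top to bottom, collecting one sub-block per
    valid window position (stride 1, as in the embodiment)."""
    h, w = image.shape
    patches = [
        image[i:i + win, j:j + win]
        for i in range(0, h - win + 1, stride)
        for j in range(0, w - win + 1, stride)
    ]
    return np.stack(patches)  # shape: (num_windows, win, win)

# VA = extract_patches(infrared_registered)   # infrared sub-blocks (Fig. 2)
# VB = extract_patches(visible_registered)    # visible sub-blocks (Fig. 3)
```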
S102, inputting the first image and the second image into the trained convolutional neural network, which classifies them and outputs a first score map and a second score map.
It should be noted that a convolutional neural network is a deep feed-forward neural network in machine learning that has been successfully applied to image recognition. Its artificial neurons respond to surrounding units; it can perform large-scale image processing and includes convolutional layers and pooling layers.
In one implementation, the training of the convolutional neural network comprises: extracting a first number of 32×32 original images from a first image set and adding a second number of visible light images from a second image set; converting the original images and the visible light images into grayscale images and cutting them into 16×16 sub-blocks to form the high-resolution image set; applying Gaussian blur to the first number of original images in the first image set, adding a second number of infrared images from the second image set, and cutting the first number of original images and the second number of infrared images into 16×16 sub-blocks to form the blurred image set.
Illustratively, 2000 clear 32×32 original images are extracted from the Cifar-10 image set and 200 visible light images from the TNO_Image_Fusion_Dataset image set are added; all are converted into grayscale images and cut entirely into 16×16 sub-blocks to form the high-resolution image set. Next, Gaussian blur is applied to all sub-blocks from Cifar-10 (since the infrared image background has lower resolution than the visible light image), and 200 infrared images from the TNO_Image_Fusion_Dataset image set (likewise cut entirely into 16×16 sub-blocks) are added to form the blurred image set.
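A minimal sketch of this training-set construction follows, assuming the images are already loaded as grayscale NumPy arrays; the blur strength sigma is an assumption, since the text specifies only "Gaussian blur":

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def to_subblocks(images, block=16):
    """Cut each grayscale image into non-overlapping block x block sub-blocks."""
    out = []
    for img in images:
        h, w = img.shape
        for i in range(0, h - block + 1, block):
            for j in range(0, w - block + 1, block):
                out.append(img[i:i + block, j:j + block])
    return np.stack(out)

def build_sets(cifar_gray, tno_visible, tno_infrared, sigma=2.0):
    """cifar_gray: (2000, 32, 32) grayscale Cifar-10 originals;
    tno_visible / tno_infrared: grayscale TNO images. sigma is assumed."""
    high_res = np.concatenate([to_subblocks(cifar_gray),
                               to_subblocks(tno_visible)])
    blurred = np.stack([gaussian_filter(im, sigma) for im in cifar_gray])
    blurry = np.concatenate([to_subblocks(blurred),
                             to_subblocks(tno_infrared)])
    return high_res, blurry  # high-resolution set, blurred set
```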
A dual-channel network is used; each channel consists of a five-layer convolutional neural network with three convolutional layers, one max-pooling layer and one fully connected layer, and the final output layer is a softmax classifier. The input image block size is 16×16; the convolution kernel size is set to 3×3 with stride 1; the max-pooling kernel is 2×2 with stride 2; the activation function is ReLU. Momentum and weight decay are set to 0.9 and 0.0005, and the learning rate is 0.0001.
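One channel of such a network could look as follows in PyTorch (chosen here for illustration); the feature widths and the position of the pooling layer are assumptions, since the text fixes only the layer counts, kernel sizes, strides and optimizer settings:

```python
import torch
import torch.nn as nn

class Branch(nn.Module):
    """One channel: 3 convolutional layers (3x3 kernels, stride 1),
    1 max-pooling layer (2x2, stride 2), 1 fully connected layer,
    softmax output, ReLU activations, 16x16 input blocks."""
    def __init__(self, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=1, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=1, padding=1), nn.ReLU(),
            nn.MaxPool2d(2, stride=2),                 # 16x16 -> 8x8
            nn.Conv2d(64, 64, 3, stride=1, padding=1), nn.ReLU(),
        )
        self.fc = nn.Linear(64 * 8 * 8, n_classes)

    def forward(self, x):                              # x: (B, 1, 16, 16)
        return torch.softmax(self.fc(self.features(x).flatten(1)), dim=1)

# Dual-channel network: one branch per modality (weight sharing is not
# specified in the patent, so independent branches are assumed here).
branch_ir, branch_vis = Branch(), Branch()
optimizer = torch.optim.SGD(
    list(branch_ir.parameters()) + list(branch_vis.parameters()),
    lr=1e-4, momentum=0.9, weight_decay=5e-4)          # values from the text
```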
It will be understood that when the first image is input into the trained convolutional neural network, the network scores each pixel of the first image, so that after all pixels have been processed the first score map SA is obtained; similarly, the second score map SB corresponding to the second image is obtained. The detailed process is shown in Fig. 4: the network outputs the image after two convolutions, max pooling, a further convolution and a full connection.
S103, comparing corresponding pixels of the first score map and the second score map to obtain a binary map.
Specifically, for a first pixel on the first score map, it is judged whether its value is greater than that of a second pixel, wherein the first pixel is any pixel on the first score map and the second pixel is the pixel on the second score map corresponding to the first pixel. If so, the value of the third pixel in the binary map is 1; otherwise it is 0, wherein the third pixel is the pixel in the binary map at the position corresponding to the first pixel.
For the binary map T, the first score map and the second score map are compared pixel by pixel: for any pixel at position (m, n), if the value of SA is greater than the corresponding value of SB, then the binary map takes the value 1 at (m, n); otherwise it takes the value 0:
T(m, n) = 1 if SA(m, n) > SB(m, n), and T(m, n) = 0 otherwise.
Illustratively, based on Fig. 2 and Fig. 3, the binary map obtained through the neural network of Fig. 4 is shown in Fig. 5. A binary map of target region and background region is thus obtained, in which the white area indicates the target region of the infrared image and the black area indicates the background region; this binary map can serve as the decision map for image fusion.
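The pixel-wise comparison of S103 reduces to a single vectorized expression; this sketch assumes the two score maps are NumPy arrays of equal shape:

```python
import numpy as np

def binary_decision_map(s_a, s_b):
    """T(m, n) = 1 where the infrared score map S_A exceeds the visible
    score map S_B, else 0; white (1) marks the infrared target region."""
    return (s_a > s_b).astype(np.float32)
```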
S104, obtaining the first fusion image based on the binary map, the first image and the second image.
Weighting the first image and the second image according to the binary map gives the initial fusion result. The purpose of initial fusion is to integrate the target region of the infrared image and the background region of the high-resolution visible light image into one image; based on Fig. 2, Fig. 3 and Fig. 5, the first fusion image shown in Fig. 6 is obtained.
In one implementation, the first fusion image is expressed as:
F1(x, y) = D1(x, y)·A(x, y) + (1 − D1(x, y))·B(x, y)
where D1 is the binary map, A is the first image, B is the second image, F1 is the first fusion image, and x, y are the coordinates of a pixel.
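The weighted fusion rule then applies directly per pixel; a sketch, reusing the decision map from above:

```python
def weighted_fuse(d, a, b):
    """F(x, y) = D(x, y) * A(x, y) + (1 - D(x, y)) * B(x, y). With a binary
    D this pastes the infrared target region A into the visible background B."""
    return d * a + (1.0 - d) * b

# F1 = weighted_fuse(binary_decision_map(S_A, S_B), A, B)  # first fusion
```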
S105, calculating the first structural similarity map between the first image and the first fusion image, and calculating the second structural similarity map between the second image and the first fusion image.
There is a strong correlation between the pixels of the infrared and visible light images, and this correlation carries a large amount of structural information. Structural similarity, SSIM (structural similarity index), is an index used to assess image quality. From the perspective of image composition, the structural similarity index defines structural information in terms of luminance and contrast, thereby reflecting the structure of objects in the image. For two images C and D, the similarity measure function of the two images is defined as:
SSIM(C, D) = [l(C, D)]^α · [c(C, D)]^β · [s(C, D)]^γ
l(C, D) = (2μaμb + C1) / (μa² + μb² + C1)
c(C, D) = (2σaσb + C2) / (σa² + σb² + C2)
s(C, D) = (σab + C3) / (σaσb + C3)
where μa, μb are the mean gray levels of images C and D, σa, σb are the standard deviations of images C and D, and σab is their covariance; C1, C2, C3 are small positive constants whose purpose is to avoid instability when a denominator approaches 0, and α, β, γ > 0 are weights that adjust the luminance, contrast and structure functions.
Therefore, the first structural similarity map SAF between the first image A and the first fusion image F1 is calculated; illustratively, based on Fig. 2 and Fig. 6, the first structural similarity map shown in Fig. 7 is obtained. The second structural similarity map SBF between the second image B and the first fusion image F1 is calculated; based on Fig. 3 and Fig. 6, the second structural similarity map shown in Fig. 8 is obtained.
S106, obtaining the difference map between the first structural similarity map and the second structural similarity map.
In one implementation, this step comprises obtaining the difference between the first structural similarity map and the second structural similarity map, and taking its absolute value as the difference map. Specifically, the difference map is:
S = |SAF − SBF|
where SAF is the first structural similarity map, SBF is the second structural similarity map, and S is the difference map. Illustratively, the difference map obtained from Fig. 7 and Fig. 8 is shown in Fig. 9.
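Steps S105 and S106 can be sketched with scikit-image, whose structural_similarity returns the local SSIM map when full=True; note that recent scikit-image releases require an odd win_size, so 15 is used here to approximate the 16×16 window of the embodiment:

```python
import numpy as np
from skimage.metrics import structural_similarity

def ssim_difference_map(a, b, f1, win=15):
    """Local SSIM maps of each registered source image against the first
    fusion image, and their absolute difference S = |S_AF - S_BF|."""
    _, s_af = structural_similarity(a, f1, win_size=win, full=True,
                                    data_range=255)
    _, s_bf = structural_similarity(b, f1, win_size=win, full=True,
                                    data_range=255)
    return np.abs(s_af - s_bf)
```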
S107, obtaining the second fusion image based on the difference map, the first image and the second image.
Since the first fusion image obtained by initial fusion does not divide the target region from the background region with complete accuracy, shadows appear in the later fusion image; a second fusion step is therefore taken to improve the fusion quality.
In one implementation, this step comprises: removing the regions of the difference map unrelated to the target, based on the target region, to obtain a target feature extraction image; and obtaining the second fusion image according to the target feature extraction image, the first image and the second image.
Illustratively, based on the difference map shown in Fig. 9, the target feature extraction image shown in Fig. 10 is obtained.
In one implementation, the second fusion image is expressed as:
F2(x, y) = D2(x, y)·A(x, y) + (1 − D2(x, y))·B(x, y)
where D2 is the target feature extraction image, A is the first image, B is the second image, x, y are the coordinates of a pixel, and F2 is the second fusion image.
The second fusion can be regarded as infrared and visible light image fusion based on saliency target extraction. The difference map S contains the salient region of the infrared image. Morphological image processing is used to remove the regions of the difference map unrelated to the target, giving the target feature extraction map. It will be understood that the target region is the infrared map of the target person extracted by the infrared sensor; enhancing the saliency of the target region therefore improves the detail retained in the fusion image. As shown in Fig. 11, the second fusion image is obtained based on Fig. 10, Fig. 2 and Fig. 3.
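The morphological clean-up and second fusion might be sketched as follows with OpenCV; the threshold and the structuring-element size are assumptions, since the text names only "morphological image processing":

```python
import cv2
import numpy as np

def extract_target_map(diff_map, thresh=0.5, ksize=5):
    """Binarize the SSIM difference map, then use morphological opening and
    closing to remove regions unrelated to the target (Fig. 9 -> Fig. 10)."""
    mask = (diff_map > thresh).astype(np.uint8)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (ksize, ksize))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)   # drop small blobs
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)  # fill small holes
    return mask.astype(np.float32)

# D2 = extract_target_map(S)
# F2 = weighted_fuse(D2, A, B)   # second fusion image (Fig. 11)
```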
Using the idea of binary segmentation, a fusion image of infrared and visible light images is obtained through the dual-channel convolutional neural network. As a deep learning algorithm, the convolutional neural network selects image features automatically and remedies the one-sidedness of feature extraction, avoiding the defects of existing infrared and visible light image fusion methods (most of which require hand-designed feature extraction, and whose single extracted feature is easily lost). Secondly, because binary segmentation does not divide the target region from the background with complete accuracy, shadows appear in the later fusion image; a salient target region map is obtained from the difference between the structural similarities of the infrared and visible source images to the initial fusion image, and the second fusion step improves the fusion quality. The saliency-based fusion keeps the highlighted target region intact and improves the visual quality of the fusion image, so that it better serves subsequent image understanding and recognition.
In addition, an embodiment of the invention provides an image fusion device, comprising:
a registration module, configured to register an infrared image with a visible light image to obtain a registered first image and second image, wherein the first image is a partial image of the infrared image and the second image is a partial image of the visible light image;
a classification module, configured to input the first image and the second image into a trained convolutional neural network, which classifies them and outputs a first score map and a second score map;
a comparison module, configured to compare corresponding pixels of the first score map and the second score map to obtain a binary map;
a first fusion module, configured to obtain a first fusion image based on the binary map, the first image and the second image;
a calculation module, configured to calculate a first structural similarity map between the first image and the first fusion image, and a second structural similarity map between the second image and the first fusion image;
an obtaining module, configured to obtain a difference map between the first structural similarity map and the second structural similarity map;
a second fusion module, configured to obtain a second fusion image based on the difference map, the first image and the second image.
An embodiment of the invention further provides a readable storage medium on which a computer program is stored; when the program is executed by a processor, the steps of any of the above image fusion methods are implemented.
The foregoing is merely a preferred embodiment of the present invention and is not intended to limit its protection scope. Any modification, equivalent replacement or improvement made within the spirit and principles of the present invention is included in the protection scope of the present invention.

Claims (10)

1. An image fusion method, comprising:
registering an infrared image with a visible light image to obtain a registered first image and second image, wherein the first image is a partial image of the infrared image and the second image is a partial image of the visible light image;
inputting the first image and the second image into a trained convolutional neural network, which classifies them and outputs a first score map and a second score map;
comparing corresponding pixels of the first score map and the second score map to obtain a binary map;
obtaining a first fusion image based on the binary map, the first image and the second image;
calculating a first structural similarity map between the first image and the first fusion image, and a second structural similarity map between the second image and the first fusion image;
obtaining a difference map between the first structural similarity map and the second structural similarity map;
obtaining a second fusion image based on the difference map, the first image and the second image.

2. The image fusion method according to claim 1, wherein the step of comparing corresponding pixels of the first score map and the second score map to obtain a binary map comprises:
for a first pixel on the first score map, judging whether its value is greater than that of a second pixel, wherein the first pixel is any pixel on the first score map and the second pixel is the pixel on the second score map corresponding to the first pixel;
if so, setting the value of a third pixel in the binary map to 1; otherwise setting it to 0, wherein the third pixel is the pixel in the binary map at the position corresponding to the first pixel.

3. The image fusion method according to claim 1 or 2, wherein the first fusion image is expressed as:
F1(x, y) = D1(x, y)·A(x, y) + (1 − D1(x, y))·B(x, y)
where D1 is the binary map, A is the first image, B is the second image, F1 is the first fusion image, and x, y are the coordinates of a pixel.

4. The image fusion method according to claim 1 or 2, wherein the step of obtaining the difference map between the first structural similarity map and the second structural similarity map comprises:
obtaining the difference between the first structural similarity map and the second structural similarity map;
taking the absolute value of the difference as the difference map between the first structural similarity map and the second structural similarity map.

5. The image fusion method according to claim 1 or 2, wherein the step of obtaining the second fusion image based on the difference map, the first image and the second image comprises:
removing the regions of the difference map unrelated to the target, based on the target region, to obtain a target feature extraction image;
obtaining the second fusion image according to the target feature extraction image, the first image and the second image.

6. The image fusion method according to claim 5, wherein the second fusion image is expressed as:
F2(x, y) = D2(x, y)·A(x, y) + (1 − D2(x, y))·B(x, y)
where D2 is the target feature extraction image, A is the first image, B is the second image, x, y are the coordinates of a pixel, and F2 is the second fusion image;
the binary map is used as a decision map and a weighted fusion rule yields the initial fusion image; finally, SSIM is used to extract a saliency map of the target region, which is fused again to obtain the final fusion image.

7. The image fusion method according to claim 1, wherein the training of the convolutional neural network comprises:
extracting a first number of 32×32 original images from a first image set, and adding a second number of visible light images from a second image set;
converting the original images and the visible light images into grayscale images and cutting these grayscale images into 16×16 sub-blocks to form a high-resolution image set;
applying Gaussian blur to the first number of original images in the first image set, adding a second number of infrared images from the second image set, and cutting the first number of original images and the second number of infrared images into 16×16 sub-blocks to form a blurred image set.

8. The image fusion method according to claim 1 or 7, wherein the convolutional neural network is a dual-channel network, each channel consisting of a five-layer convolutional neural network comprising three convolutional layers, one max-pooling layer and one fully connected layer, the final output layer being a softmax classifier.

9. An image fusion device, comprising:
a registration module, configured to register an infrared image with a visible light image to obtain a registered first image and second image, wherein the first image is a partial image of the infrared image and the second image is a partial image of the visible light image;
a classification module, configured to input the first image and the second image into a trained convolutional neural network, which classifies them and outputs a first score map and a second score map;
a comparison module, configured to compare corresponding pixels of the first score map and the second score map to obtain a binary map;
a first fusion module, configured to obtain a first fusion image based on the binary map, the first image and the second image;
a calculation module, configured to calculate a first structural similarity map between the first image and the first fusion image, and a second structural similarity map between the second image and the first fusion image;
an obtaining module, configured to obtain a difference map between the first structural similarity map and the second structural similarity map;
a second fusion module, configured to obtain a second fusion image based on the difference map, the first image and the second image.

10. A readable storage medium on which a computer program is stored, wherein when the program is executed by a processor, the steps of the image fusion method according to any one of claims 1 to 8 are implemented.
CN201811214128.2A (priority 2018-10-18, filed 2018-10-18) — Image fusion method and device and readable storage medium — Active — granted as CN109360179B (en)

Priority Applications (1)

Application Number · Priority Date · Filing Date · Title
CN201811214128.2A — CN109360179B (en) · 2018-10-18 · 2018-10-18 · Image fusion method and device and readable storage medium

Applications Claiming Priority (1)

Application Number · Priority Date · Filing Date · Title
CN201811214128.2A — CN109360179B (en) · 2018-10-18 · 2018-10-18 · Image fusion method and device and readable storage medium

Publications (2)

Publication Number · Publication Date
CN109360179A (en) · 2019-02-19
CN109360179B (en) · 2022-09-02

Family

ID=65345711

Family Applications (1)

Application Number · Status · Priority Date · Filing Date · Title
CN201811214128.2A — Active — CN109360179B (en) · 2018-10-18 · 2018-10-18 · Image fusion method and device and readable storage medium

Country Status (1)

Country · Link
CN (1) · CN109360179B (en)



Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number · Priority date · Publication date · Assignee · Title
CN101673396A (en)* · 2009-09-07 · 2010-03-17 · 南京理工大学 · Image fusion method based on dynamic object detection
US8755597B1* · 2011-02-24 · 2014-06-17 · Exelis, Inc. · Smart fusion of visible and infrared image data
CN103578092A (en)* · 2013-11-11 · 2014-02-12 · 西北大学 · Multi-focus image fusion method
CN103700075A (en)* · 2013-12-25 · 2014-04-02 · 浙江师范大学 · Tetrolet transform-based multichannel satellite cloud picture fusing method
CN103793896A (en)* · 2014-01-13 · 2014-05-14 · 哈尔滨工程大学 · Method for real-time fusion of infrared image and visible image
CN106530266A (en)* · 2016-11-11 · 2017-03-22 · 华东理工大学 · Infrared and visible light image fusion method based on area sparse representation
CN106709477A (en)* · 2017-02-23 · 2017-05-24 · 哈尔滨工业大学深圳研究生院 · Face recognition method and system based on adaptive score fusion and deep learning
CN107194904A (en)* · 2017-05-09 · 2017-09-22 · 西北工业大学 · NSCT area image fusion method based on supplement mechanism and PCNN
CN107578432A (en)* · 2017-08-16 · 2018-01-12 · 南京航空航天大学 · Target recognition method based on fusion of visible light and infrared two-band image target features

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
DEJAN DRAJIC et al.: "Adaptive Fusion of Multimodal Surveillance Image Sequences in Visual Sensor Networks", IEEE Transactions on Consumer Electronics*
YU LIU et al.: "Infrared and visible image fusion with convolutional neural networks", International Journal of Wavelets, Multiresolution and Information Processing*
YU LIU et al.: "Multi-focus image fusion with a deep convolutional neural network", Information Fusion*
ZHANG Lei et al.: "Infrared and visible image fusion using the non-subsampled Contourlet transform and region classification" (采用非采样Contourlet变换与区域分类的红外和可见光图像融合), Optics and Precision Engineering (《光学精密工程》)*
WANG Jian et al.: "Image fusion method based on Contourlet" (基于Contourlet的图像融合方法), Microprocessors (《微处理机》)*
MA Lijuan: "Research on image fusion technology based on multi-scale analysis" (基于多尺度分析的图像融合技术研究), China Master's Theses Full-text Database, Information Science and Technology series*

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number · Priority date · Publication date · Assignee · Title
US12111267B2 · 2019-05-15 · 2024-10-08 · Getac Holdings Corporation · System for detecting surface type of object and artificial neural network-based method for detecting surface type of object
CN110415200A (en)* · 2019-07-26 · 2019-11-05 · 西南科技大学 · A method of interslice interpolation for CT images of bone cement implants
CN110415200B (en)* · 2019-07-26 · 2022-03-08 · 西南科技大学 · A method for inter-slice interpolation of CT images of bone cement implants
CN110555820A (en)* · 2019-08-28 · 2019-12-10 · 西北工业大学 · Image fusion method based on convolutional neural network and dynamic guide filtering
CN112683787A (en)* · 2019-10-17 · 2021-04-20 · 神讯电脑(昆山)有限公司 · Object surface detection system and detection method based on artificial neural network
CN112686274A (en)* · 2020-12-31 · 2021-04-20 · 上海智臻智能网络科技股份有限公司 · Target object detection method and device
CN112686274B (en)* · 2020-12-31 · 2023-04-18 · 上海智臻智能网络科技股份有限公司 · Target object detection method and device
CN113378009A (en)* · 2021-06-03 · 2021-09-10 · 上海科技大学 · Binary neural network quantitative analysis method based on binary decision diagram
CN113378009B (en)* · 2021-06-03 · 2023-12-01 · 上海科技大学 · Binary decision diagram-based binary neural network quantitative analysis method
CN114782296A (en)* · 2022-04-08 · 2022-07-22 · 荣耀终端有限公司 · Image fusion method, device and storage medium
CN119723274A (en)* · 2025-02-26 · 2025-03-28 · 江西省商友实业有限公司 · Gas station fire control method, system, readable storage medium and computer

Also Published As

Publication number · Publication date
CN109360179B (en) · 2022-09-02

Similar Documents

Publication · Title
CN107316307B (en) · Automatic segmentation method for traditional Chinese medicine tongue images based on a deep convolutional neural network
CN109360179A (en) · Image fusion method, device and readable storage medium
CN113065558B (en) · Lightweight small-target detection method combined with an attention mechanism
CN110276316B (en) · A human keypoint detection method based on deep learning
CN106920243B (en) · Sequential image segmentation method for ceramic material parts using an improved fully convolutional neural network
CN110543846A (en) · A method of frontalizing multi-pose face images based on generative adversarial networks
CN106372581A (en) · Method for constructing and training a face recognition feature extraction network
CN107066916B (en) · Scene semantic segmentation method based on a deconvolution neural network
CN109948566B (en) · Two-stream face anti-spoofing detection method based on weight fusion and feature selection
CN112836625A (en) · Face liveness detection method and device, and electronic equipment
CN105574550A (en) · Vehicle identification method and device
CN104850825A (en) · Facial image score calculation method based on a convolutional neural network
CN106127164A (en) · Saliency-based pedestrian detection method and device using convolutional neural networks
CN108171701A (en) · Saliency detection method based on U-networks and adversarial learning
CN105654066A (en) · Vehicle identification method and device
CN109543632A (en) · A deep-network pedestrian detection method guided by shallow feature fusion
CN110032932B (en) · Human posture recognition method based on video processing and decision-tree threshold setting
CN104361357B (en) · Photo album classification system and method based on image content analysis
CN107944437B (en) · A face detection method based on neural networks and integral images
CN108629286A (en) · A remote sensing airport target detection method based on a subjective-perception saliency model
CN106557750A (en) · A face detection method based on skin color and a deep binary feature tree
CN111160194A (en) · A static gesture image recognition method based on multi-feature fusion
CN113269136B (en) · Offline signature verification method based on triplet loss
CN116385832A (en) · Bimodal biometric recognition network model training method
CN116229528A (en) · Living palm vein detection method, device, equipment and storage medium

Legal Events

Code · Title
PB01 · Publication
SE01 · Entry into force of request for substantive examination
GR01 · Patent grant
