CN115631407B - Underwater transparent biological detection based on fusion of event camera and color frame image - Google Patents

Underwater transparent biological detection based on fusion of event camera and color frame image

Info

Publication number
CN115631407B
CN115631407B, CN115631407A, CN202211407947.5A
Authority
CN
China
Prior art keywords
underwater
yolox
network
loss
fusion
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202211407947.5A
Other languages
Chinese (zh)
Other versions
CN115631407A (en)
Inventor
罗偲
吴吉花
任鹏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China University of Petroleum East China
Original Assignee
China University of Petroleum East China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China University of Petroleum East China
Priority to CN202211407947.5A
Publication of CN115631407A
Application granted
Publication of CN115631407B
Legal status: Active
Anticipated expiration

Abstract

Translated from Chinese

The invention relates to the technical field of underwater transparent organism detection. To address the harsh underwater environment, insufficient illumination, and the difficulty ordinary cameras have in capturing fast-moving underwater organisms, the invention proposes converting the event stream produced by an event camera into event frames and fusing them with RGB frames; this fusion enhances the contrast of the image. At the same time, the invention improves the YOLOX detection algorithm to further raise the detection accuracy of underwater transparent organisms, so that they can be observed and protected effectively.

Description

Underwater transparent biological detection based on fusion of event camera and color frame image
Technical Field
The invention relates to the technical field of underwater biological detection, in particular to underwater transparent biological detection based on fusion of an event camera and a color frame image.
Background
It has been proposed that the 21st century is the century of the ocean, and the marine world plays an increasingly important role in international competition. At the same time, underwater organisms are closely associated with humans. China's maritime territory is close to one third of its land area; in recent years China's demand for the research and development of marine resources has kept expanding and the degree of their utilization has kept rising, making the development of underwater target detection technology ever more urgent and important. Underwater target detection has therefore become one of the key problems in marine biology today.
Because the underwater environment is far more complex than that on land, a conventional camera struggles to capture underwater organisms that move rapidly, especially relatively transparent ones. Moreover, under attenuated underwater illumination, images acquired with an ordinary camera inevitably suffer from a limited visible range, blurring, low contrast, non-uniform illumination, color casts, and similar problems.
An event camera is a bio-inspired visual sensor that works quite differently from a frame-based camera. Instead of outputting intensity image frames at a constant rate, it reports only local pixel-level brightness changes (called "events"): whenever the change at a pixel exceeds a set threshold, the camera timestamps the event with microsecond resolution and emits it into an asynchronous event stream. These unique advantages make the event camera well suited to underwater scenes. The invention therefore fuses the event frames generated by an event camera with the color frames generated by an ordinary frame-based camera, combining the useful information in both images to improve the utilization of image information.
Underwater target detection is the most widely used basic task in underwater detection technology and an important method for automatic analysis of underwater data. The YOLOX network is an existing object detection network that mainly comprises an input stage, a backbone network, a neck network, and a head output stage. The input stage acquires an input image and scales it to the input size required by the network. The backbone network extracts general image feature representations; CSPDarknet53 is used as the backbone. In the neck network, a spatial pyramid pooling module fuses feature maps of different scales, and a top-down feature pyramid network together with a bottom-up path aggregation network improves the feature extraction capability of the network. The head output stage outputs the target detection result.
Disclosure of Invention
The invention utilizes a deep learning technology and aims to provide underwater transparent biological detection based on fusion of an event camera and a color frame image.
In underwater operations, particularly scenarios such as underwater robot missions, RGB images obtained with an ordinary camera commonly exhibit color fading, low contrast, blurred details, and similar defects, and an ordinary camera struggles to capture the clear motion of fast-swimming underwater organisms. An event camera asynchronously outputs events for intensity changes, each comprising the pixel coordinates, the polarity of the intensity change, and a timestamp. Because image-based object detection techniques are now mature, the events are first converted into images, and the color frame images (APS) and event frame images (DVS) are then combined by image fusion so that the information in each supplements the other. The fused images are divided proportionally into a training set, a validation set, and a test set. The mainstream object detection network YOLOX is then modified, and the fused images are fed into the modified YOLOX for training. Finally, ablation and comparison experiments are carried out on the trained YOLOX model to verify the effectiveness of the improvements.
The overall framework of the method is shown in Fig. 1 and can be divided into the following five steps: linearly fusing the event frames and color frames to obtain fused images; dividing the fused images into a training set, a validation set, and a test set at a preset ratio; improving the YOLOX network; training the improved YOLOX network to obtain a detection model for underwater transparent organisms; and using the trained YOLOX underwater transparent organism detection model to predict the images to be detected.
(1) The event frames and the color frames are linearly fused to obtain a fused image
Each event comprises five parts: the abscissa x of the pixel, the ordinate y of the pixel, a polarity of +1 for a brightness increase, a polarity of -1 for a brightness decrease, and a timestamp. From the pixel coordinates and polarity changes, the event data accumulated over a time window can be converted into an event image of the same size as the frame image. Because image-based target detection algorithms are relatively mature at present, the event data of the DVS channel is converted into an image and then linearly fused with the APS color image, as sketched below.
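As an illustration of this conversion, the following is a minimal sketch that accumulates events into an event frame; the function name, the time-window handling, and the grey-level scaling are assumptions for illustration rather than details taken from the patent.

```python
import numpy as np

def events_to_frame(events, height, width, t_start, t_end):
    """Accumulate events (x, y, polarity, timestamp) into one event frame.

    events: iterable of (x, y, p, t), with p = +1 for a brightness increase,
    p = -1 for a decrease, and t a microsecond timestamp.
    """
    frame = np.zeros((height, width), dtype=np.int32)
    for x, y, p, t in events:
        if t_start <= t < t_end:      # keep only events in the time window
            frame[y, x] += p          # signed accumulation of polarities
    # Map the signed counts to an 8-bit image centered at mid-grey.
    return np.clip(128 + 32 * frame, 0, 255).astype(np.uint8)
```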
(2) Dividing the fusion image into a training set, a verification set and a test set according to a preset proportion
The invention divides the fused images into a training set, a validation set, and a test set at a ratio of 8:1:1; that is, the 6497 images are split into 5197 training images, 649 validation images, and 651 test images, as in the sketch below.
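For illustration, a minimal sketch of such a split follows; the shuffle seed and function name are assumptions. With 6497 inputs, integer truncation reproduces the 5197/649/651 counts reported above.

```python
import random

def split_dataset(image_paths, ratios=(0.8, 0.1, 0.1), seed=0):
    paths = list(image_paths)
    random.Random(seed).shuffle(paths)        # reproducible shuffle
    n_train = int(len(paths) * ratios[0])     # 6497 -> 5197
    n_val = int(len(paths) * ratios[1])       # 6497 -> 649
    train = paths[:n_train]
    val = paths[n_train:n_train + n_val]
    test = paths[n_train + n_val:]            # remainder: 651
    return train, val, test
```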
(3) Modifications to YOLOX networks
Although YOLOX is currently a mainstream object detection network, it still has room for improvement. The invention proposes three improvements to the feature fusion and loss function parts of the YOLOX network. First, an adaptively spatial feature fusion (ASFF) structure is added to the feature fusion part; ASFF suppresses inconsistency by learning spatial filtering information and can adaptively adjust the spatial weight of each scale's features during fusion, improving the scale invariance of the features. Second, the IoU loss used for YOLOX's localization loss is replaced with an α-IoU function suitable for accurate bounding box regression. Finally, because the underwater environment is complex and variable, occlusion and overlap between organisms make foreground positive samples and background negative samples hard to distinguish; the resulting imbalance between the numbers of positive and negative samples can make the model unstable, so the invention replaces the cross-entropy confidence loss with Focal Loss to balance the numbers of positive and negative samples.
(4) Training the improved YOLOX network to obtain a detection model of the underwater transparent organism
The improved YOLOX model is trained with preset parameters to obtain a trained model, which is judged against the expected requirements according to the evaluation results on the validation set. If the trained model meets the expected requirements, it is saved as the optimal model; if not, its parameters are adjusted and the model is evaluated again on the validation set until the requirements are met.
(5) Using the trained YOLOX underwater transparent organism detection model to predict the images to be detected
The test set is evaluated with the trained optimal model to detect the transparent underwater organisms and obtain the corresponding precision values.
Compared with the prior art, the invention has the following beneficial effects and obvious advantages. An event camera can capture fast-moving objects well, record the form of moving objects clearly under dark illumination, and its low power consumption suits underwater scenes. Therefore, to address the harsh underwater environment and the difficulty ordinary cameras have in capturing rapidly moving underwater organisms, the event stream produced by the event camera is converted into event frames, which are fused with RGB frames to enhance image contrast; at the same time, the YOLOX target detection algorithm is improved in three respects to raise the detection accuracy of underwater transparent organisms, which helps protect these scarce underwater organisms effectively.
Drawings
FIG. 1 is a schematic diagram of the YOLOX network model of the present invention;
FIG. 2 is a schematic flow chart of the present invention;
FIG. 3 is a schematic diagram of an event camera of the present invention;
FIG. 4 is a schematic diagram of the ASFF structure of the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
The invention aims to provide an underwater transparent biological detection method based on fusion of an event camera and a color frame image, which can improve the accuracy of underwater transparent biological detection, thereby achieving the purpose of protecting underwater organisms.
In the prior-art YOLOX model, the feature fusion part uses an FPN structure whose fusion method resizes different feature layers to the same size and then adds them; the inconsistency between feature layers of different sizes increases the noise in the fused feature maps and degrades detection. In addition, YOLOX uses IoU loss for localization; when a predicted box and a ground-truth box do not intersect, the IoU is 0 and the loss function has no gradient. At the same time, for a one-stage network like YOLOX the numbers of positive and negative samples are extremely unbalanced, which makes the trained model unstable.
In order to solve the drawbacks of the prior art, the present invention provides the following examples:
Step 1: Acquire an underwater transparent organism dataset. The image dataset is an underwater transparent organism detection dataset published on the internet; the annotation files are in XML format and include the image size, the position coordinates of the target boxes, and the category labels. The dataset is divided into five categories: blue moon jellyfish, red moon jellyfish, glass catfish, glass lara fish, and white shrimp;
step 2: carrying out linear fusion on an event frame and an RGB frame of an underwater transparent biological data set by means of a function in OpenCV to obtain 6497 fusion images, and carrying out 8 on the fusion images: 1:1 is divided into a training set, a verification set and a test set;
step 3: based on the YOLOX network, an adaptive feature fusion ASFF module is added. ASFF can be represented by the following formula:
wherein ,is that the network adaptively learns the spatial importance weights of the level-1 to level-3 feature maps, and +.>Representing the adjustment from level-1, level-2, level-3 to level-LFeature vectors at feature map locations (i, j);
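For illustration, a minimal PyTorch sketch of the ASFF idea follows, assuming the three input feature maps have already been resized to a common level-l shape (B, C, H, W); the 1x1-convolution weight heads with softmax normalization are the usual ASFF design choice, not a detail confirmed by the patent.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ASFF(nn.Module):
    """Weighted fusion y = alpha*x1 + beta*x2 + gamma*x3 with learned,
    per-location weights that sum to 1 at every position (i, j)."""

    def __init__(self, channels):
        super().__init__()
        # One 1x1 conv per level yields a scalar weight logit per location.
        self.weight_convs = nn.ModuleList(
            [nn.Conv2d(channels, 1, kernel_size=1) for _ in range(3)]
        )

    def forward(self, x1, x2, x3):
        logits = torch.cat(
            [conv(x) for conv, x in zip(self.weight_convs, (x1, x2, x3))],
            dim=1,
        )
        w = F.softmax(logits, dim=1)  # (B, 3, H, W): alpha, beta, gamma
        return w[:, 0:1] * x1 + w[:, 1:2] * x2 + w[:, 2:3] * x3
```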
step 4: based on the YOLOX network, the IOULoss used in the positioning penalty is replaced by α -IOU, which can be used for accurate bounding box regression, is based on unified exponentiation of the existing penalty function of the IOU, and has the following formula:
and alpha is a weight coefficient, the regression accuracy of different horizontal bounding boxes can be realized more flexibly by adjusting alpha, and the best effect of alpha=3 is found in experiments.
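A minimal sketch of this localization loss follows, using the basic power form L = 1 - IoU^α with α = 3; the corner box format (x1, y1, x2, y2) is an assumption.

```python
import torch

def alpha_iou_loss(pred, target, alpha=3.0, eps=1e-7):
    """pred, target: (N, 4) boxes as (x1, y1, x2, y2)."""
    # Intersection rectangle.
    ix1 = torch.max(pred[:, 0], target[:, 0])
    iy1 = torch.max(pred[:, 1], target[:, 1])
    ix2 = torch.min(pred[:, 2], target[:, 2])
    iy2 = torch.min(pred[:, 3], target[:, 3])
    inter = (ix2 - ix1).clamp(min=0) * (iy2 - iy1).clamp(min=0)

    area_p = (pred[:, 2] - pred[:, 0]) * (pred[:, 3] - pred[:, 1])
    area_t = (target[:, 2] - target[:, 0]) * (target[:, 3] - target[:, 1])
    iou = inter / (area_p + area_t - inter + eps)
    return (1.0 - iou.pow(alpha)).mean()   # power generalization of IoU loss
```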
Step 5: based on the YOLOX network, the cross entropy function of confidence loss, BCELoss, is changed to a focal loss function, which is defined as follows:
Lfl =-αt (1-pt )γ log(pt ),
wherein alpha is a weight factor, which can inhibit the number unbalance of positive and negative samples; gamma is a focusing parameter and represents a balance coefficient for controlling the weight of a sample difficult to classify, and in an actual experiment, the effect of gamma=2 is the best; p is pt The probability of difficultly classifying the sample is reflected.
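For illustration, a minimal sketch of this confidence term follows; γ = 2 matches the text above, while α = 0.25 is a common default assumed here because the value of α_t is not stated.

```python
import torch

def focal_loss(logits, targets, alpha=0.25, gamma=2.0, eps=1e-7):
    """logits: raw confidence scores; targets: 0/1 labels, same shape."""
    p = torch.sigmoid(logits)
    p_t = torch.where(targets == 1, p, 1 - p)   # prob. of the true class
    alpha_t = torch.where(targets == 1,
                          torch.full_like(p, alpha),
                          torch.full_like(p, 1 - alpha))
    return (-alpha_t * (1 - p_t).pow(gamma) * torch.log(p_t + eps)).mean()
```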
To verify the effectiveness of the proposed method, the improved YOLOX is first trained on the fused images obtained by fusing event frames with color frames. The learning rate follows a cosine schedule with an initial value of 0.01, the optimizer is stochastic gradient descent, and training stops after 250 epochs; the loss reaches convergence by epoch 170. A sketch of this setup follows.
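This sketch reflects only the reported hyperparameters; the momentum value, the batch loop, and the assumption that the model returns its combined loss in training mode are placeholders rather than details from the patent.

```python
import torch

def train(model, train_loader, epochs=250, lr=0.01):
    """SGD with a cosine-annealed learning rate, as described above."""
    optimizer = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(
        optimizer, T_max=epochs
    )
    for epoch in range(epochs):
        for images, targets in train_loader:
            loss = model(images, targets)   # combined cls/obj/box loss
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
        scheduler.step()                    # cosine decay per epoch
```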
Ablation experiments on the modified YOLOX algorithm are shown in Table 1. The table shows that the improved YOLOX using the fused images as input achieves the highest accuracy, with an mAP 2.58% higher than the unimproved network using only color frame images as input. This gain demonstrates both the effectiveness of the improved YOLOX algorithm and the benefit of the fused images obtained by fusing event frames and color frames.
TABLE 1 Ablation experiments of YOLOX
Comparative experiments were performed on the modified YOLOX algorithm. As shown in Table 2, which compares the modified YOLOX with other mainstream classical target detection algorithms, the modified YOLOX outperforms EfficientDet, Faster-RCNN, SSD, RetinaNet, and the original YOLOX by 2.18%, 5.65%, 4.75%, 2.47%, and 2.58% respectively; its largest accuracy gain is over Faster-RCNN.
TABLE 2 Mean average precision (mAP, %) of each algorithm
The accuracy of the improved YOLOX algorithm is greatly increased, showing the effectiveness and superiority of the algorithm. Therefore, fusing event frames with color frames and improving YOLOX can effectively raise the detection accuracy of underwater transparent organisms, so that they can be protected more effectively.
It will be evident to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments, and that the present invention may be embodied in other specific forms without departing from the spirit or essential characteristics thereof. The present embodiments are, therefore, to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference sign in a claim should not be construed as limiting the claim concerned.

Claims (3)

Translated from Chinese

1. Underwater transparent organism detection based on the fusion of an event camera and color frame images, comprising a YOLOX network, characterized by:

Step 1: Obtaining an underwater transparent organism dataset with five categories: blue moon jellyfish, red moon jellyfish, glass catfish, glass lara fish, and white shrimp;

Step 2: Fusing the event frame images and color frame images of the underwater transparent organism dataset to obtain 6497 fused images, and dividing the fused images into a training set, a validation set, and a test set at a ratio of 8:1:1;

Step 3: Based on the YOLOX network, adding an adaptive spatial feature fusion (ASFF) module. The ASFF module suppresses inconsistency by learning spatial filtering information and can adaptively adjust the spatial weights of the features of each scale during fusion, thereby improving the scale invariance of the features. ASFF can be expressed by the following formula:

$$y_{ij}^{l} = \alpha_{ij}^{l}\, x_{ij}^{1\rightarrow l} + \beta_{ij}^{l}\, x_{ij}^{2\rightarrow l} + \gamma_{ij}^{l}\, x_{ij}^{3\rightarrow l},$$

where $y_{ij}^{l}$ denotes the output after the ASFF operation; $\alpha_{ij}^{l}$, $\beta_{ij}^{l}$, $\gamma_{ij}^{l}$ are the spatial importance weights of the level-1 to level-3 feature maps, learned adaptively by the network; $l$ denotes the layer of the feature map; and $x_{ij}^{n\rightarrow l}$ denotes the feature vector at position $(i, j)$ of the feature map adjusted from level-1, level-2, or level-3 to level-$l$;

Step 4: Based on the YOLOX network, replacing the IoU loss used in the localization loss with α-IoU, which can be used for accurate bounding box regression and is a unified power generalization of the existing IoU-based losses, with the formula:

$$\mathcal{L}_{\alpha\text{-IoU}} = 1 - \mathrm{IoU}^{\alpha},$$

where α is a weight coefficient; adjusting α allows the detector to achieve the regression accuracy of bounding boxes at different levels more flexibly; IoU denotes the intersection-over-union of the predicted box and the ground-truth box;

Step 5: Based on the YOLOX network, changing the cross-entropy function of the confidence loss, namely BCE loss, to the focal loss function, defined as:

$$L_{fl} = -\alpha_t (1 - p_t)^{\gamma} \log(p_t),$$

where $\alpha_t$ is a weight factor that suppresses the imbalance in the numbers of positive and negative samples; $p_t$ is the predicted probability of the target category; and $\gamma$ is the focusing parameter, a balance coefficient controlling the weight of hard-to-classify samples;

Training the improved YOLOX network with the fused images obtained by fusing event frames and color frames, where the learning rate follows a cosine schedule with an initial value of 0.01, the gradient model uses stochastic gradient descent, and training stops after 250 epochs, the loss reaching convergence at 170 epochs.

2. The underwater transparent organism detection based on the fusion of an event camera and color frame images according to claim 1, characterized in that, based on the YOLOX network, the IoU loss used in the localization loss is replaced with α-IoU, which can be used for accurate bounding box regression and is a unified power generalization of the existing IoU-based losses, with the formula:

$$\mathcal{L}_{\alpha\text{-IoU}} = 1 - \mathrm{IoU}^{\alpha}.$$

3. The underwater transparent organism detection based on the fusion of an event camera and color frame images according to claim 1, characterized in that, based on the YOLOX network, the cross-entropy function of the confidence loss, namely BCE loss, is changed to the focal loss function defined as:

$$L_{fl} = -\alpha_t (1 - p_t)^{\gamma} \log(p_t), \quad \gamma = 2.$$
CN202211407947.5A | 2022-11-10 | Underwater transparent biological detection based on fusion of event camera and color frame image | Active | CN115631407B (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN202211407947.5A | 2022-11-10 | 2022-11-10 | CN115631407B (en): Underwater transparent biological detection based on fusion of event camera and color frame image

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN202211407947.5A | 2022-11-10 | 2022-11-10 | CN115631407B (en): Underwater transparent biological detection based on fusion of event camera and color frame image

Publications (2)

Publication Number | Publication Date
CN115631407A (en) | 2023-01-20
CN115631407B (granted) | 2023-10-20

Family

ID=84909901

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN202211407947.5A | Underwater transparent biological detection based on fusion of event camera and color frame image (CN115631407B, Active) | 2022-11-10 | 2022-11-10

Country Status (1)

Country | Link
CN (1) | CN115631407B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN116206196B (en)* | 2023-04-27 | 2023-08-08 | 吉林大学 | A multi-target detection method and detection system in marine low-light environment
CN116682000B (en)* | 2023-07-28 | 2023-10-13 | 吉林大学 | Underwater frogman target detection method based on event camera

Citations (13)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN105865613A (en)* | 2016-06-03 | 2016-08-17 | 哈尔滨工业大学深圳研究生院 | Underwater optical detection and imaging sensing method and system used for ocean stereo monitoring
CN112686928A (en)* | 2021-01-07 | 2021-04-20 | 大连理工大学 | Moving target visual tracking method based on multi-source information fusion
CN112801027A (en)* | 2021-02-09 | 2021-05-14 | 北京工业大学 | Vehicle target detection method based on event camera
CN113192040A (en)* | 2021-05-10 | 2021-07-30 | 浙江理工大学 | Fabric flaw detection method based on YOLO v4 improved algorithm
CN113762409A (en)* | 2021-09-17 | 2021-12-07 | 北京航空航天大学 | A UAV target detection method based on event camera
CN114170497A (en)* | 2021-11-03 | 2022-03-11 | 中国农业大学 | A multi-scale underwater fish detection method based on attention module
CN114359714A (en)* | 2021-12-15 | 2022-04-15 | 中国电子科技南湖研究院 | Unmanned body obstacle avoidance method and device based on event camera and intelligent unmanned body
CN114694011A (en)* | 2022-03-25 | 2022-07-01 | 中国电子科技南湖研究院 | A method and device for detecting fog-penetrating targets based on multi-sensor fusion
CN114863260A (en)* | 2022-04-11 | 2022-08-05 | 燕山大学 | Fast-YOLO real-time jellyfish detection method based on deep learning
CN114998603A (en)* | 2022-03-15 | 2022-09-02 | 燕山大学 | An underwater target detection method based on deep multi-scale feature factor fusion
CN114998879A (en)* | 2022-05-16 | 2022-09-02 | 武汉大学 | Fuzzy license plate recognition method based on event camera
WO2022204153A1 (en)* | 2021-03-22 | 2022-09-29 | Angarak, Inc. | Image based tracking system
CN115223032A (en)* | 2022-07-18 | 2022-10-21 | 吉林大学 | A method of water creature recognition and matching based on image processing and neural network fusion


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
* 徐化池, "Temporal information fusion network framework for event cameras" (面向事件相机的时间信息融合网络框架), Computer Science (《计算机科学》), vol. 49, no. 5, pp. 43-49.

Also Published As

Publication number | Publication date
CN115631407A (en) | 2023-01-20

Similar Documents

Publication | Title
CN114724022B (en) | Method, system and medium for detecting farmed fish shoal by fusing SKNet and YOLOv5
CN115082855B (en) | Pedestrian shielding detection method based on improved YOLOX algorithm
CN115631407B (en) | Underwater transparent biological detection based on fusion of event camera and color frame image
CN108334847B (en) | A face recognition method based on deep learning in real scenes
CN108615226B (en) | An image dehazing method based on generative adversarial networks
WO2020173226A1 (en) | Spatial-temporal behavior detection method
CN113286194A (en) | Video processing method and device, electronic equipment and readable storage medium
CN109977774B (en) | Rapid target detection method based on adaptive convolution
CN110929593A (en) | A real-time saliency pedestrian detection method based on detail discrimination
CN106097366B (en) | An image processing method based on improved Codebook foreground detection
CN114037938B (en) | NFL-Net-based low-illumination target detection method
WO2024051297A1 (en) | Lightweight fire smoke detection method, terminal device and storage medium
CN113158865A (en) | Wheat ear detection method based on EfficientDet
CN112686276A (en) | Flame detection method based on improved RetinaNet network
CN115223009A (en) | Small target detection method and device based on improved YOLOv5
CN117315774A (en) | End-to-end multitasking action recognition method and system for dark scenes
CN115496920B (en) | Adaptive target detection method, system and device based on event camera
CN116309270A (en) | Binocular image-based transmission line typical defect identification method
CN118736480A (en) | A real-time domestic waste detection system for surveillance cameras
CN119206855A (en) | A lightweight pedestrian detection method based on BASFPN multi-scale feature fusion
CN118485876A (en) | A method and system for identifying fish feeding intensity based on MobileViT
CN115294387B (en) | Image classification method under complex illumination imaging based on deep learning
CN116863507A (en) | Pedestrian detection method based on deep learning
CN117011785A (en) | Firework detection method, device and system based on space-time correlation and Gaussian heat map
CN117152790A (en) | Method and system for detecting cow face in complex scene

Legal Events

Code | Title
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant
