CN107730526A - A kind of statistical method of the number of fish school - Google Patents

A kind of statistical method of the number of fish school

Info

Publication number
CN107730526A
CN107730526A
Authority
CN
China
Prior art keywords
fish
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201710874194.1A
Other languages
Chinese (zh)
Inventor
许枫
张巧花
张纯
梁镜
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Institute of Acoustics CAS
Original Assignee
Institute of Acoustics CAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Institute of Acoustics CAS
Priority to CN201710874194.1A
Publication of CN107730526A
Legal status: Pending

Abstract


The invention discloses a method for counting the number of fish in a school, comprising the following steps: step 1) acquire sonar detection data of the fish school; step 2) preprocess the acquired data to obtain a fish-school acoustic image sequence (I1, I2, ..., IQ), where Q is the total number of frames; step 3) take N frames from the sequence and rapidly build a background model from adjacent and interval frames; step 4) segment each frame with the background model to obtain the foreground targets and binarize them; step 5) compute the centroid of the foreground target in each connected region of each frame and mark it on the frame; step 6) count the fish according to the centroids marked in each frame. The method extracts the background directly from scene images that contain moving foreground, without modeling the background and targets in the scene; it prevents slow-moving targets from being extracted as false multiple targets and guarantees the quality of foreground extraction.

Description

A Statistical Method for Counting the Number of Fish in a School

Technical Field

The present invention relates to the technical field of fishery resource assessment, and in particular to a method for counting the number of fish in a school.

Background Art

Hydroacoustic detection is an important tool for fishery resource survey and assessment, and assessing resource abundance effectively, quickly, and accurately is a key problem. In recent years, with advances in technology, high-frequency sonar has come into wide use in fishery management surveys and assessment. The method most used in resource surveys at present is echo integration, but its estimates are rather coarse: it cannot judge whether an echo comes only from fish, so the error is large. The statistical software bundled with some sonars likewise shows very large errors when the fish targets are small and their echoes weak.

There are many existing background-modeling methods, such as inter-frame differencing and symmetric differencing. The targets they segment often contain gaps, and only partial contour information can be extracted, so overlapping fish may be merged into a single target or one target may be split in two; the error of such methods is large. Methods that extract targets by building a probability density function or a Gaussian mixture involve complex algorithms and heavy computation.

Existing target-counting methods include those based on Kalman filtering and on sample patches. When fish overlap or occlude one another, tracking loses the target and adaptability is poor; in the sample-patch algorithm the patch size is fixed, which introduces matching error and costs time. These methods are therefore neither robust nor fast.

Summary of the Invention

The purpose of the present invention is to overcome the computational complexity and heavy workload of current fish-counting methods. A new fish-counting method is proposed in which the count is obtained simply by computing and marking the centroids of the foreground targets, so the number of fish can be detected and tallied more accurately and efficiently in a short time.

In order to achieve the above object, the present invention proposes a method for counting the number of fish in a school, the method comprising:

Step 1) acquire fish-school data by sonar detection;

Step 2) preprocess the acquired sonar data to obtain a fish-school acoustic image sequence (I1, I2, ..., IQ), where Q is the total number of frames;

Step 3) take N frames (I1, I2, ..., IN) from the above sequence to establish a background model;

Step 4) use the background model to segment each frame, obtain the foreground targets of each frame, and binarize them;

Step 5) compute the centroid of the foreground target in each connected region of each frame, and mark it on the frame;

Step 6) count the number of fish according to the centroids marked in each frame.

As an improvement of the above method, step 2) specifically comprises:

Step 201) read the collected high-frequency sonar data according to its storage format and convert it into a rectangular acoustic image;

Step 202) apply linear-interpolation preprocessing to the acoustic beams of each frame, inserting three new beams between every two adjacent beams;

The linear interpolation preprocessing is:

Bx = B0 + (x/4)·(B0' − B0), x = 1, 2, 3,

where Bx denotes the three beams inserted between the two adjacent beams B0 and B0';

Step 203) display the interpolated data as images to obtain the acoustic image sequence (I1, I2, ..., IQ), where Q is the total number of frames.
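The beam interpolation of steps 201)–203) can be sketched as follows. This is an illustrative sketch, not code from the patent: `interpolate_beams` and the toy frame are invented names, and a frame is assumed to be stored as a (beams × samples) array. Three linearly spaced beams are inserted between each adjacent pair, so 96 beams become 4 × 95 + 1 = 381:

```python
import numpy as np

def interpolate_beams(frame: np.ndarray) -> np.ndarray:
    """Insert three linearly interpolated beams between each pair of
    adjacent beams. frame has shape (n_beams, n_samples); the result
    has shape (4 * (n_beams - 1) + 1, n_samples)."""
    n_beams, n_samples = frame.shape
    out = np.empty((4 * (n_beams - 1) + 1, n_samples), dtype=float)
    for b in range(n_beams - 1):
        b0, b1 = frame[b].astype(float), frame[b + 1].astype(float)
        for x in range(4):              # x = 0 keeps B0; x = 1..3 are the inserted beams
            out[4 * b + x] = b0 + (x / 4.0) * (b1 - b0)
    out[-1] = frame[-1]                 # last original beam closes the fan
    return out

# High-frequency mode: 96 beams, 512 samples -> 381 interpolated beams,
# matching the count given in the embodiment.
frame = np.arange(96 * 512, dtype=float).reshape(96, 512)
print(interpolate_beams(frame).shape)  # (381, 512)
```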

As an improvement of the above method, step 3) specifically comprises:

Step 301) establish a gray-value statistics matrix M: each element M(j, x) records the total number of times gray level x (0 ≤ x ≤ 255) occurs at pixel j;

Step 302) taking the i-th frame (i = 4, 5, ..., N−3) as the reference in turn, select the frames i−3, i−1, i+1, i+3, spaced two frames apart forward and backward, to form a five-frame group; compute the gray value of each pixel of frame i, use BKi(j) to identify background pixels, and update the gray-value statistics matrix M accordingly;

Ii-3(j), Ii-1(j), Ii(j), Ii+1(j), Ii+3(j) denote the gray values of pixel j in frames i−3, i−1, i, i+1, i+3, and Di-1(j), Di+1(j) denote the forward and backward difference masks:

Di-1(j) = 1 if |2·Ii(j) − Ii-1(j) − Ii-3(j)| > T, and 0 otherwise;
Di+1(j) = 1 if |Ii+3(j) + Ii+1(j) − 2·Ii(j)| > T, and 0 otherwise, for i = 4, 5, ..., N−3;

where T is a threshold computed adaptively by Otsu's maximum between-class variance method and used to judge whether the gray value at pixel j has changed;

The above difference masks determine whether the point is a foreground target point or a background point over the seven consecutive frames, by the formula:

BKi(j) = Di-1(j) · Di+1(j)

If BKi(j) = 1, that is, Di-1(j) and Di+1(j) both equal 1, the point can be judged to be moving in the spanned frames, i.e. in all 7 consecutive frames; conversely, if BKi(j) = 0 it is a background pixel;

Through the above processing, the initial gray-value statistics matrix M is updated: for every pixel j judged background (BKi(j) = 0), the count M(j, Ii(j)) is incremented by one;

This process is repeated until i = N−3;

Step 303) for each pixel, the gray value that occurs most frequently in the statistics matrix M is taken as the initial background gray value of that image point, completing the background modeling: B(j) = arg max over x of M(j, x),

where B(j) denotes the background model.
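Steps 301)–303) can be sketched as below. This is a simplified illustration under stated assumptions, not the patent's implementation: `build_background` is an invented name, frames are assumed to be 8-bit grayscale arrays, BKi(j) is taken as the logical AND of the two difference masks, the gray value of frame i is tallied wherever BKi(j) = 0, and a fixed T stands in for the Otsu threshold:

```python
import numpy as np

def build_background(frames: list, T: float) -> np.ndarray:
    """Statistical background model: tally the gray value of frame i at
    every pixel the five-frame difference test marks as background, then
    take the most frequent gray level per pixel."""
    H, W = frames[0].shape
    N = len(frames)
    M = np.zeros((H, W, 256), dtype=np.int64)    # per-pixel gray-level histogram
    for i in range(3, N - 3):                    # i = 4..N-3 in the patent's 1-based indexing
        I = [frames[i + k].astype(int) for k in (-3, -1, 0, 1, 3)]
        D_prev = np.abs(2 * I[2] - I[1] - I[0]) > T   # forward difference mask
        D_next = np.abs(I[4] + I[3] - 2 * I[2]) > T   # backward difference mask
        bk = ~(D_prev & D_next)                       # BK = 0 -> background pixel
        ys, xs = np.nonzero(bk)
        np.add.at(M, (ys, xs, I[2][ys, xs]), 1)       # tally gray value of frame i
    return np.argmax(M, axis=2).astype(np.uint8)      # B(j): most frequent gray level

# Static background 100 with a bright column sweeping across N = 16 frames
# (the embodiment's frame count): the static value is recovered everywhere.
frames = []
for t in range(16):
    f = np.full((8, 8), 100, dtype=np.uint8)
    f[:, t % 8] = 255                                 # moving foreground column
    frames.append(f)
bg = build_background(frames, T=10)
print(int(bg[4, 4]))  # 100
```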

As an improvement of the above method, step 4) specifically comprises:

Step 401) difference each image frame against the background model in turn to obtain the binarized foreground detection image F1(x, y) of each frame;

Denote the background model B(j) as B(x, y) and the current image frame as I(x, y), where x and y give the spatial position of a pixel (x the row, y the column); the resulting foreground detection image F1(x, y) is

F1(x, y) = 1 if |I(x, y) − B(x, y)| > T1, and 0 otherwise,

where T1 is the binarization threshold, computed with the adaptive Otsu method; F1(x, y) = 1 corresponds to foreground targets plus some noise;

Step 402) remove isolated noise from the binarized F1(x, y) with the area method, perform blob analysis, and denote the result F(x, y).
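A minimal sketch of step 401), assuming 8-bit images; `otsu_threshold` and `foreground_mask` are illustrative names, and the Otsu computation follows the standard between-class-variance formulation rather than any specific code from the patent:

```python
import numpy as np

def otsu_threshold(img: np.ndarray) -> int:
    """Otsu's maximum between-class variance threshold for 8-bit data."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    omega = np.cumsum(p)                    # probability of class {0..t}
    mu = np.cumsum(p * np.arange(256))      # cumulative mean of class {0..t}
    mu_t = mu[-1]                           # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b2 = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    sigma_b2[~np.isfinite(sigma_b2)] = 0.0
    return int(np.argmax(sigma_b2))

def foreground_mask(frame: np.ndarray, background: np.ndarray) -> np.ndarray:
    """F1(x, y) = 1 where |I(x, y) - B(x, y)| > T1, with T1 from Otsu."""
    diff = np.abs(frame.astype(int) - background.astype(int)).astype(np.uint8)
    return (diff > otsu_threshold(diff)).astype(np.uint8)

# Toy frame: background 100 everywhere, a bright 3x3 "fish" at the centre.
bg = np.full((16, 16), 100, dtype=np.uint8)
fr = bg.copy()
fr[6:9, 6:9] = 220
f1 = foreground_mask(fr, bg)
print(int(f1.sum()))  # 9 fish pixels detected
```

In practice the isolated-noise removal of step 402) would follow, dropping connected regions below an area threshold (see the connected-region scan sketched under step 5).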

As an improvement of the above method, step 5) specifically comprises:

First scan F(x, y) of each frame row by row and column by column. Find the first point with F(x, y) = 1, mark it, and iteratively search its neighborhood to obtain the total number of pixels Si of that connected region; then compute the centroid (Xi, Yi) of the moving target:

Xi = (1/Si) Σ x, Yi = (1/Si) Σ y, the sums running over the pixels of the region.

Mark the position (Xi, Yi) and display it in the acoustic image, continuing until the whole of F(x, y) has been scanned;

The number of target centroids marked in each frame is equivalent to the number of fish. Targets already marked in an earlier frame are not marked again; when a new target appears it is marked with a special symbol and color.
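Step 5) can be sketched as a single row-by-row, column-by-column scan with a breadth-first search over each region's 8-neighbourhood; `region_centroids` and `min_area` are illustrative (the patent fixes neither the neighbourhood connectivity nor the area threshold), and `min_area` also stands in for the area method of step 402):

```python
import numpy as np
from collections import deque

def region_centroids(F: np.ndarray, min_area: int = 1):
    """Single scan of the binary image F: each unvisited foreground pixel
    seeds a breadth-first search of its 8-neighbourhood; the region's
    pixel count S_i and centroid (X_i, Y_i) = (sum x / S_i, sum y / S_i)
    are accumulated. Regions smaller than min_area are dropped."""
    H, W = F.shape
    seen = np.zeros_like(F, dtype=bool)
    centroids = []
    for x in range(H):                    # x = row, y = column, as in the text
        for y in range(W):
            if F[x, y] and not seen[x, y]:
                q, pixels = deque([(x, y)]), []
                seen[x, y] = True
                while q:                  # flood the connected region
                    cx, cy = q.popleft()
                    pixels.append((cx, cy))
                    for dx in (-1, 0, 1):
                        for dy in (-1, 0, 1):
                            nx, ny = cx + dx, cy + dy
                            if 0 <= nx < H and 0 <= ny < W and F[nx, ny] and not seen[nx, ny]:
                                seen[nx, ny] = True
                                q.append((nx, ny))
                if len(pixels) >= min_area:        # S_i = len(pixels)
                    xs, ys = zip(*pixels)
                    centroids.append((sum(xs) / len(pixels), sum(ys) / len(pixels)))
    return centroids

# Two separated 2x2 "fish" -> two centroids, equal to the fish count.
F = np.zeros((10, 10), dtype=np.uint8)
F[1:3, 1:3] = 1
F[6:8, 5:7] = 1
print(region_centroids(F))  # [(1.5, 1.5), (6.5, 5.5)]
```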

As an improvement of the above method, step 6) specifically comprises:

Accumulate the marked points over the Q frames to obtain the number of fish in the acoustic image sequence; finally, displaying these marks in the original sequence frames shows the fish count in real time, completing the statistics.

As an improvement of the above method, the method further comprises updating the background model, specifically:

As new fish-school acoustic image sequences keep arriving, a threshold is used to judge whether the background model needs updating. First the dynamic foreground of the current frame is judged against the initial background model; if the percentage of changed pixels in the difference image relative to all pixels exceeds a threshold, taken as 80%, the background is judged to have changed. If the background changes over several consecutive frames, the acoustic image sequence at that time is re-sampled and the background model rebuilt.
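The update rule can be sketched as below; `consecutive` and `diff_thresh` are illustrative parameters that the patent leaves unspecified (only the 80% ratio is given), and `background_needs_rebuild` is an invented name:

```python
import numpy as np

def background_needs_rebuild(frames, background,
                             ratio=0.8, consecutive=3, diff_thresh=30):
    """Return True when the fraction of changed pixels in the background
    difference exceeds `ratio` (80% in the text) for `consecutive`
    frames in a row, signalling that the model should be rebuilt."""
    run = 0
    for f in frames:
        changed = np.abs(f.astype(int) - background.astype(int)) > diff_thresh
        if changed.mean() > ratio:
            run += 1
            if run >= consecutive:
                return True
        else:
            run = 0
    return False

bg = np.full((8, 8), 100, dtype=np.uint8)
calm = [bg.copy() for _ in range(5)]                                # background unchanged
shifted = [np.full((8, 8), 200, dtype=np.uint8) for _ in range(5)]  # global change
print(background_needs_rebuild(calm, bg), background_needs_rebuild(shifted, bg))  # False True
```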

The advantages of the present invention are:

1. Addressing the inability of the software bundled with high-frequency imaging sonars to count small targets, the invention proposes a centroid-based method for counting fish that does not miss individuals whose small size gives weak echoes. The method is simple, fast, and efficient, with higher accuracy, and provides a new rapid statistical tool for fishery resource assessment;

2. The background-modeling method of the invention effectively avoids mixing phenomena and extracts the background directly from scene images containing moving foreground, without modeling the background or the targets in the scene. It prevents slow-moving targets from being extracted as false multiple targets and can extract a good background from relatively few frames, guaranteeing the quality of foreground extraction.

Brief Description of the Drawings

Fig. 1 is a flow chart of the fish-counting method of the present invention;

Fig. 2 is a schematic diagram of the background-modeling principle of the present invention.

Detailed Description

Specific embodiments of the present invention are described in further detail below with reference to the accompanying drawings and embodiments. The following examples serve to illustrate the invention but are not intended to limit its scope.

In this example a dual-frequency identification sonar transmits high-frequency sound waves at 1.8 MHz underwater. Fig. 1 is a flow chart of the basic fish-counting method of the invention; with reference to Fig. 1, the main procedure is as follows:

Step one: acquire high-frequency sonar data for fish-school detection;

Under natural conditions at sea, the dual-frequency identification sonar is fixed to a bracket placed 2 m below the surface; the bracket angle can be adjusted as required, and high-frequency acoustic beams are transmitted into the sea to acquire detection data.

Step two: preprocess the acquired sonar data to obtain the fish-school acoustic image sequence (I1, I2, ..., IQ), where Q is the total number of frames;

Read the collected high-frequency sonar data in its storage format, convert it into rectangular acoustic images, and apply interpolation preprocessing to the acoustic beams of each frame so that the images are clearer, more complete, and easier to observe.

The simplest linear interpolation basically meets the requirement, namely inserting three new beams between every two beams.

The linear interpolation is:

Bx = B0 + (x/4)·(B0' − B0), x = 1, 2, 3,

where Bx denotes the three beams inserted between the two adjacent beams B0 and B0'.

Because the sonar operates in its high-frequency mode, there are 96 acoustic beams with 512 sampling points each. Linear interpolation expands the 96 beams to 381 (96 + 95 × 3 = 381); displaying the data as images then yields the acoustic image sequence (I1, I2, ..., IQ).

Step three: from the fish-school acoustic image sequence, take N frames (I1, I2, ..., IN) to perform background modeling based on statistical information, and segment the valid targets;

The specific implementation of this step is shown in Fig. 2.

Establish the gray-value statistics matrix M: each element M(j, x) records the total number of times gray level x (0 ≤ x ≤ 255) occurs at pixel j;

Taking the i-th frame (i = 4, 5, ..., N−3) as the reference in turn, e.g. with frame i as the reference, select the frames i−3, i−1, i+1, i+3, spaced two frames apart forward and backward, to form a five-frame group; compute the gray value of each pixel of frame i, use BKi(j) to identify background pixels, and update the gray-value statistics matrix M accordingly;

Ii-3(j), Ii-1(j), Ii(j), Ii+1(j), Ii+3(j) denote the gray values of pixel j in frames i−3, i−1, i, i+1, i+3, and Di-1(j), Di+1(j) denote the forward and backward difference masks:

Di-1(j) = 1 if |2·Ii(j) − Ii-1(j) − Ii-3(j)| > T, and 0 otherwise;
Di+1(j) = 1 if |Ii+3(j) + Ii+1(j) − 2·Ii(j)| > T, and 0 otherwise, for i = 4, 5, ..., N−3;

where T is a threshold computed adaptively by Otsu's maximum between-class variance method and used to judge whether the gray value at pixel j has changed;

The above masks determine whether the point is a foreground target point or a background point in the frames spaced two apart, by the formula:

BKi(j) = Di-1(j) · Di+1(j)

If BKi(j) = 1, that is, Di-1(j) and Di+1(j) both equal 1, the point can be judged to be moving in the spanned frames, i.e. in all 7 consecutive frames; conversely, if BKi(j) = 0 it is a background pixel;

Through the above processing, the initial gray-value statistics matrix M is updated: for every pixel j judged background (BKi(j) = 0), the count M(j, Ii(j)) is incremented by one;

This process is repeated until i = N−3;

For each pixel, the gray value that occurs most frequently in the statistics matrix M is taken as the initial background gray value of that image point, completing the background modeling: B(j) = arg max over x of M(j, x),

where B(j) denotes the background model.

The value of N used above can be chosen flexibly according to the actual situation; it largely determines the acoustic-image quality of the background model. Since the experiment was conducted at sea, the background here is not especially complex, so N is set to 16 to improve system efficiency.

After the background model has been initialized, a threshold is used to judge whether it needs updating. As new images arrive, the gray values of the background image may change under the influence of factors such as sunlight. To reduce the false-alarm rate caused by background change, the background model is updated and maintained dynamically in an adaptive manner, while the initial background model is always retained to ensure reliability. First the dynamic foreground of the current frame is judged against the initial background model; if the percentage of changed pixels in the difference image relative to all pixels exceeds a threshold (usually taken as 80%), the background has changed. If this ratio remains large over several consecutive frames, the image sequence at that time is re-sampled and the background remodeled.

Step four: image segmentation to obtain and binarize the foreground targets; that is, based on the background model above, each image frame is differenced against the background model in turn to obtain the foreground of each frame;

If the background model B(j) is denoted B(x, y) and the current frame I(x, y), where x and y give the spatial position of a pixel (x the row, y the column), and the resulting foreground detection image is denoted F1(x, y), then:

F1(x, y) = 1 if |I(x, y) − B(x, y)| > T1, and 0 otherwise,

where T1 is the binarization threshold, computed with the adaptive Otsu method. F1(x, y) = 1 corresponds to foreground targets plus some noise.

For the binarized F1(x, y), remove isolated noise with the area method of image morphology, i.e. remove connected regions whose area falls below a threshold, then perform blob analysis and denote the result F(x, y);

Step five: compute the centroid of the foreground target in each connected region of each frame and mark it on the frame;

First scan F(x, y) row by row and column by column. Find the first point with F(x, y) = 1, mark it, and iteratively search its neighborhood to obtain the total number of pixels Si of the connected region; then compute the centroid of the moving target:

Xi = (1/Si) Σ x, Yi = (1/Si) Σ y, the sums running over the pixels of the region.

Record and mark the centroid position (Xi, Yi) and display it in the acoustic image until the whole of F(x, y) has been scanned; the algorithm detects all connected regions in a single scan.

The number of target centroids marked in each frame is equivalent to the number of fish. Targets already marked in an earlier frame are not marked again; when a new target appears it is marked with a special symbol and color.

Step six: count the fish according to the centroids of the fish in each frame;

Counting the accumulated marked points of the Q frames gives the number of fish in the whole acoustic image sequence; finally, displaying these marks in the original sequence frames shows the fish count in real time, completing the statistics.

The present invention uses a dual-frequency identification sonar system; the data can be stored in video format or in DDF format, and either can serve as the acoustic-sequence input of the invention. Through simple background modeling and updating, fish statistics are obtained more accurately in a short time, improving accuracy and handling small targets, no longer limited like the bundled software to counting only targets longer than a dozen centimeters or so.

Finally, it should be noted that the above embodiments merely illustrate the technical solution of the present invention and do not limit it. Although the invention has been described in detail with reference to the embodiments, those of ordinary skill in the art should understand that modifications or equivalent substitutions of the technical solution that do not depart from its spirit and scope shall all be covered by the scope of the claims of the present invention.

Claims (7)

Di-1(j) = 1 if |2·Ii(j) − Ii-1(j) − Ii-3(j)| > T, and 0 if |2·Ii(j) − Ii-1(j) − Ii-3(j)| ≤ T, for i = 4, 5, ..., N−3
Di+1(j) = 1 if |Ii+3(j) + Ii+1(j) − 2·Ii(j)| > T, and 0 if |Ii+3(j) + Ii+1(j) − 2·Ii(j)| ≤ T, for i = 4, 5, ..., N−3
CN201710874194.1A · 2017-09-25 · A kind of statistical method of the number of fish school · Pending · CN107730526A (en)

Priority Applications (1)

Application Number · Priority Date · Filing Date · Title
CN201710874194.1A · CN107730526A (en) · 2017-09-25 · 2017-09-25 · A kind of statistical method of the number of fish school


Publications (1)

Publication Number · Publication Date
CN107730526A · 2018-02-23

Family

ID=61207837

Family Applications (1)

Application Number · Title · Priority Date · Filing Date
CN201710874194.1A · Pending · CN107730526A (en) · 2017-09-25 · 2017-09-25 · A kind of statistical method of the number of fish school

Country Status (1)

Country · Link
CN (1) · CN107730526A (en)

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number · Priority date · Publication date · Assignee · Title
CN108392170A (en)* · 2018-02-09 · 2018-08-14 · 中北大学 · A kind of human eye follow-up mechanism and recognition positioning method for optometry unit
CN108731795A (en)* · 2018-05-31 · 2018-11-02 · 中国科学院声学研究所 · A kind of field birds quantity survey method based on acoustic imaging technology
CN108742159A (en)* · 2018-04-08 · 2018-11-06 · 浙江安精智能科技有限公司 · Intelligent control device of water dispenser based on RGB-D cameras and its control method
CN110570361A (en)* · 2019-07-26 · 2019-12-13 · 武汉理工大学 · Method, system, device and storage medium for suppressing structured noise in sonar images
CN110992389A (en)* · 2019-11-08 · 2020-04-10 · 浙江大华技术股份有限公司 · Termite monitoring method, termite monitoring device and termite monitoring storage device
CN112819847A (en)* · 2021-02-02 · 2021-05-18 · 中国水利水电科学研究院 · Method and system for segmenting target fish image of fish passing channel
KR102276669B1 (en)* · 2021-04-12 · 2021-07-13 · (주)한컴인텔리전스 · Fish-shoal ecosystem monitoring system apparatus for detecting the abnormality of fish-shoal ecosystem and the operating method thereof
CN113327263A (en)* · 2021-05-18 · 2021-08-31 · 浙江工业大学 · Fish shoal liveness monitoring method based on image vision
CN113658124A (en)* · 2021-08-11 · 2021-11-16 · 杭州费尔马科技有限责任公司 · Method for checking underwater culture assets
CN114119662A (en)* · 2021-11-23 · 2022-03-01 · 广州市斯睿特智能科技有限公司 · Image processing method and system in fish detection visual system
CN114494344A (en)* · 2022-01-11 · 2022-05-13 · 浙江工业大学 · A Transformer-based fish feeding decision method
CN116540244A (en)* · 2023-03-27 · 2023-08-04 · 中国船舶集团有限公司第七一五研究所 · Three-dimensional sonar-based net cage fish swarm density estimation method
CN118261907A (en)* · 2024-05-09 · 2024-06-28 · 广东锐创生态科技有限公司 · An intelligent shrimp farming feed delivery control method and related device
CN118552836A (en)* · 2024-07-25 · 2024-08-27 · 广东海洋大学 · Marine ranching fish yield evaluation method based on target detection

Citations (5)

* Cited by examiner, † Cited by third party

Publication Number | Priority Date | Publication Date | Assignee | Title
CN102867349A * | 2012-08-20 | 2013-01-09 | Wuxi Huiyan Electronic Technology Co., Ltd. | People counting method based on elliptical ring template matching
US20150317797A1 * | 2012-11-28 | 2015-11-05 | ZTE Corporation | Pedestrian tracking and counting method and device for near-front top-view monitoring video
CN106408575A * | 2016-09-06 | 2017-02-15 | Southeast University | Spatio-temporal-image-based vehicle counting method for urban traffic scenes
CN106780502A * | 2016-12-27 | 2017-05-31 | Jiangsu Radio Science Research Institute Co., Ltd. | Image-based automatic detection method for sugarcane at the seedling stage
CN106815819A * | 2017-01-24 | 2017-06-09 | Henan University of Technology | Multi-strategy visual detection method for grain insects


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party

Title
Wei Zifu et al., "Multi-moving-object segmentation based on background reconstruction and level sets", Opto-Electronic Engineering *
Zhang Jin, "Quantitative fish-school assessment technology based on dual-frequency identification sonar (DIDSON)", China Master's Theses Full-text Database, Information Science and Technology Series *

Cited By (20)

* Cited by examiner, † Cited by third party

Publication Number | Priority Date | Publication Date | Assignee | Title
CN108392170A * | 2018-02-09 | 2018-08-14 | North University of China | Human-eye tracking device and recognition/positioning method for an optometry unit
CN108742159A * | 2018-04-08 | 2018-11-06 | Zhejiang Anjing Intelligent Technology Co., Ltd. | Intelligent water-dispenser control device based on an RGB-D camera, and its control method
CN108731795A * | 2018-05-31 | 2018-11-02 | Institute of Acoustics, Chinese Academy of Sciences | Field bird-count survey method based on acoustic imaging technology
CN110570361B * | 2019-07-26 | 2022-04-01 | Wuhan University of Technology | Sonar image structured noise suppression method, system, device and storage medium
CN110570361A * | 2019-07-26 | 2019-12-13 | Wuhan University of Technology | Method, system, device and storage medium for suppressing structured noise in sonar images
CN110992389A * | 2019-11-08 | 2020-04-10 | Zhejiang Dahua Technology Co., Ltd. | Termite monitoring method, device and storage device
CN112819847A * | 2021-02-02 | 2021-05-18 | China Institute of Water Resources and Hydropower Research | Method and system for segmenting target fish images in a fish passage
KR102276669B1 * | 2021-04-12 | 2021-07-13 | Hancom Intelligence Inc. | Fish-shoal ecosystem monitoring system apparatus for detecting abnormality of a fish-shoal ecosystem, and operating method thereof
WO2022220354A1 * | 2021-04-12 | 2022-10-20 | Hancom Intelligence Inc. | Fish-shoal ecosystem monitoring system device for detecting abnormality in a fish-shoal ecosystem, and method for operating same
CN113327263A * | 2021-05-18 | 2021-08-31 | Zhejiang University of Technology | Fish-school activity monitoring method based on image vision
CN113327263B * | 2021-05-18 | 2024-03-01 | Zhejiang University of Technology | Image-vision-based fish-school activity monitoring method
CN113658124A * | 2021-08-11 | 2021-11-16 | Hangzhou Feierma Technology Co., Ltd. | Method for checking underwater aquaculture assets
CN113658124B * | 2021-08-11 | 2024-04-09 | Hangzhou Feierma Technology Co., Ltd. | Method for checking underwater aquaculture assets
CN114119662A * | 2021-11-23 | 2022-03-01 | Guangzhou Siruite Intelligent Technology Co., Ltd. | Image processing method and system in a fish-detection vision system
CN114494344B * | 2022-01-11 | 2025-04-29 | Zhejiang University of Technology | Transformer-based fish feeding decision method
CN114494344A * | 2022-01-11 | 2022-05-13 | Zhejiang University of Technology | Transformer-based fish feeding decision method
CN116540244A * | 2023-03-27 | 2023-08-04 | 715th Research Institute of China State Shipbuilding Corporation | Cage fish-school density estimation method based on three-dimensional sonar
CN118261907A * | 2024-05-09 | 2024-06-28 | Guangdong Ruichuang Ecological Technology Co., Ltd. | Intelligent feed-delivery control method for shrimp farming, and related device
CN118552836A * | 2024-07-25 | 2024-08-27 | Guangdong Ocean University | Marine-ranching fish yield evaluation method based on target detection
CN118552836B * | 2024-07-25 | 2024-10-25 | Guangdong Ocean University | Marine-ranching fish yield evaluation method based on target detection

Similar Documents

Publication | Title
CN107730526A | A kind of statistical method of the number of fish school
CN114022759B | Airspace finite-pixel target detection system and method integrating neural-network spatio-temporal characteristics
CN112669349A | Passenger flow statistics method, electronic device and storage medium
CN109685060B | Image processing method and device
CN105005992B | Background modeling and foreground extraction method based on depth maps
CN103871076B | Moving object extraction based on optical flow and superpixel segmentation
CN111080673B | Anti-occlusion target tracking method
CN104978567B | Vehicle detection method based on scene classification
CN105427342B | Underwater small-target sonar image detection and tracking method and system
CN109934224B | Small-target detection method based on Markov random fields and a visual contrast mechanism
CN108764027A | Sea-surface target detection method based on improved RBD saliency computation
CN102622598B | SAR (synthetic aperture radar) image target detection method based on region markers and grey-level statistics
CN107730515A | Panoramic image saliency detection method based on region growing and an eye-movement model
CN103971386A | Method for foreground detection in dynamic background scenes
CN107886086A | Target animal detection method and device based on images/video
CN108647649A | Method for detecting abnormal behaviour in video
CN105389799B | SAR image target detection method based on sketch maps and low-rank decomposition
CN110598613B | Highway fog monitoring method
CN100382600C | Moving object detection method in dynamic scenes
CN104240257A | SAR (synthetic aperture radar) image naval-ship target identification method based on change detection
CN107273815A | Individual behavior recognition method and system
CN109684986A | Vehicle analysis method and system based on vehicle detection and tracking
CN116758421A | Remote-sensing image oriented target detection method based on weakly supervised learning
CN108596032B | Method, device, equipment and medium for detecting fighting behaviour in video
CN106485733A | Method for tracking a target of interest in infrared images

Legal Events

Code | Title | Description
PB01 | Publication |
SE01 | Entry into force of request for substantive examination |
RJ01 | Rejection of invention patent application after publication | Application publication date: 2018-02-23
