CN104584076A - Image processing device, image processing method, and image processing program - Google Patents

Image processing device, image processing method, and image processing program

Info

Publication number
CN104584076A
CN104584076A (application CN201280075307.7A)
Authority
CN
China
Prior art keywords
points
frame
unit
image processing
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201280075307.7A
Other languages
Chinese (zh)
Other versions
CN104584076B (en)
Inventor
马场幸三
高桥国和
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Fujitsu Ltd
Original Assignee
Fujitsu Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Fujitsu Ltd
Publication of CN104584076A
Application granted
Publication of CN104584076B
Legal status: Expired - Fee Related
Anticipated expiration


Abstract

Translated from Chinese

An image processing device (10) includes a generation unit (11), a calculation unit (12), and a determination unit (13). The generation unit (11) generates an average image of the difference images in video data, based on the differences between adjacent pairs of frames in the video data. The calculation unit (12) computes, for each point in the vertical direction of the average image, the sum of luminance values across the horizontal direction of the average image. The determination unit (13) selects a predetermined number of points, from among the points below the position of the vanishing point in the average image, whose second derivative of the luminance sum is high, and determines, from the selected points, the point marking the boundary between the hood and the road.

Description

Image processing device, image processing method, and image processing program

Technical Field

The present invention relates to an image processing device and the like.

Background Art

If the driver can be notified of locations where near-miss incidents are likely to occur, that is, situations in which the driver panics or is startled, such as nearly coming into contact with a pedestrian while driving, accidents can be prevented. To identify such locations, data recorded by a drive recorder can be used. For example, a drive recorder records the vehicle's position, the date and time of capture, the vehicle's acceleration and speed, and video of the scene ahead of the vehicle.

Here, if near-miss detection is attempted using only numerical data such as the acceleration recorded by the drive recorder, situations that are not actually near-misses may be falsely detected as near-misses. This is because, while the vehicle is running, the acceleration can change abruptly due to road undulations and the like, even when no near-miss is involved.

To prevent such false detections, it is desirable to analyze whether an event is actually a near-miss based on the video of the scene ahead of the vehicle, which is recorded together with the acceleration.

Causes of near-misses include the presence of detection targets such as a preceding vehicle, a pedestrian, or a bicycle in the vehicle's own lane. Therefore, to determine whether such a detection target, which may cause a near-miss, is present in the video, it is determined whether an object detected from the video lies within the own lane. As prior art that takes the own lane as the monitoring target region, there is a technique that sets the lane in which the host vehicle is traveling as the monitoring target region, and a technique that sets the region connecting the positions of the white lines to the left and right of the host vehicle with the point at infinity as the monitoring target region.

Patent Document 1: Japanese Laid-Open Patent Publication No. 2000-315255

Patent Document 2: Japanese Laid-Open Patent Publication No. 2004-280194

Patent Document 3: Japanese Laid-Open Patent Publication No. 2006-018751

Here, the monitoring target region may include a region in which the vehicle's hood appears. Since a detection target cannot exist in the region occupied by the hood, it is natural to exclude the hood region from the monitoring target region. In that case, for example, a method of detecting the hood region using a template image of the hood is conceivable.

However, such a method has the problem that the hood region of the vehicle to be excluded from the monitoring target region cannot be detected accurately.

For example, in the hood-region detection method described above, light reflected off the hood lowers the similarity between the hood region and the template image, making accurate detection of the hood region difficult. Furthermore, because the mounting position of the drive recorder's camera differs from vehicle to vehicle, the position of the hood region in the image cannot be determined uniquely.

Summary of the Invention

In one aspect, the present invention has been made in view of the above, and its object is to provide an image processing device, an image processing method, and an image processing program capable of accurately detecting the hood region of the vehicle to be excluded from the monitoring target region.

In a first aspect, an image processing device includes a generation unit, a calculation unit, and a determination unit. The generation unit generates an average image of the difference images in video data, based on the differences between adjacent pairs of frames in the video data. The calculation unit computes, for each point in the vertical direction of the average image, the sum of luminance values across the horizontal direction of the average image. The determination unit selects a predetermined number of points, from among the points below the position of the vanishing point in the average image, whose second derivative of the luminance sum is high, and determines, from the selected points, the point marking the boundary between the hood and the road.

According to one embodiment of the present invention, the hood region of the vehicle to be excluded from the monitoring target region can be detected accurately.

Brief Description of the Drawings

FIG. 1 is a functional block diagram showing the configuration of the image processing device according to a first embodiment.

FIG. 2 is a functional block diagram showing the configuration of the image processing device according to a second embodiment.

FIG. 3 is a diagram showing an example of the data structure of drive recorder information.

FIG. 4A is a diagram showing an example of a processing frame.

FIG. 4B is a diagram showing an example of a processing frame.

FIG. 4C is a diagram showing an example of a processing frame.

FIG. 4D is a diagram showing an example of a processing frame.

FIG. 5A is a diagram showing an example of a difference image.

FIG. 5B is a diagram showing an example of a difference image.

FIG. 5C is a diagram showing an example of a difference image.

FIG. 6 is a diagram showing an example of a generated image.

FIG. 7 is a diagram showing an example of a screen in which a graph of the luminance sums is superimposed on the average image.

FIG. 8 is a diagram showing an example of the method of calculating V(y).

FIG. 9 is a diagram showing an example of the method of calculating V(y).

FIG. 10 is a diagram showing an example of the luminance sums when the processing frame does not include a hood region.

FIG. 11 is a diagram for explaining the process of determining the positions of both ends of the own lane's width at the Y coordinate y′ of the point marking the boundary between the hood and the road.

FIG. 12 is a diagram showing an example of the monitoring target region.

FIG. 13 is a flowchart showing the processing procedure of the image processing device according to the second embodiment.

FIG. 14 is a diagram showing an example of a computer that executes the image processing program.

Description of Embodiments

Hereinafter, embodiments of the image processing device, image processing method, and image processing program according to the present invention will be described in detail with reference to the drawings. The present invention is not limited by these embodiments.

Embodiment 1

The configuration of the image processing device according to the first embodiment will be described. FIG. 1 is a functional block diagram showing the configuration of the image processing device according to the first embodiment. As shown in FIG. 1, the image processing device 10 includes a generation unit 11, a calculation unit 12, and a determination unit 13.

The generation unit 11 generates an average image of the difference images in video data, based on the differences between adjacent pairs of frames in the video data.

The calculation unit 12 computes, for each point in the vertical direction of the average image, the sum of luminance values across the horizontal direction of the average image.

The determination unit 13 selects a predetermined number of points, from among the points below the position of the vanishing point in the average image, whose second derivative of the luminance sum is high, and determines, from the selected points, the point marking the boundary between the hood and the road.

The effects of the image processing device 10 according to the first embodiment will be described. The image processing device 10 generates an average image of the difference images in video data based on the differences between adjacent pairs of frames, and computes, for each point in the vertical direction of the generated average image, the sum of luminance values across the horizontal direction. The image processing device 10 then selects a predetermined number of points, from among the points below the position of the vanishing point in the average image, whose second derivative of the luminance sum is high, and determines, from the selected points, the point marking the boundary between the hood and the road. For example, external light is reflected at the front edge of the hood. The difference between the luminance sum of the region corresponding to the hood's front edge and that of the region adjacent to it is therefore often larger than the difference between the luminance sum of any other region and that of the region adjacent to it. Consequently, the second derivative of the luminance sum at the hood's front edge is often larger than at other regions. In addition, depending on the position of the light source, the light source may be reflected on the hood as it appears in the frame. Even in that case, the second derivative of the luminance sum at the hood's front edge generally remains larger than at other regions, but an even larger second derivative may be computed at the Y coordinate where the light source is reflected. Since the image processing device 10 selects a predetermined number of points with high second derivatives of the luminance sum and determines the hood-road boundary point from among them, it can accurately detect the hood region of the vehicle to be excluded from the monitoring target region.

Embodiment 2

The configuration of the image processing device according to the second embodiment will be described. FIG. 2 is a functional block diagram showing the configuration of the image processing device according to the second embodiment. As shown in FIG. 2, the image processing device 100 includes a communication unit 110, an input unit 120, a display unit 130, a storage unit 140, and a control unit 150.

The communication unit 110 is a processing unit that performs data communication with other devices via a network. For example, the communication unit 110 corresponds to a communication device or the like.

The input unit 120 is an input device for entering various data into the image processing device 100; for example, it corresponds to a keyboard, mouse, or touch panel. The display unit 130 is a display device that displays data output from the control unit 150; for example, it corresponds to a liquid crystal display or touch panel.

The storage unit 140 stores drive recorder information 141 and camera parameters 142. The storage unit 140 corresponds to a storage device such as a semiconductor memory element, for example a RAM (Random Access Memory), ROM (Read Only Memory), or flash memory.

The drive recorder information 141 includes various data recorded by the drive recorder. For example, the drive recorder information 141 is video data covering several seconds before and after a time at which the acceleration changed by a predetermined value or more. FIG. 3 shows an example of the data structure of the drive recorder information. As shown in FIG. 3, the drive recorder information 141 stores frame numbers, dates and times, speeds, accelerations, position coordinates, and images in association with one another. The frame number uniquely identifies a frame. The date and time indicate when the corresponding frame was captured. The speed, acceleration, and position coordinates are those of the vehicle on which the drive recorder is mounted at the time the corresponding frame was captured. The image is the image data of the corresponding frame.

The camera parameters 142 include the parameters of the camera used by the drive recorder. The camera parameters 142 will be described in detail later.

The control unit 150 includes a frame identification unit 151, a detection unit 152, a generation unit 153, a calculation unit 154, a determination unit 155, a decision unit 156, a distance calculation unit 157, and a judgment unit 158. The control unit 150 corresponds to an integrated device such as an ASIC (Application Specific Integrated Circuit) or FPGA (Field Programmable Gate Array), or to an electronic circuit such as a CPU (Central Processing Unit) or MPU (Micro Processing Unit).

The frame identification unit 151 is a processing unit that refers to the drive recorder information 141 and identifies frames captured while the vehicle carrying the drive recorder and camera was traveling straight. For example, the frame identification unit 151 can identify frames captured during straight travel by using the acceleration in the drive recorder information 141 to find the image data of frames in which the acceleration is nearly zero. Here, a frame in which the acceleration is nearly zero means, for example, a frame in which the acceleration lies in the range from (-α) to (+α), where α is a predetermined value. In the following description, frames captured while the vehicle is traveling straight are referred to as processing frames. By identifying processing frames in this way, the frame identification unit 151 can exclude, from the frames used in the various processes described later, frames that are shifted vertically or horizontally because they were captured while the vehicle was turning or driving on a road with steps or other irregularities.
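The frame selection described above can be sketched as follows, assuming a hypothetical record layout with an `acceleration` field; the patent does not specify how the drive recorder information is actually structured.

```python
# Hypothetical sketch of selecting "processing frames": keep only records
# whose acceleration is nearly zero, i.e. within [-alpha, +alpha].
def select_processing_frames(records, alpha):
    return [r for r in records
            if -alpha <= r["acceleration"] <= alpha]

records = [
    {"frame": 0, "acceleration": 0.1},   # straight travel: kept
    {"frame": 1, "acceleration": 2.3},   # turning or a bump: excluded
    {"frame": 2, "acceleration": -0.3},  # straight travel: kept
]
straight = select_processing_frames(records, alpha=0.5)
```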

An example of the processing frames identified by the frame identification unit 151 will be described. FIGS. 4A to 4D show examples of processing frames. FIG. 4A shows a processing frame 20 identified by the frame identification unit 151 from among the frames captured by the camera at a predetermined frame rate. FIG. 4B shows a processing frame 21 identified from the same frames; processing frame 21 is the frame captured by the camera immediately after processing frame 20, so frames 20 and 21 are adjacent. Likewise, FIG. 4C shows a processing frame 22 captured immediately after processing frame 21, and FIG. 4D shows a processing frame 23 captured immediately after processing frame 22. As shown in FIGS. 4A to 4D, the images of processing frames 20 to 23 change as the vehicle travels straight ahead.

Alternatively, the frame identification unit 151 can identify processing frames by using the speed in the drive recorder information 141 to find the image data of frames in which the change in speed is nearly zero, that is, frames in which the change in speed falls within a predetermined range.

Furthermore, the frame identification unit 151 can compare the position information of each frame with map information, and identify as processing frames those frames captured on a road that the map information indicates is straight.

The detection unit 152 is a processing unit that detects the vanishing point in the processing frames. For example, the detection unit 152 detects a candidate point, described below, in each processing frame. The detection unit 152 then computes the average position of the candidate points over the processing frames and takes that average position as the vanishing point.

Here, an example of how the detection unit 152 detects a candidate point will be described. For one processing frame, the detection unit 152 applies a Hough transform to the frame's image data to detect multiple straight lines. From the detected lines, the detection unit 152 extracts those in the left half of the frame that run from lower left to upper right at an angle to the horizontal within a predetermined range, for example from (45-β)° to (45+β)°. Similarly, it extracts those in the right half of the frame that run from lower right to upper left at an angle within a predetermined range, for example from (135-β)° to (135+β)°. Next, the detection unit 152 computes the intersections of the extracted lines. It then takes the average position of these intersections as the candidate point for that processing frame. The detection unit 152 performs this candidate-point detection in the same way for every processing frame.

Here, the processing frames are frames captured while the vehicle was traveling straight. Therefore, the straight lines detected by applying a Hough transform to a processing frame's image data extend toward the vanishing point. In contrast, lines detected from frames captured while the vehicle was turning often do not extend toward the vanishing point. By applying the Hough transform only to the processing frames, the detection unit 152 can therefore detect the vanishing point accurately.
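The candidate-point step can be sketched as follows. The Hough transform itself is assumed to have already produced line segments as endpoint tuples, and a y-up coordinate convention is used for simplicity; the angle ranges and the averaging of intersections follow the description above, but this is an illustrative sketch, not the patent's implementation.

```python
import math

def angle_deg(line):
    # Angle of the segment relative to the horizontal, folded into [0, 180).
    (x1, y1, x2, y2) = line
    return math.degrees(math.atan2(y2 - y1, x2 - x1)) % 180

def intersect(l1, l2):
    # Intersection of the two infinite lines through the given segments,
    # or None if they are parallel.
    (x1, y1, x2, y2), (x3, y3, x4, y4) = l1, l2
    d = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    if d == 0:
        return None
    a = x1 * y2 - y1 * x2
    b = x3 * y4 - y3 * x4
    return ((a * (x3 - x4) - (x1 - x2) * b) / d,
            (a * (y3 - y4) - (y1 - y2) * b) / d)

def candidate_point(lines, beta=15.0):
    # Left-side lines near 45 degrees, right-side lines near 135 degrees.
    left = [l for l in lines if 45 - beta <= angle_deg(l) <= 45 + beta]
    right = [l for l in lines if 135 - beta <= angle_deg(l) <= 135 + beta]
    pts = []
    for a in left:
        for b in right:
            p = intersect(a, b)
            if p is not None:
                pts.append(p)
    if not pts:
        return None
    # Average position of the intersections is the candidate point.
    return (sum(p[0] for p in pts) / len(pts),
            sum(p[1] for p in pts) / len(pts))

# Two lane-boundary segments converging at (100, 100):
point = candidate_point([(0, 0, 100, 100), (200, 0, 100, 100)])
```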

The generation unit 153 is a processing unit that generates difference images from the processing frames identified by the frame identification unit 151, and generates an average image from the generated difference images.

First, the process by which the generation unit 153 generates difference images will be described. The generation unit 153 identifies pairs of adjacent processing frames. For example, when the frame identification unit 151 has identified the four processing frames 20 to 23 shown in FIGS. 4A to 4D, the generation unit 153 identifies three pairs: the adjacent frames 20 and 21, the adjacent frames 21 and 22, and the adjacent frames 22 and 23. For each identified pair, the generation unit 153 then generates a difference image by subtracting the pixel value of each pixel of one processing frame from the pixel value of the corresponding pixel of the other. FIGS. 5A to 5C show examples of difference images. For example, the generation unit 153 generates the difference image 30 shown in FIG. 5A by subtracting the pixel values of processing frame 20 from those of processing frame 21, the difference image 31 shown in FIG. 5B by subtracting those of frame 21 from those of frame 22, and the difference image 32 shown in FIG. 5C by subtracting those of frame 22 from those of frame 23.

Next, the process by which the generation unit 153 generates the average image will be described. The generation unit 153 adds the pixel values of the generated difference images pixel by pixel, then divides each summed pixel value by the number of difference images. FIG. 6 shows an example of the generated image. For example, the generation unit 153 adds the pixel values of the difference images 30 to 32 pixel by pixel and divides each sum by the number of difference images, 3, thereby generating the average image 35 shown in FIG. 6.
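The difference-image and average-image steps can be sketched as follows, with grayscale frames represented as 2-D lists. The patent does not fix a pixel format; plain signed arithmetic is used here so that negative differences are not clipped.

```python
# Sketch of the generation unit's two steps on 2-D lists of luminance values.
def difference_image(frame_a, frame_b):
    # Subtract frame_a from frame_b pixel by pixel.
    return [[b - a for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(frame_a, frame_b)]

def average_image(diffs):
    # Add the difference images pixel by pixel, then divide by their count.
    n = len(diffs)
    height, width = len(diffs[0]), len(diffs[0][0])
    return [[sum(d[y][x] for d in diffs) / n for x in range(width)]
            for y in range(height)]

frames = [[[10, 10]], [[20, 20]], [[40, 40]]]  # three 1x2 frames
diffs = [difference_image(frames[i], frames[i + 1]) for i in range(2)]
avg = average_image(diffs)  # elementwise mean of the two differences
```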

The processes by which the generation unit 153 generates the difference images and the average image are not limited to those described above; other methods, including known techniques, may be used to generate the difference images and the average image.

The calculation unit 154 is a processing unit that computes, for each row of pixels in the vertical direction of the average image generated by the generation unit 153, the sum of the luminance values of the pixels in the horizontal direction. As an example, consider an average image that is 640 pixels wide (X direction) and 480 pixels tall (Y direction), that is, an image composed of 480 rows and 640 columns of pixels. The sum SUM(N) of the luminance values of the pixels in the Nth row (0 ≤ N ≤ 479) of this average image is given by equation (1) below.

SUM(N) = p(0, N) + p(1, N) + ··· + p(639, N)   (1)

这里,p(x,y)表示平均图像中的(x,y)的位置的像素的亮度值。Here, p(x, y) represents the luminance value of the pixel at the position (x, y) in the average image.

计算部154对式(1)的N的值代入0~479的各个整数,计算480行的亮度值的合计。图7是表示将表示亮度值的合计的图表与平均图像重叠的情况下的画面的一个例子。图7示出了在平均图像35重叠了表示亮度值的合计的图表50的情况下的画面41。图7的例子所示的图表50表示纵向的每个像素的亮度值的合计。图7的图表50表示距离画面的左端的位置越大,位于对应的Y坐标的像素的亮度的合计越大。此外,图表50能够以Y坐标的值y的函数f(y)表示。The calculation unit 154 substitutes each integer of 0 to 479 for the value of N in the formula (1), and calculates the total of the luminance values of the 480 lines. FIG. 7 shows an example of a screen when a graph showing the sum of luminance values is superimposed on an average image. FIG. 7 shows a screen 41 when a graph 50 representing a sum of luminance values is superimposed on the average image 35 . The graph 50 shown in the example of FIG. 7 shows the sum of the luminance values for each pixel in the vertical direction. The graph 50 in FIG. 7 shows that the greater the distance from the left end of the screen, the greater the sum of the luminances of the pixels located at the corresponding Y coordinate. In addition, the graph 50 can be expressed by the function f(y) of the value y of the Y coordinate.
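Equation (1) amounts to a row-wise reduction over the image, which can be sketched as follows (illustrative Python; the function name is an assumption):

```python
def row_luminance_sums(image):
    """SUM(N) from Equation (1): for each row N of the average image,
    add up the luminance values p(x, N) across all columns x."""
    return [sum(row) for row in image]
```

Applied to a 480-row, 640-column average image, this yields the 480 values SUM(0) to SUM(479) that are plotted as graph 50, i.e. the function f(y).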

确定部155是确定发动机罩的前端部的处理部。例如，确定部155首先，从通过计算部154对平均图像的所有的行计算出的像素的亮度值的合计SUM(0)~SUM(479)中，确定值最大的合计，并确定与确定出的合计对应的Y坐标。这样确定出的Y坐标能够认为是消失点的Y坐标。对该理由进行说明。例如，在帧中，变化较大的区域是相当于路侧的景色的区域。以消失点为边界，相比消失点靠上侧多为天空的区域，相比消失点靠下侧多为道路的区域。然而，在消失点的横向的区域中，几乎没有天空的区域以及道路的区域，而亮度值较高的路侧的景色的区域成为最大。因此，与消失点的Y坐标或者Y坐标附近对应的亮度值的合计在计算出的合计中值最大。因此，与值最大的合计对应的Y坐标能够认为是消失点的Y坐标。The specifying unit 155 is a processing unit that specifies the front end of the hood. For example, the specifying unit 155 first determines the largest of the sums SUM(0) to SUM(479) of pixel luminance values calculated by the calculation unit 154 for all the rows of the average image, and specifies the Y coordinate corresponding to that sum. The Y coordinate specified in this way can be regarded as the Y coordinate of the vanishing point. The reason is as follows. In a frame, the regions that change greatly are those corresponding to the roadside scenery. Taking the vanishing point as a boundary, the area above it is mostly sky, and the area below it is mostly road. In the horizontal band at the vanishing point, however, there is almost no sky or road region, and the roadside-scenery region, whose luminance values are high, is at its largest. Therefore, the sum of the luminance values corresponding to the Y coordinate of the vanishing point, or its vicinity, is the largest among the calculated sums. Therefore, the Y coordinate corresponding to the largest sum can be regarded as the Y coordinate of the vanishing point.

然后，确定部155计算将与确定出的Y坐标相比位于画面的下方向的Y坐标的值y，即比确定出的Y坐标大的Y坐标的值y代入函数f(y)的情况下的二次微分值V(y)。例如，作为计算二次微分值V(y)时的式子，能够使用下述的式(2)。Then, the specifying unit 155 calculates the second differential value V(y) obtained when each value y of a Y coordinate located below the specified Y coordinate on the screen, that is, each Y coordinate value y larger than the specified Y coordinate, is substituted into the function f(y). For example, the following Equation (2) can be used to calculate the second differential value V(y).

V(y)=4*f(y)-f(y-5)-f(y-10)+f(y+5)+f(y+10)···(2)

这里,“*”是表示乘法的符号。Here, "*" is a symbol indicating multiplication.

确定部155使用式(2)，对与确定出的Y坐标相比位于画面的下方向且为整数的所有的值y，计算V(y)。图8以及图9表示V(y)的计算方法的一个例子。图8示出了图表50。确定部155对图8的图表50中与确定出的Y坐标51相比位于画面的下方向且为整数的所有的值y，计算f(y)的二次微分值V(y)。图9示出了像这样计算出的二次微分值V(y)的图表60的一个例子。在图9的图表60中，以Y轴为中心，右侧是正值，左侧表示负值。Using Equation (2), the specifying unit 155 calculates V(y) for every integer value y located below the specified Y coordinate on the screen. FIGS. 8 and 9 show an example of the method of calculating V(y). FIG. 8 shows the graph 50. The specifying unit 155 calculates the second differential value V(y) of f(y) for every integer value y located below the specified Y coordinate 51 in the graph 50 of FIG. 8. FIG. 9 shows an example of a graph 60 of the second differential values V(y) calculated in this way. In the graph 60 of FIG. 9, with the Y axis as the center, the right side represents positive values and the left side represents negative values.
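A sketch of Equation (2) applied to every integer row below the vanishing point (illustrative Python; f is assumed to be the list of row sums indexed by Y coordinate, and the 10-row margin keeps the stencil inside the image):

```python
def second_differential(f, y):
    """V(y) from Equation (2), evaluated on the row-sum function f,
    where f[y] is the total luminance of row y."""
    return 4 * f[y] - f[y - 5] - f[y - 10] + f[y + 5] + f[y + 10]

def second_differentials_below(f, van_y):
    """Compute V(y) for every integer y below the vanishing-point row
    van_y, keeping a 10-row margin so the stencil stays in bounds."""
    return {y: second_differential(f, y)
            for y in range(van_y + 1, len(f) - 10)}
```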

这里，在发动机罩的前端部分外部光被反射。因此，对应于图像中的发动机罩的前端部分的区域的Y坐标的亮度值的合计与对应于与前端部分邻接的区域的Y坐标的亮度值的合计的差如以下那样。即，这样的差比对应于前端部分的区域以外的其他区域的Y坐标的亮度值的合计与对应于和这样的其他区域邻接的区域的Y坐标的亮度值的合计的差大的情况较多。因此，与发动机罩的前端部分的区域的Y坐标对应的亮度值的合计的二次微分值比与其他区域的Y坐标对应的亮度值的合计的二次微分值大的情况较多。因此，确定部155将与计算出的二次微分值中的、值最大的二次微分值V(y′)对应的点的Y坐标的值y′确定为发动机罩的前端部的Y坐标的值，即、发动机罩与道路的分界线的点的Y坐标的值。这样，根据确定部155，将亮度值的合计的二次微分值最高的点的Y坐标的值y′确定为发动机罩与道路的分界线的点的Y坐标的值，所以能够正确地检测从监视对象区域排除的车辆的发动机罩的区域。例如，确定部155计算出二次微分值V(y)的图表60的情况下，将图表60的点61的Y坐标的值y′确定为发动机罩与道路的分界线的点的Y坐标的值。Here, external light is reflected at the front end portion of the hood. Consequently, the difference between the sum of the luminance values for the Y coordinate corresponding to the region of the hood's front end portion in the image and the sum for the Y coordinate of the region adjacent to that front end portion behaves as follows. That is, this difference is often larger than the difference between the sum of the luminance values for the Y coordinate of any region other than the front end portion and the sum for the Y coordinate of a region adjacent to such another region. Therefore, the second differential value of the sum of the luminance values for the Y coordinate of the hood's front end region is often larger than that for the Y coordinates of other regions. The specifying unit 155 therefore determines the Y coordinate value y′ of the point corresponding to the largest of the calculated second differential values, V(y′), as the Y coordinate value of the front end of the hood, that is, as the Y coordinate value of the point on the boundary line between the hood and the road. In this way, the specifying unit 155 determines the Y coordinate value y′ of the point with the highest second differential value of the luminance-value sums as the Y coordinate of the boundary between the hood and the road, so the region of the vehicle's hood to be excluded from the monitoring target area can be detected accurately. For example, when the specifying unit 155 has calculated the graph 60 of the second differential values V(y), it determines the Y coordinate value y′ of the point 61 of the graph 60 as the Y coordinate value of the point on the boundary line between the hood and the road.

或者，也可以确定部155从计算出的二次微分值中的较高的值选择规定个数，并求出分别与选择出的规定个数的值对应的点的Y坐标的值。而且也可以在其中，将在帧中位于最上方向的Y坐标y′确定为发动机罩的前端部的Y坐标的值，即、发动机罩与道路的分界线的点的Y坐标的值。根据光源的位置，存在光源映现到帧中所映现的发动机罩上的情况。这样的情况下，与发动机罩的前端部分的区域的Y坐标对应的亮度值的合计的二次微分值基本比与其他区域的Y坐标对应的亮度值的合计的二次微分值大的情况并不变，但存在在映现了光源的位置的Y坐标算出更大的二次微分值的情况。Alternatively, the specifying unit 155 may select a predetermined number of the higher values among the calculated second differential values and obtain the Y coordinate values of the points corresponding to each of the selected values. Among these, the Y coordinate y′ located uppermost in the frame may then be determined as the Y coordinate value of the front end of the hood, that is, the Y coordinate value of the point on the boundary line between the hood and the road. Depending on the position of a light source, the light source may be mirrored on the hood shown in the frame. Even in such a case, it remains true that the second differential value of the luminance-value sum for the Y coordinate of the hood's front end region is generally larger than that for the Y coordinates of other regions, but an even larger second differential value may be calculated at the Y coordinate of the position where the light source is mirrored.

若从较高的值选择规定个数，例如选择两个，则与选择出的值对应的两个点的Y坐标中，一个是发动机罩的前端部的Y坐标，另一个是映现到发动机罩上的光源的Y坐标的可能性较高。在帧中，发动机罩的前端部的Y坐标与映现到发动机罩上的光源的Y坐标相比，存在于上方向。因此，在确定部155中，将选择了多个的Y坐标中的、在帧中位于更上方向的Y坐标y′作为发动机罩的前端部的Y坐标的值。此外，确定部155从较高的值选择的规定个数的数目并不需要限定于两个。若将其设为一个，则如先前所说明的那样，选择值最大的二次微分值。If a predetermined number of the higher values are selected, for example two, it is highly likely that, of the Y coordinates of the two points corresponding to the selected values, one is the Y coordinate of the front end of the hood and the other is the Y coordinate of the light source mirrored on the hood. In the frame, the Y coordinate of the front end of the hood lies above the Y coordinate of the light source mirrored on the hood. Therefore, the specifying unit 155 takes, among the selected Y coordinates, the Y coordinate y′ located further up in the frame as the Y coordinate value of the front end of the hood. The predetermined number of values the specifying unit 155 selects from the higher values need not be limited to two. If it is set to one, the largest second differential value is selected, as described above.
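The selection step above can be sketched as follows (illustrative Python; in image coordinates a smaller Y value means higher up in the frame, and v is assumed to map each candidate row y below the vanishing point to its V(y)):

```python
def hood_boundary_y(v, k=2):
    """Among the k largest values of V(y), return the uppermost Y
    (the smallest coordinate), since a light source mirrored on the
    hood appears below the hood's front edge in the frame.

    v: dict mapping y -> V(y) for rows below the vanishing point."""
    candidates = sorted(v, key=v.get, reverse=True)[:k]
    return min(candidates)  # uppermost row has the smallest Y value
```

With k=1 this reduces to simply taking the row with the largest second differential value.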

此外，在由安装于车辆的照相机拍摄到的拍摄区域包含了发动机罩的情况下，如先前的图8所示，存在与确定出的Y坐标51相比在画面的下方向亮度的合计值f(y)再次增大的点52。另一方面，根据车辆的形状、安装于车辆的照相机的视场角，存在拍摄区域不包含发动机罩的情况。图10是表示处理帧不包含发动机罩的区域的情况下的亮度的合计值的一个例子的图。图10的例子示出了处理帧不包含发动机罩的区域的情况下的亮度的合计值f(y)的图表55。如图10的图表55所示，拍摄区域不包含发动机罩的情况下，不存在与确定出的Y坐标56相比在画面的下方向亮度的合计值f(y)再次增大的点。因此，在这样的情况下，不能够通过确定部155确定发动机罩与道路的分界线的点的Y坐标的值。In addition, when the imaging area captured by the camera mounted on the vehicle includes the hood, there is, as shown earlier in FIG. 8, a point 52 below the specified Y coordinate 51 on the screen at which the total luminance value f(y) increases again. On the other hand, depending on the shape of the vehicle and the angle of view of the camera mounted on it, the imaging area may not include the hood. FIG. 10 is a diagram showing an example of the total luminance values when the processing frames do not include a hood region. The example in FIG. 10 shows a graph 55 of the total luminance value f(y) in a case where the processing frames contain no hood region. As the graph 55 of FIG. 10 shows, when the imaging area does not include the hood, there is no point below the specified Y coordinate 56 at which the total luminance value f(y) increases again. In such a case, therefore, the specifying unit 155 cannot determine the Y coordinate value of the point on the boundary line between the hood and the road.

决定部156是基于由检测部152检测出的消失点、和由确定部155确定出的发动机罩与道路的分界线的点的Y坐标的值y′决定监视对象区域的处理部。例如,决定部156确定发动机罩与道路的分界线的点的Y坐标的值y′上的本车道的宽度两端的位置。The determination unit 156 is a processing unit that determines the monitoring target area based on the vanishing point detected by the detection unit 152 and the value y′ of the Y coordinate of the boundary between the hood and the road specified by the determination unit 155 . For example, the determining unit 156 specifies the positions of both ends of the width of the own lane on the Y coordinate value y′ of the boundary between the bonnet and the road.

图11是用于说明确定发动机罩与道路的分界线的点的Y坐标的值y′上的本车道的宽度两端的位置的处理的图。首先，决定部156获取照相机参数142。照相机参数142包含照相机40的水平视角CH[radian]、照相机40的垂直视角CV[radian]、帧的水平分辨率SH[pixel]、帧的垂直分辨率SV[pixel]、以及照相机40的设置高度HGT[m]。FIG. 11 is a diagram for explaining the process of specifying the positions of both ends of the width of the own lane at the Y coordinate value y′ of the point on the boundary line between the hood and the road. First, the determination unit 156 acquires the camera parameters 142. The camera parameters 142 include the horizontal angle of view CH[radian] of the camera 40, the vertical angle of view CV[radian] of the camera 40, the horizontal resolution SH[pixel] of the frame, the vertical resolution SV[pixel] of the frame, and the installation height HGT[m] of the camera 40.

在图11中,40a表示照相机视场,40b表示消失点的位置。另外,41在与照相机40的距离为d[m]的投影面SV上,与检测出检测对象的检测位置对应。另外,图11的θ[radian]是连接照相机40与消失点40b的直线、和连接照相机40与检测位置41的直线所成的角。另外,cy[pixel]是消失点40b与检测位置41的垂直方向的距离。In FIG. 11 , 40a denotes the field of view of the camera, and 40b denotes the position of the vanishing point. In addition, 41 corresponds to the detection position where the detection object is detected on the projection plane SV whose distance from the camera 40 is d[m]. In addition, θ[radian] in FIG. 11 is an angle formed by a straight line connecting the camera 40 and the vanishing point 40 b and a straight line connecting the camera 40 and the detection position 41 . In addition, cy[pixel] is the vertical distance between the vanishing point 40b and the detection position 41 .

这里,式(3)成立,所以θ由式(4)表示。另外,通过使用θ,距离d由式(5)表示。Here, Equation (3) holds, so θ is expressed by Equation (4). Also, by using θ, the distance d is represented by Equation (5).

cy/SV=θ/CV···(3)

θ=CV×cy/SV···(4)

d=HGT/tan(θ)···(5)

另外，检测对象的Y坐标的值为y″的情况下的、连接照相机40以及消失点40b的直线、和连接照相机40以及检测位置41的直线所成的角θ(y″)[radian]由式(6)表示。In addition, when the Y coordinate value of the detection object is y″, the angle θ(y″)[radian] formed by the straight line connecting the camera 40 and the vanishing point 40b and the straight line connecting the camera 40 and the detection position 41 is expressed by Equation (6).

θ(y″)=CV×ABS(VanY-y″)/SV···(6)

这里,VanY[pixel]表示帧上的消失点的Y坐标的值。另外,ABS(X)是表示X的绝对值的函数。Here, VanY[pixel] represents the value of the Y coordinate of the vanishing point on the frame. In addition, ABS(X) is a function representing the absolute value of X.

另外,检测对象的Y坐标的值为y″的情况下的、照相机40与投影面SV的距离d(y″)[m]由式(7)表示。In addition, when the value of the Y coordinate of the detection object is y", the distance d(y'') [m] between the camera 40 and the projection plane SV is represented by Formula (7).

d(y″)=HGT/tan(θ(y″))···(7)

这里，若像素的像素纵横比为1:1(纵:横)，则Y坐标的值y″下的与X方向的SH[pixel]相当的距离SHd(y″)[m]由式(8)表示。Here, if the pixel aspect ratio is 1:1 (vertical:horizontal), the distance SHd(y″)[m] corresponding to SH[pixel] in the X direction at the Y coordinate value y″ is expressed by Equation (8).

SHd(y″)=d(y″)×tan(CH/2)×2···(8)

这里,若将车道宽度设为Wd[m],则帧上的道路宽度W(y″)[pixel]由式(9)表示。Here, assuming that the lane width is Wd[m], the road width W(y″)[pixel] on the frame is expressed by Equation (9).

W(y″)=SH×Wd/SHd(y″)···(9)

这里,若将发动机罩与道路的分界线的点的Y坐标的值设为y′,则由式(10),表示发动机罩与道路的分界线的道路宽度W(y′)。Here, assuming that the value of the Y coordinate of the point of the boundary between the bonnet and the road is y', the road width W(y') of the boundary between the bonnet and the road is expressed by Equation (10).

W(y′)=SH×Wd/SHd(y′)···(10)

然后,决定部156确定发动机罩与道路的分界线的点的Y坐标的值y′下的本车道的宽度两端的位置p1(VanX-(W(y′)/2),y′)、以及p2(VanX+(W(y′)/2),y′)。此外,VanX[pixel]表示帧上的消失点的X坐标的值。Then, the determining unit 156 determines the positions p1(VanX−(W(y′)/2), y′) of both ends of the width of the own lane at the value y′ of the Y coordinate of the boundary point between the bonnet and the road, and p2(VanX+(W(y')/2), y'). Also, VanX[pixel] represents the value of the X coordinate of the vanishing point on the frame.
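Putting Equations (6) to (8) and (10) together, the positions p1 and p2 can be sketched as follows (illustrative Python; the lane width Wd in metres is an assumed input, since the patent only names it as a parameter):

```python
import math

def lane_edge_points(y_hood, van_x, van_y, CH, CV, SH, SV, HGT, Wd=3.5):
    """Positions p1, p2 at both ends of the lane width on the hood/road
    boundary row y_hood, following Equations (6)-(8) and (10)."""
    theta = CV * abs(van_y - y_hood) / SV    # Eq. (6): angle below vanishing point
    d = HGT / math.tan(theta)                # Eq. (7): distance to projection plane
    shd = d * math.tan(CH / 2) * 2           # Eq. (8): metres spanned by SH pixels
    w = SH * Wd / shd                        # Eq. (10): lane width in pixels
    p1 = (van_x - w / 2, y_hood)
    p2 = (van_x + w / 2, y_hood)
    return p1, p2
```

The monitoring target area is then the region connecting the vanishing point with p1 and p2, as described below for the determination unit 156.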

若确定出本车道的宽度两端的位置p1以及p2,则决定部156将连接消失点、位置p1、以及位置p2的区域决定为监视对象区域。图12是表示监视对象区域的一个例子的图。如图12的例子所示,决定部156在帧60上,将连接消失点61、和本车道的宽度两端的位置p1以及p2的区域决定为监视对象区域63。When the positions p1 and p2 at both ends of the width of the own lane are identified, the determination unit 156 determines an area connecting the vanishing point, the position p1, and the position p2 as the monitoring target area. FIG. 12 is a diagram showing an example of a monitoring target area. As shown in the example of FIG. 12 , the determination unit 156 determines, on the frame 60 , an area connecting the vanishing point 61 and the positions p1 and p2 at both ends of the lane width as the monitoring target area 63 .

距离计算部157是对行车记录仪信息141所包含的各帧，设定由决定部156决定的监视对象区域，检测监视对象区域中的检测对象的物体，并计算检测出的物体与照相机的距离的处理部。The distance calculation unit 157 is a processing unit that sets the monitoring target area determined by the determination unit 156 for each frame included in the drive recorder information 141, detects a detection target object in the monitoring target area, and calculates the distance between the detected object and the camera.

例如，距离计算部157获取行车记录仪信息141以及照相机参数142，对行车记录仪信息141所包含的各帧设定监视对象区域。然后，距离计算部157按照每个帧，试着进行监视对象区域中所存在的检测对象的物体的检测。此外，距离计算部157能够使用公知技术试着进行物体的检测。在能够检测到物体的情况下，距离计算部157使用下面的参数，通过式(11)计算检测对象的物体的X坐标的值为x″、Y坐标的值为y″的情况下的、照相机40与物体的距离D(x″,y″)。即，距离计算部157使用发动机罩与道路的分界线的点的Y坐标的值y′、消失点的位置(VanX,VanY)、上述的水平视角CH、垂直视角CV、水平分辨率SH、垂直分辨率SV、以及设置高度HGT，计算距离D(x″,y″)。For example, the distance calculation unit 157 acquires the drive recorder information 141 and the camera parameters 142, and sets the monitoring target area in each frame included in the drive recorder information 141. Then, for each frame, the distance calculation unit 157 attempts to detect a detection target object existing in the monitoring target area. The distance calculation unit 157 can attempt this object detection using a known technique. When an object can be detected, the distance calculation unit 157 uses the following parameters to calculate, by Equation (11), the distance D(x″, y″) between the camera 40 and the object when the X coordinate value of the detection target object is x″ and its Y coordinate value is y″. That is, the distance calculation unit 157 calculates the distance D(x″, y″) using the Y coordinate value y′ of the point on the boundary line between the hood and the road, the position (VanX, VanY) of the vanishing point, and the above-mentioned horizontal angle of view CH, vertical angle of view CV, horizontal resolution SH, vertical resolution SV, and installation height HGT.

D(x″,y″)=SHd(y″)×(x″-VanX)/SH···(11)
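A sketch implementing Equation (11) verbatim (illustrative Python; note that, as written, the equation scales the horizontal pixel offset x″−VanX by SHd(y″)/SH, so the result's sign follows which side of the vanishing point the object is on):

```python
import math

def distance_d(x, y, van_x, van_y, CH, CV, SH, SV, HGT):
    """D(x'', y'') from Equation (11), with SHd(y'') obtained from
    Equations (6)-(8), implemented exactly as the patent states them."""
    theta = CV * abs(van_y - y) / SV    # Eq. (6)
    d = HGT / math.tan(theta)           # Eq. (7)
    shd = d * math.tan(CH / 2) * 2      # Eq. (8)
    return shd * (x - van_x) / SH       # Eq. (11)
```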

判定部158是基于由距离计算部157计算出的检测对象的物体与照相机的距离、行车记录仪信息141等的加速度、速度等,判定是否产生了所谓的“潜在事故”的处理部。The determination unit 158 is a processing unit that determines whether a so-called "potential accident" has occurred based on the distance between the object to be detected and the camera calculated by the distance calculation unit 157, the acceleration, the speed, etc. of the drive recorder information 141, and the like.

例如,判定部158基于由距离计算部157计算出的检测对象的物体与照相机的距离、行车记录仪信息141等的加速度、速度等,按照每个帧,判定是否产生了“潜在事故”。此外,判定部158能够使用公知技术,判定是否产生了“潜在事故”。然后,判定部158将每个帧的判定结果输出给显示部130,使显示部130显示判定结果。For example, the determination unit 158 determines whether a “potential accident” has occurred for each frame based on the distance between the object to be detected and the camera calculated by the distance calculation unit 157 , the acceleration and velocity of the drive recorder information 141 , and the like. In addition, the judging unit 158 can judge whether or not a "potential accident" has occurred using a known technique. Then, the determination unit 158 outputs the determination result for each frame to the display unit 130 and causes the display unit 130 to display the determination result.

接下来,对本实施例2所涉及的图像处理装置100的处理顺序进行说明。图13是表示本实施例2所涉及的图像处理装置的处理顺序的流程图。例如,图13的流程图示出的处理以受理了处理执行命令为契机、执行。图像处理装置100可以从输入部120受理处理执行命令,也可以经由通信部110从其他的装置受理。Next, the processing procedure of the image processing apparatus 100 according to the second embodiment will be described. FIG. 13 is a flowchart showing the processing procedure of the image processing device according to the second embodiment. For example, the processing shown in the flowchart of FIG. 13 is executed upon receipt of a processing execution command. The image processing device 100 may receive a processing execution command from the input unit 120 , or may receive it from another device via the communication unit 110 .

如图13所示,图像处理装置100参照行车记录仪信息141,确定安装了行车记录仪以及照相机的车辆进行直线行驶的情况下拍摄到的处理帧(步骤S101)。图像处理装置100在各处理帧中,检测候选点(步骤S102)。图像处理装置100计算各处理帧的候选点的位置的平均位置,并将计算出的平均位置作为消失点,从而检测消失点(步骤S103)。As shown in FIG. 13 , the image processing device 100 refers to the drive recorder information 141 to specify processing frames captured when a vehicle equipped with a drive recorder and a camera is traveling straight (step S101 ). The image processing device 100 detects candidate points in each processing frame (step S102). The image processing apparatus 100 calculates the average position of the positions of the candidate points in each processing frame, and uses the calculated average position as the vanishing point to detect the vanishing point (step S103 ).

图像处理装置100根据处理帧生成差分图像(步骤S104)。图像处理装置100根据生成的差分图像生成平均图像(步骤S105)。图像处理装置100按照平均图像的纵向的每个像素计算平均图像的横向的像素的亮度值的合计(步骤S106)。The image processing device 100 generates a difference image from the processed frame (step S104). The image processing device 100 generates an average image from the generated difference image (step S105). The image processing device 100 calculates the sum of the luminance values of the pixels in the horizontal direction of the average image for each pixel in the vertical direction of the average image (step S106 ).

图像处理装置100从对平均图像的所有的行计算出的像素的亮度值的合计中,确定与值最大的合计对应的Y坐标(步骤S107)。图像处理装置100对与确定出的Y坐标相比位于画面的下方向且为整数的所有的值y,计算f(y)的二次微分值V(y)(步骤S108)。图像处理装置100根据计算出的二次微分值V(y)中的较大的值计算与规定个数的二次微分值对应的点的Y坐标的值,并进行以下那样的处理。即,图像处理装置100将计算出的Y坐标中的、在帧中位于最上方向的Y坐标的值y′确定为发动机罩与道路的分界线的点的Y坐标的值(步骤S109)。The image processing device 100 specifies the Y-coordinate corresponding to the maximum sum among the sums of the luminance values of the pixels calculated for all the rows of the average image (step S107 ). The image processing device 100 calculates the quadratic differential value V(y) of f(y) for all values y that are integers that are lower than the specified Y coordinate on the screen (step S108 ). The image processing device 100 calculates Y-coordinate values of points corresponding to a predetermined number of quadratic differential values from the larger value among the calculated quadratic differential values V(y), and performs the following processing. That is, the image processing device 100 determines the Y-coordinate value y' positioned uppermost in the frame among the calculated Y-coordinates as the Y-coordinate value of the boundary point between the hood and the road (step S109 ).

图像处理装置100确定发动机罩与道路的分界线的点的Y坐标的值y′下的本车道的宽度两端的位置p1、p2(步骤S110)。图像处理装置100判定行车记录仪信息141所包含的帧中,是否存在未选择的帧(步骤S111)。在没有未选择的帧的情况下(步骤S111:否),结束处理。The image processing device 100 specifies the positions p1 and p2 at both ends of the width of the own lane at the Y-coordinate value y′ of the boundary point between the bonnet and the road (step S110 ). The image processing device 100 determines whether there is an unselected frame among the frames included in the drive recorder information 141 (step S111 ). When there is no unselected frame (step S111: NO), the process ends.

另一方面,在存在未选择的帧的情况下(步骤S111:是),图像处理装置100选择一个未选择的帧(步骤S112)。图像处理装置100在选择的帧设定监视对象区域(步骤S113)。图像处理装置100试着进行在所设定的监视对象区域中存在的检测对象的物体的检测(步骤S114)。On the other hand, when there are unselected frames (step S111: Yes), the image processing apparatus 100 selects one unselected frame (step S112). The image processing apparatus 100 sets a monitoring target area in the selected frame (step S113). The image processing device 100 attempts to detect a detection target object existing in the set monitoring target area (step S114 ).

图像处理装置100判定是否能够检测到物体(步骤S115)。在能够检测到物体的情况下(步骤S115:是),图像处理装置100计算照相机40与物体的距离(步骤S116)。图像处理装置100基于检测对象的物体与照相机的距离、行车记录仪信息141等的加速度、速度等,判定在选择的帧中,是否产生了“潜在事故”(步骤S117)。图像处理装置100将判定结果输出给显示部130,使显示部130显示判定结果(步骤S118),并返回到步骤S111。另一方面,在未能够检测到物体的情况下(步骤S115:否),也返回到步骤S111。The image processing device 100 determines whether or not an object can be detected (step S115). When the object can be detected (step S115: Yes), the image processing device 100 calculates the distance between the camera 40 and the object (step S116). The image processing device 100 determines whether a "potential accident" has occurred in the selected frame based on the distance between the object to be detected and the camera, the acceleration and speed of the drive recorder information 141, etc. (step S117). The image processing apparatus 100 outputs the determination result to the display unit 130, causes the display unit 130 to display the determination result (step S118), and returns to step S111. On the other hand, also when an object cannot be detected (step S115: NO), it returns to step S111.

接下来，对本实施例2所涉及的图像处理装置100的效果进行说明。图像处理装置100基于在一个视频数据内邻接的两个处理帧的差分，生成视频数据中的差分图像的平均图像，并按照生成的平均图像的纵向的每个像素计算平均图像的横向的像素的亮度值的合计。图像处理装置100从平均图像中的消失点的位置的下方向的点中的、亮度值的合计的二次微分值的值较高的点，选择规定个数的点，并从选择出的规定个数的点中确定发动机罩与道路的分界线的点的Y坐标的值y′。例如，在发动机罩的前端部分外部光被反射。因此，对应于发动机罩的前端部分的区域的Y坐标的亮度值的合计与对应于与前端部分邻接的区域的Y坐标的亮度值的合计的差如以下那样。即，这样的差比对应于前端部分的区域以外的其他区域的Y坐标的亮度值的合计与对应于与这样的其他区域邻接的区域的Y坐标的亮度值的合计的差大的情况较多。因此，与发动机罩的前端部分的区域的Y坐标对应的亮度值的合计的二次微分值比与其他区域的Y坐标对应的亮度值的合计的二次微分值大的情况较多。另外，根据光源的位置，存在光源映现到在帧中映现的发动机罩上的情况。这样的情况下，与发动机罩的前端部分的区域的Y坐标对应的亮度值的合计的二次微分值基本比与其他区域的Y坐标对应的二次微分值大的情况并不变，但存在在映现了光源的位置的Y坐标计算到更大的二次微分值的情况。根据图像处理装置100，从亮度值的合计的二次微分值较高的点，选择规定个数的点，并从选择出的规定个数的点中确定发动机罩与道路的分界线的点，所以能够正确地检测从监视对象区域排除的车辆的发动机罩的区域。Next, effects of the image processing device 100 according to the second embodiment will be described. The image processing device 100 generates an average image of the difference images in the video data based on the differences between adjacent processing frames in one piece of video data, and calculates, for each pixel in the vertical direction of the generated average image, the sum of the luminance values of the pixels in its horizontal direction. The image processing device 100 selects a predetermined number of points from those points below the position of the vanishing point in the average image whose second differential value of the luminance-value sum is high, and determines, from among the selected points, the Y coordinate value y′ of the point on the boundary line between the hood and the road. For example, external light is reflected at the front end portion of the hood. Consequently, the difference between the sum of the luminance values for the Y coordinate corresponding to the region of the hood's front end portion and the sum for the Y coordinate of the region adjacent to that front end portion behaves as follows. That is, this difference is often larger than the difference between the sum of the luminance values for the Y coordinate of any region other than the front end portion and the sum for the Y coordinate of a region adjacent to such another region. Therefore, the second differential value of the sum of the luminance values for the Y coordinate of the hood's front end region is often larger than that for the Y coordinates of other regions. Moreover, depending on the position of a light source, the light source may be mirrored on the hood shown in the frame. Even in such a case, it remains true that the second differential value for the Y coordinate of the hood's front end region is generally larger than that for the Y coordinates of other regions, but an even larger second differential value may be calculated at the Y coordinate of the position where the light source is mirrored. According to the image processing device 100, a predetermined number of points are selected from the points with a high second differential value of the luminance-value sums, and the point on the boundary line between the hood and the road is determined from the selected points, so the region of the vehicle's hood to be excluded from the monitoring target area can be detected accurately.

另外，图像处理装置100基于加速度、移动等与移动体的移动有关的信息，从由安装于车辆等移动体的照相机40拍摄到的视频数据的多个帧中，确定移动体进行直线行驶的情况下拍摄到的处理帧。图像处理装置100按照确定出的每个处理帧，提取在画面左侧的区域中从左下朝向右上方向的直线、和在画面右侧的区域中从右下朝向左上方向的直线，并基于提取出的直线的交点，检测消失点。图像处理装置100基于检测到的消失点、和确定出的发动机罩与道路的分界线的点，决定监视对象区域。这里，处理帧是车辆进行直线行驶的情况下拍摄到的帧。因此，从处理帧检测到的直线是朝向消失点延伸的直线。另一方面，从车辆在转弯处行驶的情况下拍摄到的帧检测出的直线不为朝向消失点延伸的直线的情况较多。因此，图像处理装置100从多个帧中的处理帧检测直线来检测消失点，所以能够正确地检测消失点。In addition, based on information related to the movement of the moving body, such as acceleration and movement, the image processing device 100 identifies, from among the plurality of frames of the video data captured by the camera 40 mounted on a moving body such as a vehicle, the processing frames captured while the moving body was traveling straight. For each identified processing frame, the image processing device 100 extracts a straight line running from lower left toward upper right in the left-side region of the screen and a straight line running from lower right toward upper left in the right-side region of the screen, and detects the vanishing point based on the intersections of the extracted straight lines. The image processing device 100 determines the monitoring target area based on the detected vanishing point and the identified point on the boundary line between the hood and the road. Here, the processing frames are frames captured while the vehicle was traveling straight. Therefore, the straight lines detected from the processing frames extend toward the vanishing point. On the other hand, straight lines detected from frames captured while the vehicle was driving through a curve often do not extend toward the vanishing point. Therefore, since the image processing device 100 detects the vanishing point by detecting straight lines from the processing frames among the plurality of frames, it can detect the vanishing point accurately.

另外,图像处理装置100使用发动机罩与道路的分界线的点的Y坐标的值y′、消失点的位置(VanX,VanY)、上述的水平视角CH、垂直视角CV、水平分辨率SH、垂直分辨率SV、以及设置高度HGT,计算照相机40与物体的距离。因此,根据图像处理装置100,使用精度良好地检测出的消失点计算照相机40与物体的距离,所以能够精度良好地计算照相机40与物体的距离。In addition, the image processing device 100 uses the value y' of the Y coordinate of the boundary between the hood and the road, the position of the vanishing point (VanX, VanY), the above-mentioned horizontal angle of view CH, vertical angle of view CV, horizontal resolution SH, vertical The resolution SV, and the setting height HGT calculate the distance between the camera 40 and the object. Therefore, according to the image processing device 100 , the distance between the camera 40 and the object is calculated using the precisely detected vanishing point, so the distance between the camera 40 and the object can be calculated accurately.

接下来,对执行实现与上述的实施例示出的图像处理装置相同的功能的图像处理程序的计算机的一个例子进行说明。图14是表示执行图像处理程序的计算机的一个例子的图。Next, an example of a computer that executes an image processing program that realizes the same functions as those of the image processing apparatus described in the above-mentioned embodiments will be described. FIG. 14 is a diagram showing an example of a computer that executes an image processing program.

如图14所示,计算机200具有执行各种运算处理的CPU201、受理来自用户的数据的输入的输入装置202、以及显示器203。另外,计算机200具有从存储介质读取程序等的读取装置204、和经由网络在与其他的计算机之间进行数据的交换的接口装置205。另外,计算机200具有暂时存储各种信息的RAM206、和硬盘装置207。而且,各装置201~207与总线208连接。As shown in FIG. 14 , the computer 200 has a CPU 201 that executes various arithmetic processes, an input device 202 that accepts input of data from a user, and a display 203 . In addition, the computer 200 has a reading device 204 for reading a program or the like from a storage medium, and an interface device 205 for exchanging data with other computers via a network. In addition, the computer 200 has a RAM 206 for temporarily storing various information, and a hard disk device 207 . Furthermore, the respective devices 201 to 207 are connected to the bus 208 .

硬盘装置207例如具有生成程序207a、计算程序207b、以及确定程序207c。CPU201读出各程序207a~207c并展开在RAM206。The hard disk device 207 has, for example, a generation program 207a, a calculation program 207b, and a determination program 207c. The CPU 201 reads out the programs 207a to 207c and loads them into the RAM 206.

生成程序207a作为生成进程206a发挥作用。计算程序207b作为计算进程206b发挥作用。确定程序207c作为确定进程206c发挥作用。The generation program 207a functions as the generation process 206a. The calculation program 207b functions as a calculation process 206b. The determination program 207c functions as a determination process 206c.

For example, the generation process 206a corresponds to the generation units 11 and 153, the calculation process 206b to the calculation units 12 and 154, and the determination process 206c to the determination units 13 and 155.

The programs 207a to 207c do not necessarily have to be stored in the hard disk device 207 from the beginning. For example, each program may be stored on a "portable physical medium" inserted into the computer 200, such as a flexible disk (FD), CD-ROM, DVD, magneto-optical disk, or IC card, and the computer 200 may then read out and execute the programs 207a to 207c from that physical medium.

In addition, a frame determination program, a detection program, a decision program, a distance calculation program, and a judgment program may further be stored in the hard disk device 207. In this case, the CPU 201 reads out these programs in addition to the programs 207a to 207c and loads them into the RAM 206. The frame determination program functions as a frame determination process, the detection program as a detection process, the decision program as a decision process, the distance calculation program as a distance calculation process, and the judgment program as a judgment process. For example, the frame determination process corresponds to the frame determination unit 151, the detection process to the detection unit 152, the decision process to the decision unit 156, the distance calculation process to the distance calculation unit 157, and the judgment process to the judgment unit 158.

Symbol Description

10... image processing device; 11... generation unit; 12... calculation unit; 13... determination unit.

Claims (5)

1. An image processing device comprising:
   a generation unit that generates an average image of difference images in one piece of video data, based on differences between two adjacent frames in the video data;
   a calculation unit that calculates, for each point in the vertical direction of the average image, a total of luminance values in the horizontal direction of the average image; and
   a determination unit that selects a predetermined number of points, from among points below the position of a vanishing point in the average image, at which the second-order differential value of the total of the luminance values is high, and determines, from among the selected predetermined number of points, a point on a boundary line between a hood and a road.

2. The image processing device according to claim 1, further comprising:
   a frame determination unit that determines, based on information on movement of a mobile body, frames captured while the mobile body travels straight, from among a plurality of frames of video data captured by a camera mounted on the mobile body;
   a detection unit that extracts, for each frame determined by the frame determination unit, straight lines running from lower left toward upper right in a region on the left side of the screen and straight lines running from lower right toward upper left in a region on the right side of the screen, and detects the vanishing point based on intersections of the extracted straight lines; and
   a decision unit that decides a monitoring target region based on the vanishing point detected by the detection unit and the point on the boundary line between the hood and the road determined by the determination unit.

3. The image processing device according to claim 2, further comprising:
   a distance calculation unit that calculates a distance to an object in the monitoring target region, based on the point on the boundary line between the hood and the road determined by the determination unit, the vanishing point detected by the detection unit, the horizontal and vertical angles of view of the camera, the installation height of the camera, and resolution information of the camera.

4. An image processing method executed by a computer, the method comprising:
   generating an average image of difference images in one piece of video data, based on differences between two adjacent frames in the video data;
   calculating, for each point in the vertical direction of the average image, a total of luminance values in the horizontal direction of the average image; and
   selecting a predetermined number of points, from among points below the position of a vanishing point in the average image, at which the second-order differential value of the total of the luminance values is high, and determining, from among the selected predetermined number of points, a point on a boundary line between a hood and a road.

5. An image processing program that causes a computer to execute a process comprising:
   generating an average image of difference images in one piece of video data, based on differences between two adjacent frames in the video data;
   calculating, for each point in the vertical direction of the average image, a total of luminance values in the horizontal direction of the average image; and
   selecting a predetermined number of points, from among points below the position of a vanishing point in the average image, at which the second-order differential value of the total of the luminance values is high, and determining, from among the selected predetermined number of points, a point on a boundary line between a hood and a road.
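The boundary-detection steps recited in claims 1, 4, and 5 can be sketched as follows. This is an illustrative NumPy version under stated assumptions (grayscale frames of equal size, absolute frame differences, np.gradient as the discrete derivative); the function and variable names are invented for the example, and the exact selection rule of the embodiment may differ.

```python
import numpy as np

def hood_boundary_candidates(frames, van_y, num_points=5):
    """Sketch of the claimed pipeline.

    1. Average the absolute differences of adjacent frames.
    2. Sum luminance along each row (horizontal direction).
    3. Take the second derivative of the row sums and, among rows
       below the vanishing-point row van_y, return the num_points
       rows with the highest second-derivative values as candidates
       for the hood/road boundary.
    """
    frames = [f.astype(np.float64) for f in frames]
    diffs = [np.abs(b - a) for a, b in zip(frames, frames[1:])]
    avg = np.mean(diffs, axis=0)                  # average difference image
    row_sums = avg.sum(axis=1)                    # total luminance per row
    second_deriv = np.gradient(np.gradient(row_sums))
    below = np.arange(van_y + 1, len(row_sums))   # rows below vanishing point
    order = np.argsort(second_deriv[below])[::-1] # descending by 2nd derivative
    return below[order[:num_points]]              # candidate boundary rows
```

On a synthetic clip where only a narrow band of rows changes between frames, the candidates cluster at the edges of that band, which is exactly where the row-sum profile bends most sharply.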
CN201280075307.7A · Priority date: 2012-08-31 · Filing date: 2012-08-31 · Image processing device and image processing method · Status: Expired - Fee Related · Granted as CN104584076B (en)

Applications Claiming Priority (1)

Application Number · Priority date · Filing date · Title
PCT/JP2012/072268 (WO2014033936A1) · 2012-08-31 · 2012-08-31 · Image processing device, image processing method, and image processing program

Publications (2)

Publication Number · Publication Date
CN104584076A · 2015-04-29
CN104584076B · 2017-05-10

Family

Family ID: 50182782

Family Applications (1)

Application Number · Title · Priority date · Filing date
CN201280075307.7A (Expired - Fee Related) · Image processing device and image processing method · 2012-08-31 · 2012-08-31

Country Status (4)

Country · Link
US · US9454704B2 (en)
JP · JP5943084B2 (en)
CN · CN104584076B (en)
WO · WO2014033936A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number · Priority date / Publication date · Assignee · Title
CN107016329A (en)* · 2015-11-20 / 2017-08-04 · Panasonic Intellectual Property Corporation of America · Image processing method
CN112819783A (en)* · 2021-01-30 / 2021-05-18 · Tongji University · Engine cylinder carbon deposition identification method and device based on image background difference

Families Citing this family (14)

Publication number · Priority date / Publication date · Assignee · Title
JP6191160B2 (en)* · 2012-07-12 / 2017-09-06 · Noritsu Precision Co., Ltd. · Image processing program and image processing apparatus
WO2014061123A1 (en)* · 2012-10-17 / 2014-04-24 · Fujitsu Limited · Image processing device, image processing program and image processing method
JP6264173B2 (en)* · 2014-04-18 / 2018-01-24 · Fujitsu Limited · Normality determination method for imaging direction, imaging device attachment state evaluation program, and imaging device attachment state evaluation device
JP6299371B2 (en) · 2014-04-18 / 2018-03-28 · Fujitsu Limited · Imaging direction inclination detection method, imaging direction inclination detection program, and imaging direction inclination detection apparatus
JP6299373B2 (en) · 2014-04-18 / 2018-03-28 · Fujitsu Limited · Imaging direction normality determination method, imaging direction normality determination program, and imaging direction normality determination apparatus
JPWO2015162910A1 (en)* · 2014-04-24 / 2017-04-13 · Panasonic IP Management Co., Ltd. · In-vehicle display device, control method for in-vehicle display device, and program
US11505292B2 (en)* · 2014-12-31 / 2022-11-22 · FLIR Belgium BVBA · Perimeter ranging sensor systems and methods
JP6735659B2 (en)* · 2016-12-09 / 2020-08-05 · Hitachi, Ltd. · Driving support information collection device
US12084155B2 (en) · 2017-06-16 / 2024-09-10 · FLIR Belgium BVBA · Assisted docking graphical user interface systems and methods
US12205473B2 (en) · 2017-06-16 / 2025-01-21 · FLIR Belgium BVBA · Collision avoidance systems and methods
JP6694902B2 (en)* · 2018-02-28 / 2020-05-20 · Hitachi Kokusai Electric Inc. · Video coding apparatus and video coding method
JP6694905B2 (en)* · 2018-03-13 / 2020-05-20 · Hitachi Kokusai Electric Inc. · Video coding apparatus and video coding method
US12117832B2 (en) · 2018-10-31 / 2024-10-15 · FLIR Belgium BVBA · Dynamic proximity alert systems and methods
US11132562B2 (en)* · 2019-06-19 / 2021-09-28 · Toyota Motor Engineering & Manufacturing North America, Inc. · Camera system to detect unusual circumstances and activities while driving

Citations (7)

Publication number · Priority date / Publication date · Assignee · Title
US4931937A (en)* · 1987-09-01 / 1990-06-05 · Aisin Seiki Kabushiki Kaisha · Distance detector mounted on vehicle
EP1033693A2 (en)* · 1999-03-01 / 2000-09-06 · Yazaki Corporation · Rear and side view monitor with camera for a vehicle
WO2008089964A2 (en)* · 2007-01-23 / 2008-07-31 · Valeo Schalter und Sensoren GmbH · Method and system for video-based road lane curvature measurement
JP2009023560A (en)* · 2007-07-20 / 2009-02-05 · Toyota Motor Corp · Driving assistance device
CN101454814A (en)* · 2006-05-26 / 2009-06-10 · Fujitsu Limited · Vehicle type judging device, program and method
CN101608924A (en)* · 2009-05-20 / 2009-12-23 · University of Electronic Science and Technology of China · A lane line detection method based on gray level estimation and cascaded Hough transform
CN101876535A (en)* · 2009-12-02 / 2010-11-03 · Beijing Vimicro Corporation · Method, device and monitoring system for height measurement

Family Cites Families (17)

Publication number · Priority date / Publication date · Assignee · Title
US4079519A (en)* · 1976-08-12 / 1978-03-21 · Carmouche William J · Road delineators
US5245422A (en)* · 1991-06-28 / 1993-09-14 · Zexel Corporation · System and method for automatically steering a vehicle within a lane in a road
US8255144B2 (en)* · 1997-10-22 / 2012-08-28 · Intelligent Technologies International, Inc. · Intra-vehicle information conveyance system and method
JP2000315255A (en) · 1999-03-01 / 2000-11-14 · Yazaki Corp · Rear side monitoring device for vehicle and rear side monitoring alarm device for vehicle
JP3823760B2 (en)* · 2001-05-28 / 2006-09-20 · NEC Corporation · Robot equipment
TWI246665B (en)* · 2001-07-12 / 2006-01-01 · Ding-Jang Tzeng · Method for aiding the driving safety of road vehicle by monocular computer vision
JP3822468B2 (en)* · 2001-07-18 / 2006-09-20 · Toshiba Corporation · Image processing apparatus and method
JP3868915B2 (en) · 2003-03-12 / 2007-01-17 · Toshiba Corporation · Forward monitoring apparatus and method
JP4258385B2 (en)* · 2004-01-14 / 2009-04-30 · Denso Corporation · Road surface reflection detector
JP3931891B2 (en) · 2004-07-05 / 2007-06-20 · Nissan Motor Co., Ltd. · In-vehicle image processing device
JP2006264416A (en)* · 2005-03-22 / 2006-10-05 · Takata Corp · Object detection system, protection system, and vehicle
JP4551313B2 (en)* · 2005-11-07 / 2010-09-29 · Honda Motor Co., Ltd. · Automobile
US7693629B2 (en)* · 2006-11-14 / 2010-04-06 · Denso Corporation · Onboard fog determining apparatus
JP2008257378A (en) · 2007-04-03 / 2008-10-23 · Honda Motor Co., Ltd. · Object detection device
JP2011080859A (en)* · 2009-10-07 / 2011-04-21 · Nippon Telegraph and Telephone Corporation · Apparatus, method and program for detection of water surface boundary
DE112012006147B8 (en)* · 2012-03-29 / 2018-09-06 · Toyota Jidosha Kabushiki Kaisha · Road surface condition determination means
US20150168174A1 (en)* · 2012-06-21 / 2015-06-18 · Cellepathy Ltd. · Navigation instructions


Cited By (4)

Publication number · Priority date / Publication date · Assignee · Title
CN107016329A (en)* · 2015-11-20 / 2017-08-04 · Panasonic Intellectual Property Corporation of America · Image processing method
CN107016329B (en)* · 2015-11-20 / 2021-12-21 · Panasonic Intellectual Property Corporation of America · Image processing method
CN112819783A (en)* · 2021-01-30 / 2021-05-18 · Tongji University · Engine cylinder carbon deposition identification method and device based on image background difference
CN112819783B (en)* · 2021-01-30 / 2022-05-17 · Tongji University · Engine cylinder carbon deposition identification method and device based on image background difference

Also Published As

Publication number · Publication date
CN104584076B · 2017-05-10
JPWO2014033936A1 · 2016-08-08
US9454704B2 · 2016-09-27
WO2014033936A1 · 2014-03-06
JP5943084B2 · 2016-06-29
US20150154460A1 · 2015-06-04


Legal Events

Code · Title
C06 / PB01 · Publication
C10 / SE01 · Entry into force of request for substantive examination
GR01 · Patent grant
CF01 · Termination of patent right due to non-payment of annual fee (granted publication date: 2017-05-10)
