




Technical Field
Embodiments of the present invention relate to the technical field of vehicle safety, and in particular to a method for panoramic pedestrian positioning and early warning for trucks based on 360-degree fisheye cameras.
Background Art
At present, vehicle-mounted AVM (around view monitor) systems are increasingly widely used as panoramic visual driving aids. Such a system typically uses cameras to form a bird's-eye view of the 360-degree surroundings while the vehicle is parking at low speed. For trucks and similar vehicles, the high body height leaves the driver with a large close-range blind zone, so that when the vehicle starts or parks, adults (and especially minors) standing in the blind zone may not be noticed in time, leading to traffic accidents.
Summary of the Invention
In view of the above problems, embodiments of the present invention provide a truck panoramic pedestrian positioning and early warning method based on 360-degree fisheye cameras, which is used to solve the prior-art problem that pedestrians fall within the driver's visual blind zone while a truck is parking or starting.
According to one aspect of an embodiment of the present invention, a truck panoramic pedestrian positioning and early warning method based on 360-degree fisheye cameras is provided. The method comprises the following steps (a minimal code sketch of the overall flow is given after the step list):
Step 1: collect 360° images of the environment around the vehicle body through the fisheye cameras installed on the truck;
Step 2: construct a 2D bird's-eye view of the 360° range around the vehicle from the collected images;
Step 3: identify pedestrians in the 2D bird's-eye view and obtain the distance between each pedestrian and the vehicle;
Step 4: judge, according to the distance between the pedestrian and the vehicle, whether the pedestrian is within a preset non-safe area;
Step 5: if the pedestrian is within the non-safe area, send warning information.
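As a hedged illustration only (not part of the original disclosure), the following Python sketch strings the five steps together for one detection cycle. The detection, mapping, zone-classification and notification routines are passed in as hypothetical callables standing in for the procedures detailed in the embodiments below; the stitched bird's-eye view of step 2 is treated as a calibration/display product and is not re-computed here.

```python
def run_warning_cycle(frames, detect_shoes, map_to_birdseye, classify_zone, notify):
    """One pedestrian-warning cycle over the fisheye frames (step 1 output).

    frames          -- list of fisheye images, one per camera
    detect_shoes    -- callable(frame) -> list of (x, y) shoe-box centers (step 3)
    map_to_birdseye -- callable(point, cam_index) -> (u, v) in the bird's-eye UV plane
    classify_zone   -- callable((u, v)) -> "safe" | "caution" | "alert" | "danger" (step 4)
    notify          -- callable(level), e.g. a loudspeaker playback routine (step 5)
    """
    for cam_index, frame in enumerate(frames):
        for center in detect_shoes(frame):
            uv = map_to_birdseye(center, cam_index)   # homography projection
            level = classify_zone(uv)
            if level != "safe":
                notify(level)
```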
In an optional implementation, step 1 further includes determining the distortion coefficients of each individual fisheye camera.
In an optional implementation, the Taylor expansion used when determining the distortion coefficients of a single fisheye camera is:
x' = x(1 + k1·r^2 + k2·r^4 + k3·r^6 + k4·r^8);
y' = y(1 + k1·r^2 + k2·r^4 + k3·r^6 + k4·r^8);
where (x, y) and (x', y') are the undistorted and distorted normalized image coordinates respectively, and r is the distance from (x, y) to the image center.
In an optional implementation, the non-safe area includes a caution area, an alert area and a danger area.
In an optional implementation, step 5 specifically includes: if the pedestrian is within the caution area, sending caution information; if the pedestrian is within the alert area, sending alert information; if the pedestrian is within the danger area, sending danger information.
In an optional implementation, step 3 specifically includes: performing pedestrian-shoe object recognition on the 360° images around the vehicle body, projecting the recognized pedestrian shoes into the UV space of the 2D bird's-eye-view plane, and obtaining the distance between the pedestrian shoes and the vehicle.
In an optional implementation, the YOLOv5 model is used in step 3 for pedestrian-shoe object recognition.
In an optional implementation, a homography matrix is obtained for each individual fisheye camera, and the mapping coordinates of the pixels of that camera's image in the UV space of the top 2D bird's-eye-view plane are obtained from the homography matrix.
In an optional implementation, step 2 further includes obtaining vehicle safety-level frame lines and projecting the four vertices of each frame line into the UV space of the 2D bird's-eye-view plane through homography transformation to form the non-safe area.
In an optional implementation, step 2 further includes weighting the images that contribute to the regions where the 360° images around the vehicle body overlap, the weighting using a region-proximity weighting method.
By constructing a 2D bird's-eye view of the 360° range around the vehicle, recognizing the pedestrians in the bird's-eye view and locating their distance from the vehicle, the embodiments of the present invention can judge whether a pedestrian is within a non-safe area and warn the driver accordingly, preventing pedestrians in the driver's blind zone from being injured by the vehicle and reducing the injury accidents caused by the driver's blind zone. By mapping the manually defined 3D safety frame lines into the UV space of the 2D bird's-eye-view plane, explicit computation of the camera rotation matrix R and translation vector t is avoided, the 2D conversion of the 3D pedestrian position is obtained directly, and automatic warning is achieved from pedestrian recognition and positioning through to danger-area judgment.
The above description is only an overview of the technical solutions of the embodiments of the present invention. In order that the technical means of the embodiments may be understood more clearly and implemented according to the contents of the specification, and in order to make the above and other objects, features and advantages of the embodiments more apparent, specific embodiments of the present invention are set forth below.
Brief Description of the Drawings
The drawings are only for illustrating the embodiments and are not to be considered as limiting the invention. Throughout the drawings, the same reference numerals designate the same parts. In the drawings:
Fig. 1 is a schematic flowchart of an embodiment of the truck panoramic pedestrian positioning and early warning method based on 360-degree fisheye cameras provided by the present invention.
Fig. 2 is a schematic diagram of the calibration boards used in an embodiment of the method.
Fig. 3 is a 2D bird's-eye view of the 360-degree range around the vehicle in an embodiment of the method.
Fig. 4 is a schematic diagram of the black triangular vision-loss regions appearing in the locally overlapping areas of the 2D bird's-eye view stitched from the bird's-eye views of different directions in an embodiment of the method.
Fig. 5 is a schematic diagram of Fig. 3 after the overlapping regions have been fused, in an embodiment of the method.
Detailed Description of the Embodiments
Exemplary embodiments of the present invention will be described in more detail below with reference to the accompanying drawings. Although exemplary embodiments of the present invention are shown in the drawings, it should be understood that the invention may be embodied in various forms and should not be limited to the embodiments set forth herein.
Fig. 1 shows a flowchart of a truck panoramic pedestrian positioning and early warning method based on 360-degree fisheye cameras according to the present invention. As shown in Fig. 1, the method includes the following steps:
Step 1: collect 360° images of the environment around the vehicle body through the fisheye cameras installed on the truck.
In this embodiment, four fisheye cameras with a 180° field of view are used. They are installed respectively on the front roof of the truck, at the rear of the cargo rack, and on the left and right rear-view mirrors (or on the edges of the front cargo box). After installation, the cameras have overlapping fields of view at the front-left, front-right, rear-left and rear-right, enabling synchronized visual data acquisition of the full 360-degree environment around the vehicle body.
Step 2: construct a 2D bird's-eye view of the 360° range around the vehicle from the images of the surroundings. In this step, the 2D bird's-eye view within the 360-degree range of the vehicle is constructed specifically by the AVM algorithm.
This specifically includes removing the camera distortion. Because a fisheye camera is distorted far more severely than an ordinary camera, the distortion needs to be fitted with a five-parameter radial distortion model. In this embodiment, Zhang Zhengyou's checkerboard calibration method is used, and the distortion coefficients of every fisheye camera are computed uniformly before the cameras are mounted on the vehicle. The distortion coefficient calculation in this embodiment uses higher-order Taylor expansion terms:
x' = x(1 + k1·r^2 + k2·r^4 + k3·r^6 + k4·r^8)    (1)
y' = y(1 + k1·r^2 + k2·r^4 + k3·r^6 + k4·r^8)    (2)
where (x, y) and (x', y') are the undistorted and distorted normalized image coordinates respectively, and r is the distance from (x, y) to the image center.
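For illustration, a minimal sketch of applying this radial model to normalized coordinates, assuming the coefficients k1–k4 have already been estimated; it is a sketch of equations (1)–(2), not the calibration code of the embodiment itself.

```python
import numpy as np

def apply_radial_distortion(xy, k):
    """Apply x' = x(1 + k1 r^2 + k2 r^4 + k3 r^6 + k4 r^8) (and likewise for y).

    xy -- (N, 2) array of undistorted normalized image coordinates
    k  -- sequence of the four radial coefficients (k1, k2, k3, k4)
    """
    xy = np.asarray(xy, dtype=float)
    r2 = np.sum(xy ** 2, axis=1, keepdims=True)                 # r^2 for each point
    factor = 1.0 + k[0] * r2 + k[1] * r2**2 + k[2] * r2**3 + k[3] * r2**4
    return xy * factor
```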
Specifically, the homography transformation in the checkerboard calibration method is used to compute the camera intrinsic matrix K, and the coordinate points are then projected into imaging coordinates.
Here, u and v are the coordinates of the object points projected onto the pixel plane without distortion. The least-squares method or an SVD decomposition is used to solve the over-determined equations, so that the difference between the observed distorted coordinates (u', v') and the coordinates obtained by applying the k1, k2, k3, k4 correction to (u, v) is as small as possible; the distortion-coefficient equation of the original Zhang Zhengyou calibration method is thus extended with the additional higher-order terms.
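A hedged sketch of the over-determined fit just described: given matched undistorted and distorted normalized checkerboard coordinates, the four coefficients can be recovered with a linear least-squares solve (numpy.linalg.lstsq, which uses SVD internally). The exact formulation in the patent is not reproduced here; this only illustrates the idea.

```python
import numpy as np

def fit_radial_coefficients(undistorted, distorted):
    """Solve for (k1, k2, k3, k4) so that p * (1 + k1 r^2 + ... + k4 r^8) ~= p'.

    undistorted -- (N, 2) normalized coordinates without distortion
    distorted   -- (N, 2) observed distorted normalized coordinates
    """
    p = np.asarray(undistorted, dtype=float)
    q = np.asarray(distorted, dtype=float)
    r2 = np.sum(p ** 2, axis=1)                                  # r^2 per point
    basis = np.stack([r2, r2**2, r2**3, r2**4], axis=1)          # [r^2 r^4 r^6 r^8]
    # p' - p = p * (k1 r^2 + k2 r^4 + k3 r^6 + k4 r^8): stack x and y equations.
    A = np.vstack([p[:, :1] * basis, p[:, 1:2] * basis])         # (2N, 4)
    b = np.concatenate([q[:, 0] - p[:, 0], q[:, 1] - p[:, 1]])
    k, *_ = np.linalg.lstsq(A, b, rcond=None)
    return k
```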
The calibration boards used in this embodiment are black-and-white checkerboards with 8 rows and 4 columns; identical boards are placed at the front, rear, left and right of the vehicle, and the fisheye cameras are fixed at positions 1, 2, 3 and 4 respectively, as shown in Fig. 2.
Through homography matrix fitting, the homography matrices H1, H2, H3 and H4 of the four cameras relative to the bird's-eye viewing angle of the vehicle are computed separately; they are used to calculate the mapping coordinates of the pixels of the four camera images in the top bird's-eye UV plane, as follows:
Each of the four fisheye cameras photographs its nearest calibration board as src. At the same time, a camera of the same model is set up above the center of the top of the truck with its imaging plane parallel to the ground, and it captures an image covering at least all of the calibration boards as des. The homography transformation matrices H1, H2, H3 and H4 from src to des are then computed by solving the over-determined linear system with SVD:
Xdes = H · Xsrc    (6)
where Xsrc is the projection, onto the corresponding camera's normalized plane, of the checkerboard corner points (black/white grid intersections) photographed by each of the four cameras in the checkerboard coordinate system, and Xdes is the projection of the same points, as seen in the top view captured by the bird's-eye camera, onto that camera's normalized plane. Through the H coordinate transformation, the local images captured by the four cameras are transformed into bird's-eye 2D coordinates, achieving joint calibration.
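As an illustrative sketch of equation (6), the homography from each camera's checkerboard points (src) to the overhead reference view (des) can be estimated with a standard DLT/SVD solve; OpenCV's cv2.findHomography performs an equivalent computation. This is a generic formulation, not the exact code of the embodiment.

```python
import numpy as np

def estimate_homography(src, des):
    """Direct Linear Transform: find H such that des ~ H * src (N >= 4 points)."""
    src = np.asarray(src, dtype=float)
    des = np.asarray(des, dtype=float)
    rows = []
    for (x, y), (u, v) in zip(src, des):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    A = np.asarray(rows)
    # The solution is the right singular vector of the smallest singular value.
    _, _, vt = np.linalg.svd(A)
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]

# With OpenCV the same step is typically: H, _ = cv2.findHomography(src, des)
```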
In the 2D bird's-eye view stitched as in Fig. 3, there are black regions and regions where the pictures taken by different cameras overlap; in the overlapping regions the two contributing images need to be weighted. The weighting uses a region-proximity method, and during stitching the cameras are also fine-tuned so that the front, left and right bird's-eye views coincide with the rear bird's-eye view.
The weights used in the overlap regions are computed from the black triangular vision-loss region in each image. Because each camera is limited by its finite imaging cone, there are always parts of the scene it cannot capture; when that region is projected from the top bird's-eye UV plane back into the four camera spaces, it falls outside their respective UV imaging planes (beyond the imaging cone) and therefore appears as a black triangle (pixel value 0).
The weights in the triangular overlap region are computed as illustrated in Fig. 4 for the overlap between the left and front (upper) views: every pixel inside the region marked by hollow points is blended, with the weight given by the ratio of the pixel's perpendicular distances to the two boundary lines; the closer a pixel is to the lower line, the higher the weight of the left camera's image, and conversely the closer it is to the upper line, the higher the weight of the upper image.
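The distance-ratio weighting described above can be sketched as follows. This is an assumed formulation: the patent specifies only that the weight follows the ratio of the pixel's perpendicular distances to the two boundary lines, so the exact convention (which distance drives which weight) is illustrative.

```python
import numpy as np

def blend_overlap(img_a, img_b, dist_a, dist_b):
    """Blend two bird's-eye tiles inside their overlap region.

    img_a, img_b   -- overlapping image patches of identical shape (H, W, 3)
    dist_a, dist_b -- (H, W) per-pixel perpendicular distances to the boundary line
                      beyond which image A (resp. B) has no valid data; a pixel far
                      from image A's boundary relies more on image A, and vice versa.
    """
    w_a = dist_a / np.maximum(dist_a + dist_b, 1e-6)   # weight of image A in [0, 1]
    w_a = w_a[..., None]                               # broadcast over color channels
    blended = w_a * img_a.astype(np.float32) + (1.0 - w_a) * img_b.astype(np.float32)
    return blended.astype(img_a.dtype)
```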
The bird's-eye view obtained after stitching and fusing the overlapping regions in this step is shown in Fig. 5.
Step 3: identify pedestrians in the 2D bird's-eye view and obtain the distance between each pedestrian and the vehicle.
The premise of the bird's-eye coordinate conversion is that, when the camera undergoes a spatial rotation and translation, the points to which a homography transformation can be applied must lie on the same plane in space. In this embodiment the shoes are the best approximation of an object lying in the same plane as the calibration boards, so pedestrian recognition is carried out by recognizing pedestrian shoes. Because only the shoe detections are mapped, there is no need to map the entire local camera image, which reduces the amount of computation.
In this embodiment the YOLOv5 model is used for pedestrian-shoe object recognition. When training the model, the pedestrian-shoe bounding boxes appearing anywhere in the fisheye camera images need to be labeled. Since the fisheye training samples are relatively scarce, this embodiment first pre-trains on non-fisheye pedestrian samples and then performs a second round of training on the smaller set of fisheye pedestrian image samples.
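A hedged inference sketch using the public ultralytics/yolov5 hub interface with a custom-trained shoe-detection checkpoint; the checkpoint path, class assumptions and confidence threshold below are placeholders, and the two-stage pre-training/fine-tuning itself would be done with the standard YOLOv5 training scripts rather than shown here.

```python
import torch

# 'shoes_fisheye.pt' is a placeholder for weights obtained by pre-training on
# non-fisheye pedestrian/shoe data and fine-tuning on the fisheye shoe dataset.
model = torch.hub.load('ultralytics/yolov5', 'custom', path='shoes_fisheye.pt')

def detect_shoe_centers(frame, conf_threshold=0.5):
    """Return the (x, y) centers of detected shoe boxes in one fisheye frame."""
    results = model(frame)
    boxes = results.xyxy[0].cpu().numpy()        # columns: x1, y1, x2, y2, conf, cls
    return [((x1 + x2) / 2.0, (y1 + y2) / 2.0)
            for x1, y1, x2, y2, conf, cls in boxes
            if conf > conf_threshold]            # threshold is an assumed value
```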
The locally obtained shoe coordinates (specifically the center coordinates of the shoe bounding box) are mapped into the UV space of the 2D bird's-eye-view plane. Since only the pedestrian's coordinates in the 2D bird's-eye space are needed, there is no need to map the entire local camera image; only the center of the shoe box in the local camera plane needs to be determined, and that coordinate is then mapped into the 2D bird's-eye space through the homography transformation of equation (6).
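A minimal sketch of this projective mapping: the shoe-box center in the local camera plane is lifted to homogeneous coordinates, multiplied by that camera's homography H, and normalized to obtain its (u, v) position in the bird's-eye plane.

```python
import numpy as np

def project_point(H, point):
    """Map a single (x, y) point through homography H into bird's-eye (u, v)."""
    x, y = point
    u, v, w = H @ np.array([x, y, 1.0])
    return u / w, v / w    # perspective division
```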
Step 4: judge, according to the distance between the pedestrian and the vehicle, whether the pedestrian is within the preset non-safe area.
During the joint calibration of the vehicle cameras, four bar-shaped marker lines are placed at the front, rear, left and right of the vehicle, forming a standard safety bounding box for the pedestrian-to-vehicle distance; the four vertices of this bounding box are then projected through the homography transformation into the UV space of the 2D bird's-eye-view plane to form the non-safe area. In this embodiment, frame lines for three different vehicle safety levels are set; the three levels defined in this patent are caution, alert and danger, and they finally form nested safety rectangles around the vehicle, dividing the surroundings into the caution area, the alert area and the danger area, where the danger area is immediately next to the vehicle and the caution area is farthest from it.
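For illustration, the three nested zones can be represented as axis-aligned rectangles in the bird's-eye UV plane (an assumed simplification of the projected frame lines described above), and a pedestrian point classified by checking the innermost rectangle first:

```python
def classify_zone(uv, danger_rect, alert_rect, caution_rect):
    """Return the warning level for a projected pedestrian (shoe-center) point.

    Each *_rect is (u_min, v_min, u_max, v_max) in bird's-eye coordinates,
    nested so that danger_rect is innermost (closest to the vehicle) and
    caution_rect is outermost.
    """
    def inside(rect):
        u_min, v_min, u_max, v_max = rect
        return u_min <= uv[0] <= u_max and v_min <= uv[1] <= v_max

    if inside(danger_rect):
        return "danger"
    if inside(alert_rect):
        return "alert"
    if inside(caution_rect):
        return "caution"
    return "safe"
```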
Step 5: if the pedestrian is within the non-safe area, send warning information.
Specifically: if the pedestrian (i.e. the shoe box) is within the caution area, caution information is sent to remind the driver and the pedestrian; if the pedestrian is within the alert area, alert information is sent to remind the driver and the pedestrian; if the pedestrian is within the danger area, danger information is sent to remind the driver and the pedestrian. When the pedestrian is not in any non-safe area, no warning information is sent to the driver. In this embodiment the warning information is played through a loudspeaker.
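A hedged sketch of this step, mapping the zone level to a spoken message; pyttsx3 is used here purely as an example of driving a loudspeaker from Python and is an assumption, not part of the original disclosure, as are the message texts.

```python
import pyttsx3

WARNING_TEXT = {
    "caution": "Caution: pedestrian near the vehicle.",
    "alert":   "Alert: pedestrian close to the vehicle.",
    "danger":  "Danger: pedestrian in the danger zone, stop.",
}

def send_warning(level, engine=None):
    """Play the warning corresponding to the zone level over the loudspeaker."""
    if level not in WARNING_TEXT:
        return                          # pedestrian in the safe area: no warning
    engine = engine or pyttsx3.init()   # text-to-speech engine as a stand-in output
    engine.say(WARNING_TEXT[level])
    engine.runAndWait()
```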
This solution proposes an automatic warning method for pedestrian safety distance over the 360-degree visual range around the vehicle that combines AVM panoramic imaging with YOLOv5 object localization. The AVM algorithm generates the 2D bird's-eye view, and both the 2D pedestrian-shoe center coordinates recognized by YOLOv5 and the manually defined 3D safety bounding boxes are mapped into the UV space of the 2D bird's-eye-view plane. Because the H homography mapping is exploited, explicit computation of the camera rotation R and translation t is avoided and the 2D conversion of the 3D pedestrian position is obtained directly, realizing automatic warning from pedestrian recognition and positioning through to danger-frame judgment. This can effectively reduce the rate of injury accidents caused by the driver's blind zone when a truck is parking or starting, and provides a good driving-assistance effect.
Numerous specific details are set forth in the description provided herein. It will be understood, however, that embodiments of the invention may be practiced without these specific details. Similarly, in the above description of exemplary embodiments of the invention, in order to streamline the disclosure and aid the understanding of one or more of the various inventive aspects, various features of the embodiments are sometimes grouped together in a single embodiment, figure, or description thereof. The claims following this detailed description are hereby expressly incorporated into the detailed description, with each claim standing on its own as a separate embodiment of the invention.
Those skilled in the art will understand that the modules in the devices of an embodiment may be adaptively changed and arranged in one or more devices different from that embodiment. The modules, units or components of an embodiment may be combined into one module, unit or component, and may furthermore be divided into a plurality of sub-modules, sub-units or sub-components, except where at least some of such features and/or processes or units are mutually exclusive.
It should be noted that the above embodiments illustrate rather than limit the invention, and that those skilled in the art can design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention may be implemented by means of hardware comprising several distinct elements and by means of a suitably programmed computer. In a unit claim enumerating several means, several of these means may be embodied by one and the same item of hardware. The use of the words first, second, third and so on does not indicate any order; these words may be interpreted as names. Unless otherwise specified, the steps in the above embodiments should not be construed as limiting the order of execution.