





Technical Field
Embodiments of the present disclosure relate to the field of computer technology, and in particular to a method and apparatus for identifying divergent intersections.
Background
A divergent intersection is a key location in map navigation, where an AR navigation algorithm is often required to draw the lane lines accurately and give correct guidance.
Existing lane line recognition techniques are based on traditional image processing methods such as edge detection and parallel line detection.
The existing technical solutions mainly have the following deficiencies:
1) Relying on basic image features such as edges and on hand-crafted rules, they cannot adapt well to the complex environments of real driving scenes, are easily disturbed, and produce inaccurate lane line recognition results at divergent intersections.
2) They are not optimized for divergent intersections and cannot determine whether the vehicle is off course, so the guidance they provide is insufficient.
Summary of the Invention
Embodiments of the present disclosure propose a method and apparatus for identifying divergent intersections.
In a first aspect, embodiments of the present disclosure provide a method for identifying a divergent intersection, including: obtaining the position of a vehicle and the position of a divergent intersection, and determining, according to the position of the vehicle and the position of the divergent intersection, whether the vehicle is within the detection range of the divergent intersection; if the vehicle is within the detection range of the divergent intersection, inputting an image of the area in front of the vehicle, acquired in real time, into a pre-trained detection model to obtain a candidate diversion area; inputting the candidate diversion area into a pre-trained post-processing model to obtain a classification result and key points of the candidate diversion area; and, if the classification result indicates a diversion area, fitting lane lines according to the key points of the candidate diversion area.
In some embodiments, the method further includes: obtaining navigation information; determining the position of the vehicle relative to the lane lines according to the position of the vehicle and the positions of the lane lines; determining, according to the relative position and the navigation information, whether the vehicle is off course; and, if the vehicle is off course, outputting a lane-change reminder, otherwise outputting a go-straight reminder.
In some embodiments, inputting the image of the area in front of the vehicle, acquired in real time, into the detection model to obtain a candidate diversion area includes: inputting the image of the area in front of the vehicle, acquired in real time, into the detection model to detect a candidate area and a road sign; if the distance between the position of the vehicle and the position of the divergent intersection is greater than a predetermined value, determining, with the aid of the position of the road sign, whether the candidate area is within a tolerance range; and, if it is within the tolerance range, determining the candidate area as the candidate diversion area.
In some embodiments, inputting the image of the area in front of the vehicle, acquired in real time, into the detection model to obtain a candidate diversion area includes: inputting the image of the area in front of the vehicle, acquired in real time, into the detection model at the starting frame of each predetermined period to obtain the candidate diversion area; and inputting the candidate diversion area into the post-processing model to obtain the classification result and the key points of the candidate diversion area includes: if the classification result indicates a diversion area, recording the coordinates of three key points on each of the left and right lane lines of the diversion area as state variables.
In some embodiments, inputting the candidate diversion area into the post-processing model to obtain the classification result and the key points of the candidate diversion area includes: within each predetermined period, starting from the frame following the starting frame, performing the following tracking steps: computing the circumscribing rectangle of the state variables recorded in the previous frame, and expanding the circumscribing rectangle to serve as the search area of the current frame; inputting the search area of the current frame into the post-processing model to obtain the classification result of the current frame and the key points of the candidate diversion area of the current frame; and, if the classification result of the current frame indicates a diversion area, updating the state variables with the key points of the candidate diversion area of the current frame, otherwise stopping the tracking.
In some embodiments, the detection model adopts the yolov3 architecture, with shufflenet_v2 as the backbone network.
In some embodiments, the post-processing model adopts a shufflenet_v2 network and includes two branches: classification and regression.
In a second aspect, embodiments of the present disclosure provide an apparatus for identifying a divergent intersection, including: an acquisition unit configured to obtain the position of a vehicle and the position of a divergent intersection, and to determine, according to the position of the vehicle and the position of the divergent intersection, whether the vehicle is within the detection range of the divergent intersection; a detection unit configured to, if the vehicle is within the detection range of the divergent intersection, input an image of the area in front of the vehicle, acquired in real time, into a pre-trained detection model to obtain a candidate diversion area; a processing unit configured to input the candidate diversion area into a pre-trained post-processing model to obtain a classification result and key points of the candidate diversion area; and a fitting unit configured to, if the classification result indicates a diversion area, fit lane lines according to the key points of the candidate diversion area.
In some embodiments, the apparatus further includes a navigation unit configured to: obtain navigation information; determine the position of the vehicle relative to the lane lines according to the position of the vehicle and the positions of the lane lines; determine, according to the relative position and the navigation information, whether the vehicle is off course; and, if the vehicle is off course, output a lane-change reminder, otherwise output a go-straight reminder.
In some embodiments, the detection unit is further configured to: input the image of the area in front of the vehicle, acquired in real time, into the detection model to detect a candidate area and a road sign; if the distance between the position of the vehicle and the position of the divergent intersection is greater than a predetermined value, determine, with the aid of the position of the road sign, whether the candidate area is within a tolerance range; and, if it is within the tolerance range, determine the candidate area as the candidate diversion area.
In some embodiments, the detection unit is further configured to: input the image of the area in front of the vehicle, acquired in real time, into the detection model at the starting frame of each predetermined period to obtain the candidate diversion area; and the processing unit is further configured to: if the classification result indicates a diversion area, record the coordinates of three key points on each of the left and right lane lines of the diversion area as state variables.
In some embodiments, the processing unit is further configured to: within each predetermined period, starting from the frame following the starting frame, perform the following tracking steps: compute the circumscribing rectangle of the state variables recorded in the previous frame, and expand the circumscribing rectangle to serve as the search area of the current frame; input the search area of the current frame into the post-processing model to obtain the classification result of the current frame and the key points of the candidate diversion area of the current frame; and, if the classification result of the current frame indicates a diversion area, update the state variables with the key points of the candidate diversion area of the current frame, otherwise stop the tracking.
In some embodiments, the detection model adopts the yolov3 architecture, with shufflenet_v2 as the backbone network.
In some embodiments, the post-processing model adopts a shufflenet_v2 network and includes two branches: classification and regression.
In a third aspect, embodiments of the present disclosure provide an electronic device for identifying divergent intersections, including: one or more processors; and a storage device on which one or more programs are stored, where the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any implementation of the first aspect.
In a fourth aspect, embodiments of the present disclosure provide a computer-readable medium on which a computer program is stored, where the program, when executed by a processor, implements the method of any implementation of the first aspect.
The method and apparatus for identifying divergent intersections provided by the embodiments of the present disclosure constitute a deep-learning-based approach to locating and fitting lane lines at divergent intersections that can run in real time in an in-vehicle environment. By locating the key targets at the divergent intersection (the road sign and the diversion area markings) relative to the current vehicle, the approach can determine whether the vehicle is off course and give appropriate guidance; by fitting the left and right lane lines at the divergent intersection, it can supply more accurate coordinates and shapes for AR lane-line rendering. In addition, GPS information is used during driving to control when the algorithm is invoked, so that it runs only within the effective range of a divergent road section and computing resources are not wasted. The embodiments of the present disclosure can adapt to the complex environments of real driving scenes and provide a better navigation experience at divergent intersections.
Brief Description of the Drawings
Other features, objects and advantages of the present disclosure will become more apparent from the detailed description of non-limiting embodiments made with reference to the following drawings:
Fig. 1 is an exemplary system architecture diagram to which an embodiment of the present disclosure may be applied;
Fig. 2 is a flowchart of an embodiment of the method for identifying divergent intersections according to the present disclosure;
Fig. 3 is a schematic diagram of an application scenario of the method for identifying divergent intersections according to the present disclosure;
Fig. 4 is a flowchart of another embodiment of the method for identifying divergent intersections according to the present disclosure;
Fig. 5 is a schematic structural diagram of an embodiment of the apparatus for identifying divergent intersections according to the present disclosure;
Fig. 6 is a schematic structural diagram of a computer system of an electronic device suitable for implementing embodiments of the present disclosure.
Detailed Description of Embodiments
The present disclosure is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are only intended to explain the related invention, not to limit it. It should also be noted that, for ease of description, only the parts related to the invention are shown in the drawings.
It should be noted that, where no conflict arises, the embodiments of the present disclosure and the features in the embodiments may be combined with one another. The present disclosure is described in detail below with reference to the accompanying drawings and in conjunction with the embodiments.
Fig. 1 shows an exemplary system architecture 100 to which embodiments of the method for identifying divergent intersections or the apparatus for identifying divergent intersections of the present application may be applied.
As shown in Fig. 1, the system architecture 100 may include a vehicle 101 and a traffic sign 102.
The vehicle 101 may be an ordinary motor vehicle or a driverless vehicle. A controller 1011, a network 1012 and sensors 1013 may be installed in the vehicle 101. The network 1012 is the medium providing a communication link between the controller 1011 and the sensors 1013, and may include various connection types, such as wired or wireless communication links or fiber-optic cables.
The controller 1011 (also called the on-board brain) is responsible for the intelligent control of the vehicle 101. The controller 1011 may be a separately provided controller, such as a programmable logic controller (PLC), a single-chip microcomputer or an industrial control computer; it may be a device composed of other electronic components that have input/output ports and arithmetic control functions; or it may be a computer device on which a vehicle driving control application is installed. The trained detection model and post-processing model are installed on the controller.
The sensors 1013 may be sensors of various types, for example cameras, gravity sensors, wheel speed sensors, temperature sensors, humidity sensors, lidar and millimeter-wave radar. In some cases, a GNSS (Global Navigation Satellite System) device, a SINS (Strap-down Inertial Navigation System) and the like may also be installed in the vehicle 101.
The vehicle 101 captures the traffic sign 102 while driving. The traffic sign 102 may include road signs, diversion area markings and the like.
The vehicle 101 passes the captured raw image containing the traffic sign to the controller for recognition; the controller determines the diversion area and fits the lane lines. Combined with the navigation information, it is determined whether the vehicle has deviated from the route: if so, the vehicle is reminded to change lanes, otherwise it keeps going straight.
It should be noted that the method for identifying divergent intersections provided by the embodiments of the present application is generally executed by the controller 1011, and accordingly the apparatus for identifying divergent intersections is generally provided in the controller 1011.
It should be understood that the numbers of controllers, networks and sensors in Fig. 1 are merely illustrative. There may be any number of controllers, networks and sensors according to implementation needs.
With continued reference to Fig. 2, a flow 200 of an embodiment of the method for identifying divergent intersections according to the present disclosure is shown. The method for identifying divergent intersections includes the following steps:
Step 201: obtain the position of the vehicle and the position of the divergent intersection, and determine, according to the position of the vehicle and the position of the divergent intersection, whether the vehicle is within the detection range of the divergent intersection.
In this embodiment, the executing body of the method for identifying divergent intersections (for example, the controller shown in Fig. 1) may collect images of the road ahead through the vehicle-mounted camera and determine the diversion area of the divergent intersection through image recognition. Because divergent intersections appear only during certain periods of the driving process, and the computing resources of the in-vehicle head unit are precious, the algorithm is run only during those periods and does not occupy computing resources at other times. Specifically, the current GPS position of the vehicle is obtained in real time while driving. When the distance between the vehicle position and a divergent intersection position pre-stored in the navigation system is less than a predetermined first distance (for example, 90 meters), a signal is sent to the main program, the algorithm starts to be invoked, and the images from the vehicle-mounted camera are processed in real time. When the distance between the vehicle and the divergent intersection is less than a predetermined second distance (for example, 5 meters), lane changing is no longer feasible, the algorithm stops being invoked, and processing of this divergent road section is finished. All divergent intersections encountered while driving are handled according to the above logic. Optionally, a positioning system other than GPS, such as the BeiDou system, may be used to obtain the vehicle position.
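As an illustration only, the distance-gated invocation described above might look like the following Python sketch; the 90-meter and 5-meter thresholds are the example values given above, the haversine formula is one common way to compare two GPS fixes, and the function names are assumptions rather than part of the disclosed implementation.

```python
import math

START_DISTANCE_M = 90.0  # example first distance: begin invoking the algorithm
STOP_DISTANCE_M = 5.0    # example second distance: lane changes no longer feasible

def haversine_m(lat1, lon1, lat2, lon2):
    """Approximate great-circle distance in meters between two GPS fixes."""
    r = 6371000.0
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def in_detection_range(vehicle_fix, fork_fix):
    """True while the vehicle is inside the detection window of a divergent intersection."""
    d = haversine_m(vehicle_fix[0], vehicle_fix[1], fork_fix[0], fork_fix[1])
    return STOP_DISTANCE_M < d <= START_DISTANCE_M
```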
Step 202: if the vehicle is within the detection range of the divergent intersection, input the image of the area in front of the vehicle, acquired in real time, into the pre-trained detection model to obtain a candidate diversion area.
In this embodiment, if the vehicle is currently within the detection range of the divergent intersection, the detection algorithm shown in steps 202-204 can be enabled. A lightweight detection network is used to locate the key elements at the divergent intersection (the diversion area and the road sign) in the image. The detection model is a neural network for recognizing diversion areas and road signs. First, regarding the model structure, in order to reduce the consumption of in-vehicle computing resources, the detection model uses the lightweight backbone network shufflenet_v2, and the network is further pruned to reduce the amount of computation. To accommodate the shape and scale variations of diversion areas and road signs, the yolov3 detection network is adopted, and feature fusion is performed at more levels to improve the model's performance. Second, regarding the invocation strategy, when the vehicle is still far from the divergent intersection (that is, the distance is greater than a predetermined value, for example 60 meters), the diversion area appears very small in the image and the output of the detection model is uncertain, so the position of the road sign is used to assist in locating the diversion area. When the positional relationship between the two is within a tolerance range (for example, the vertical line through the center of the circumscribing rectangle of the road sign passes through the circumscribing rectangle of the diversion area), the diversion area position can be considered reliable. As the driving distance decreases, the diversion area target becomes larger, and the road sign position is no longer needed to assist localization.
The detection model is obtained through supervised training; its training samples need to be annotated with the diversion areas and road signs.
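The road-sign-assisted plausibility check described above can be sketched as follows. This assumes axis-aligned boxes in (x_min, y_min, x_max, y_max) image coordinates, uses the 60-meter figure mentioned above as the far-distance threshold, and is an illustration rather than the claimed implementation.

```python
FAR_DISTANCE_M = 60.0  # example threshold beyond which the road sign is used for validation

def sign_supports_candidate(sign_box, candidate_box):
    """True if the vertical line through the sign-box center crosses the candidate box."""
    sign_center_x = (sign_box[0] + sign_box[2]) / 2.0
    return candidate_box[0] <= sign_center_x <= candidate_box[2]

def select_candidate_diversion_area(distance_to_fork_m, candidate_box, sign_box):
    """Accept or reject a detected region as the candidate diversion area."""
    if distance_to_fork_m > FAR_DISTANCE_M:
        # Far away: the diversion area is tiny in the image, so require road-sign support.
        if sign_box is None or not sign_supports_candidate(sign_box, candidate_box):
            return None
    return candidate_box
```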
Step 203: input the candidate diversion area into the pre-trained post-processing model to obtain the classification result and the key points of the candidate diversion area.
In this embodiment, a lightweight post-processing model is used to fit the key points of the left and right lane lines of the detected diversion area and to filter out false detections; the same post-processing model is also used to track the diversion area. The post-processing model is a neural network that further determines whether the candidate diversion area output by the detection model is a genuine diversion area, and also identifies the key points of the lane lines on the left and right sides of the diversion area.
First, the structure of the post-processing model is designed based on shufflenet_v2. The model has two branches, classification and regression. The classification branch (a binary classifier that determines whether the candidate diversion area output by the detection model is a genuine diversion area) further distinguishes the detected diversion areas and is mainly used to filter out falsely detected samples; the regression branch (a regression model) computes three key points on each of the left and right lane lines of the diversion area, which are used to assist lane line drawing. The post-processing model requires annotations of the diversion area and the key points, and is obtained through supervised training. When preparing its training samples, at least three key points are selected on each of the left and right sides of the diversion area, including a start point, an end point and an intermediate point; the intermediate point is generally chosen halfway between the start point and the end point.
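As one possible way to realize the two-branch structure just described, the following PyTorch sketch uses torchvision's shufflenet_v2_x0_5 as a stand-in backbone (torchvision 0.13 or later for the `weights` argument), with a binary classification head and a regression head for the 2 × 3 key points; the layer sizes and names are assumptions, since the disclosure does not fix them at this level of detail.

```python
import torch.nn as nn
from torchvision.models import shufflenet_v2_x0_5

class PostProcessNet(nn.Module):
    """ShuffleNetV2-style backbone with classification and regression branches."""

    def __init__(self, num_keypoints=6):
        super().__init__()
        backbone = shufflenet_v2_x0_5(weights=None)
        backbone.fc = nn.Identity()                          # keep the 1024-d pooled feature
        self.backbone = backbone
        self.cls_head = nn.Linear(1024, 2)                   # diversion area vs. not
        self.reg_head = nn.Linear(1024, num_keypoints * 2)   # (x, y) per key point

    def forward(self, x):
        features = self.backbone(x)                          # N x 1024
        return self.cls_head(features), self.reg_head(features)
```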
Second, because the detection model is computationally expensive, detection is performed only on key frames (the starting frame of each period; the period may be one second) in order to further reduce the occupation of computing resources, and the remaining frames within the period are handled by tracking. A conventional template-matching-based tracking method would introduce additional computation, so, for this application scenario, a tracking method based on the above post-processing model is proposed, which completes the tracking without introducing extra computation. The specific implementation is: 1) the detection network is invoked on the key frame, the detected diversion area is fed into the post-processing model for classification and regression, and for a diversion area with a positive classification result, the coordinates of its six key points are recorded as state variables and the target is tracked from the next frame; 2) the circumscribing rectangle of the state variables recorded in the previous frame is computed and appropriately expanded to serve as the search area of the current frame; this image region of the current frame is fed into the post-processing model, the classification branch determines whether the region contains a diversion area, and the regression branch obtains the accurate positions of the key points of the diversion area within the region; if the region is determined to contain a diversion area, the state variables are updated with the key points obtained in the current frame, otherwise the tracking stops; 3) step 2) is repeated to complete the tracking of the target.
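Below is a minimal sketch of one tracking iteration, under the assumptions that the frame is an H × W × 3 array, the state is a 6 × 2 array of key-point coordinates, and `post_net` wraps the post-processing network so that it returns a boolean classification and key points in crop coordinates; the expansion factor is illustrative.

```python
import numpy as np

def expanded_search_box(keypoints, frame_h, frame_w, scale=1.5):
    """Circumscribing rectangle of the key points, expanded by `scale` and clipped."""
    pts = np.asarray(keypoints, dtype=float)
    cx, cy = pts[:, 0].mean(), pts[:, 1].mean()
    half_w = (pts[:, 0].max() - pts[:, 0].min()) * scale / 2.0
    half_h = (pts[:, 1].max() - pts[:, 1].min()) * scale / 2.0
    x0, x1 = max(0, int(cx - half_w)), min(frame_w, int(cx + half_w))
    y0, y1 = max(0, int(cy - half_h)), min(frame_h, int(cy + half_h))
    return x0, y0, x1, y1

def track_one_frame(frame, state_keypoints, post_net):
    """Return updated key points in image coordinates, or None to stop tracking."""
    x0, y0, x1, y1 = expanded_search_box(state_keypoints, frame.shape[0], frame.shape[1])
    crop = frame[y0:y1, x0:x1]
    is_diversion_area, crop_keypoints = post_net(crop)   # classification + regression branches
    if not is_diversion_area:
        return None                                      # tracking stops
    return np.asarray(crop_keypoints, dtype=float) + np.array([x0, y0])
```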
If the tracking is interrupted, detection can be started again in the next period, or the search area can be enlarged within the current period and tracking resumed from the most recently recorded state variables.
Step 204: if the classification result indicates a diversion area, fit the lane lines according to the key points of the candidate diversion area.
In this embodiment, the lane line key points obtained above are used to fit Bezier curves and draw the lane lines at the divergent intersection. As shown in Fig. 3, there are three key points on each of the left and right sides of the diversion area, from which one lane line can be fitted on each side.
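For illustration, one lane line can be drawn from its three key points with a quadratic Bezier curve, assuming the middle key point is treated as the t = 0.5 sample of the curve; the disclosure does not prescribe this particular parameterization.

```python
import numpy as np

def fit_quadratic_bezier(p_start, p_mid, p_end, num_samples=20):
    """Sample a quadratic Bezier curve passing through the three key points."""
    p0, pm, p2 = (np.asarray(p, dtype=float) for p in (p_start, p_mid, p_end))
    # Control point chosen so that the curve passes through pm at t = 0.5.
    ctrl = 2.0 * pm - 0.5 * (p0 + p2)
    t = np.linspace(0.0, 1.0, num_samples)[:, None]
    return (1 - t) ** 2 * p0 + 2 * (1 - t) * t * ctrl + t ** 2 * p2

# One curve is fitted for each side of the diversion area, e.g.:
# left_lane_line = fit_quadratic_bezier(*left_keypoints)
# right_lane_line = fit_quadratic_bezier(*right_keypoints)
```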
With continued reference to Fig. 3, Fig. 3 is a schematic diagram of an application scenario of the method for identifying divergent intersections according to this embodiment. In the application scenario of Fig. 3, the specific operation flow on the in-vehicle head unit is as follows:
1. While the vehicle is driving normally, the distance between the vehicle and the divergent intersection is measured to determine whether the vehicle is currently within the detection range of the divergent intersection. If so, the start condition of the detection algorithm is met and the flow proceeds to step 2; otherwise step 1 is repeated.
2. When the start condition of the detection algorithm is met, image information in front of the vehicle is acquired in real time through the vehicle-mounted camera. Multiple frames can be collected per second, but the detection model is invoked only once per second (as shown in Fig. 3, images are fed to the detection model only at the starting frames of periods T, 2T, 3T, ...). The detected image region is then fed into the post-processing model for classification and key point regression. If it is determined not to be a diversion area, the detection algorithm is invoked again the next second; if it is determined to be a diversion area, the key point information of the current frame is recorded as state variables, and tracking is performed with the post-processing model for the rest of that second, until the detection algorithm is invoked again in the next second. In this way, detection is performed between seconds and tracking within each second (a scheduling sketch follows this list).
3. The diversion area position and key point coordinates obtained in the previous step are acquired in real time, the lane lines are drawn, and appropriate navigation prompts are provided.
4. When the start condition of the detection algorithm is no longer met, the detection model stops being invoked, and the detection algorithm is started again the next time the start condition is met.
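The "detect between seconds, track within each second" scheduling referred to in item 2 might be organized as in the following sketch, where the detection, initialization, tracking and rendering steps are passed in as callables; all names and the frame count per period are assumptions.

```python
FRAMES_PER_PERIOD = 25  # assumed camera frame rate for a one-second period

def run_one_period(frames, detect_fn, init_fn, track_fn, render_fn):
    """Run detection on the period's key frame and tracking on the remaining frames."""
    state = None
    for i, frame in enumerate(frames[:FRAMES_PER_PERIOD]):
        if i == 0:
            # Key frame: detection model + post-processing (classification/regression).
            state = init_fn(frame, detect_fn(frame))
        elif state is not None:
            # Remaining frames: post-processing-model-based tracking only.
            state = track_fn(frame, state)
        if state is not None:
            render_fn(state)  # lane-line drawing and navigation prompts
    return state
```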
With further reference to Fig. 4, a flow 400 of another embodiment of the method for identifying divergent intersections is shown. The flow 400 of the method for identifying divergent intersections includes the following steps:
Step 401: obtain the position of the vehicle and the position of the divergent intersection, and determine, according to the position of the vehicle and the position of the divergent intersection, whether the vehicle is within the detection range of the divergent intersection.
Step 402: if the vehicle is within the detection range of the divergent intersection, input the image of the area in front of the vehicle, acquired in real time, into the pre-trained detection model to obtain a candidate diversion area.
Step 403: input the candidate diversion area into the pre-trained post-processing model to obtain the classification result and the key points of the candidate diversion area.
Step 404: if the classification result indicates a diversion area, fit the lane lines according to the key points of the candidate diversion area.
Steps 401 to 404 are substantially the same as steps 201 to 204 and are therefore not described again here.
Step 405: obtain navigation information, and determine the position of the vehicle relative to the lane lines according to the position of the vehicle and the positions of the lane lines.
In this embodiment, the navigation information includes the driving route and indicates which lane should be taken at the divergent intersection. The position of the vehicle is obtained in real time via GPS, the coordinates of the lane lines fitted in step 404 are mapped into real space, and the position of the vehicle relative to the lane lines is determined. For example, the vehicle may be in the lane to the left of the left lane line of the diversion area.
Step 406: determine, according to the relative position and the navigation information, whether the vehicle is off course.
In this embodiment, the navigation information indicates which lane the vehicle should take at the divergent intersection. If the relative position corresponds to the lane indicated by the navigation information, the vehicle is not off course; otherwise, the vehicle has deviated from the navigation route.
Step 407: if the vehicle is off course, output a lane-change reminder, otherwise output a go-straight reminder.
In this embodiment, if the vehicle is off course, a reminder is output to prompt the driver to change lanes; otherwise, the driver is prompted to keep going straight.
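A trivial sketch of the decision in steps 406-407 follows, under the simplifying assumption that the relative position has been reduced to a lane index (counting lane lines to the vehicle's left) and that the navigation information supplies the recommended lane index; the prompt strings are illustrative.

```python
def current_lane_index(vehicle_x, lane_line_xs):
    """Index of the occupied lane (0 = leftmost), given lane-line x positions."""
    return sum(1 for x in sorted(lane_line_xs) if x < vehicle_x)

def navigation_prompt(vehicle_x, lane_line_xs, recommended_lane_index):
    """Lane-change reminder when off course, otherwise a go-straight reminder."""
    if current_lane_index(vehicle_x, lane_line_xs) != recommended_lane_index:
        return "Please change lanes to follow the planned route"
    return "Keep going straight"
```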
As can be seen from Fig. 4, compared with the embodiment corresponding to Fig. 2, the flow 400 of the method for identifying divergent intersections in this embodiment highlights the step of navigating with the fitted lane lines. Thus, the scheme described in this embodiment can use the relative position of the diversion area and the current vehicle, combined with the navigation route, to provide go-straight or lane-change guidance, thereby optimizing the performance of lane line recognition and navigation algorithms at divergent intersections and improving the user experience of the navigation system.
With further reference to Fig. 5, as an implementation of the methods shown in the above figures, the present disclosure provides an embodiment of an apparatus for identifying divergent intersections. This apparatus embodiment corresponds to the method embodiment shown in Fig. 2, and the apparatus can be applied to various electronic devices.
As shown in Fig. 5, the apparatus 500 for identifying divergent intersections of this embodiment includes: an acquisition unit 501, a detection unit 502, a processing unit 503 and a fitting unit 504. The acquisition unit 501 is configured to obtain the position of the vehicle and the position of the divergent intersection, and to determine, according to the position of the vehicle and the position of the divergent intersection, whether the vehicle is within the detection range of the divergent intersection; the detection unit 502 is configured to, if the vehicle is within the detection range of the divergent intersection, input the image of the area in front of the vehicle, acquired in real time, into the pre-trained detection model to obtain a candidate diversion area; the processing unit 503 is configured to input the candidate diversion area into the pre-trained post-processing model to obtain the classification result and the key points of the candidate diversion area; and the fitting unit 504 is configured to, if the classification result indicates a diversion area, fit the lane lines according to the key points of the candidate diversion area.
In this embodiment, for the specific processing of the acquisition unit 501, the detection unit 502, the processing unit 503 and the fitting unit 504 of the apparatus 500 for identifying divergent intersections, reference may be made to steps 201, 202, 203 and 204 in the embodiment corresponding to Fig. 2.
In some optional implementations of this embodiment, the apparatus 500 further includes a navigation unit (not shown in the drawings) configured to: obtain navigation information; determine the position of the vehicle relative to the lane lines according to the position of the vehicle and the positions of the lane lines; determine, according to the relative position and the navigation information, whether the vehicle is off course; and, if the vehicle is off course, output a lane-change reminder, otherwise output a go-straight reminder.
In some optional implementations of this embodiment, the detection unit 502 is further configured to: input the image of the area in front of the vehicle, acquired in real time, into the detection model to detect a candidate area and a road sign; if the distance between the position of the vehicle and the position of the divergent intersection is greater than a predetermined value, determine, with the aid of the position of the road sign, whether the candidate area is within a tolerance range; and, if it is within the tolerance range, determine the candidate area as the candidate diversion area.
In some optional implementations of this embodiment, the detection unit 502 is further configured to: input the image of the area in front of the vehicle, acquired in real time, into the detection model at the starting frame of each predetermined period to obtain the candidate diversion area; and the processing unit is further configured to: if the classification result indicates a diversion area, record the coordinates of three key points on each of the left and right lane lines of the diversion area as state variables.
In some optional implementations of this embodiment, the processing unit 503 is further configured to: within each predetermined period, starting from the frame following the starting frame, perform the following tracking steps: compute the circumscribing rectangle of the state variables recorded in the previous frame, and expand the circumscribing rectangle to serve as the search area of the current frame; input the search area of the current frame into the post-processing model to obtain the classification result of the current frame and the key points of the candidate diversion area of the current frame; and, if the classification result of the current frame indicates a diversion area, update the state variables with the key points of the candidate diversion area of the current frame, otherwise stop the tracking.
In some optional implementations of this embodiment, the detection model adopts the yolov3 architecture, with shufflenet_v2 as the backbone network.
In some optional implementations of this embodiment, the post-processing model adopts a shufflenet_v2 network and includes two branches: classification and regression.
Referring now to Fig. 6, a schematic structural diagram of an electronic device 600 (for example, the controller in Fig. 1) suitable for implementing embodiments of the present disclosure is shown. The controller shown in Fig. 6 is only an example and should not impose any limitation on the functions and scope of use of the embodiments of the present disclosure.
As shown in Fig. 6, the electronic device 600 may include a processing device (such as a central processing unit or a graphics processing unit) 601, which can perform various appropriate actions and processing according to a program stored in a read-only memory (ROM) 602 or a program loaded from a storage device 608 into a random access memory (RAM) 603. Various programs and data required for the operation of the electronic device 600 are also stored in the RAM 603. The processing device 601, the ROM 602 and the RAM 603 are connected to one another via a bus 604. An input/output (I/O) interface 605 is also connected to the bus 604.
Generally, the following devices can be connected to the I/O interface 605: input devices 606 including, for example, a touch screen, a touch pad, a keyboard, a mouse, a camera, a microphone, an accelerometer and a gyroscope; output devices 607 including, for example, a liquid crystal display (LCD), a speaker and a vibrator; storage devices 608 including, for example, a magnetic tape and a hard disk; and a communication device 609. The communication device 609 may allow the electronic device 600 to communicate wirelessly or by wire with other devices to exchange data. Although Fig. 6 shows the electronic device 600 with various devices, it should be understood that it is not required to implement or have all of the devices shown; more or fewer devices may alternatively be implemented or provided. Each block shown in Fig. 6 may represent one device or multiple devices as needed.
In particular, according to embodiments of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product including a computer program carried on a computer-readable medium, the computer program containing program code for performing the method shown in the flowcharts. In such an embodiment, the computer program may be downloaded and installed from a network through the communication device 609, installed from the storage device 608, or installed from the ROM 602. When the computer program is executed by the processing device 601, the above-described functions defined in the methods of the embodiments of the present disclosure are performed. It should be noted that the computer-readable medium described in the embodiments of the present disclosure may be a computer-readable signal medium, a computer-readable storage medium, or any combination of the two. The computer-readable storage medium may be, for example, but is not limited to, an electric, magnetic, optical, electromagnetic, infrared or semiconductor system, apparatus or device, or any combination thereof. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection with one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination thereof. In the embodiments of the present disclosure, the computer-readable storage medium may be any tangible medium containing or storing a program that can be used by or in combination with an instruction execution system, apparatus or device. In the embodiments of the present disclosure, a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, in which computer-readable program code is carried. Such a propagated data signal may take many forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination thereof. A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium, and may send, propagate or transmit a program for use by or in combination with an instruction execution system, apparatus or device. The program code contained in the computer-readable medium may be transmitted by any suitable medium, including but not limited to wires, optical cables, RF (radio frequency) and the like, or any suitable combination thereof.
The above computer-readable medium may be included in the above electronic device, or may exist separately without being assembled into the electronic device. The computer-readable medium carries one or more programs that, when executed by the electronic device, cause the electronic device to: obtain the position of the vehicle and the position of the divergent intersection, and determine, according to the position of the vehicle and the position of the divergent intersection, whether the vehicle is within the detection range of the divergent intersection; if the vehicle is within the detection range of the divergent intersection, input the image of the area in front of the vehicle, acquired in real time, into the pre-trained detection model to obtain a candidate diversion area; input the candidate diversion area into the pre-trained post-processing model to obtain the classification result and the key points of the candidate diversion area; and, if the classification result indicates a diversion area, fit the lane lines according to the key points of the candidate diversion area.
Computer program code for performing the operations of the embodiments of the present disclosure may be written in one or more programming languages or combinations thereof, including object-oriented programming languages such as Java, Smalltalk and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. Where a remote computer is involved, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the accompanying drawings illustrate the architecture, functionality and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in a flowchart or block diagram may represent a module, a program segment or a portion of code that contains one or more executable instructions for implementing the specified logical functions. It should also be noted that, in some alternative implementations, the functions noted in the blocks may occur in an order different from that noted in the drawings. For example, two blocks shown in succession may in fact be executed substantially in parallel, or sometimes in the reverse order, depending on the functions involved. It should also be noted that each block of the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, may be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
The units involved in the embodiments described in the present disclosure may be implemented by software or by hardware. The described units may also be provided in a processor; for example, a processor may be described as including an acquisition unit, a detection unit, a processing unit and a fitting unit. The names of these units do not in some cases constitute a limitation of the units themselves; for example, the acquisition unit may also be described as "a unit that obtains the position of the vehicle and the position of the divergent intersection, and determines, according to the position of the vehicle and the position of the divergent intersection, whether the vehicle is within the detection range of the divergent intersection".
The above description is only a preferred embodiment of the present disclosure and an explanation of the technical principles employed. Those skilled in the art should understand that the scope of the invention involved in the present disclosure is not limited to technical solutions formed by the specific combinations of the above technical features, but also covers other technical solutions formed by any combination of the above technical features or their equivalents without departing from the inventive concept, for example, technical solutions formed by replacing the above features with technical features of similar functions disclosed in (but not limited to) the present disclosure.