Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it is to be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the disclosure are for illustration purposes only and are not intended to limit the scope of the disclosure.
In describing embodiments of the present disclosure, the term "include" and its derivatives should be interpreted as being inclusive, i.e., "including but not limited to". The term "based on" should be understood as "based at least in part on". The term "one embodiment" or "the embodiment" should be understood as "at least one embodiment". The terms "first," "second," and the like may refer to different or the same objects. Other explicit and implicit definitions are also possible below.
As mentioned above, research on intelligent driving is beginning to expand from single-vehicle intelligence to assisting vehicle driving through sensors external to the vehicle. Taking automated valet parking as an example, automated valet parking is an important part of high-level intelligent driving and has high practical application value. The technology comprises two technical schemes: one is to improve the intelligence of the vehicle end by installing high-accuracy sensor devices on the vehicle so that it can accurately perceive the surrounding environment; the other is to enhance the intelligence of the parking lot by densely deploying sensor devices at the field end, sensing and localizing the vehicles entering the site, and transmitting the key information back to the vehicles. Both schemes give the vehicle the ability to sense its environment and ensure the accuracy and safety of the vehicle during driving.
In both schemes, however, there are technical problems that are difficult to overcome. In the vehicle-end intelligence scheme, as mentioned above, in order to eliminate the close-range measurement blind zones around the vehicle, additional cameras or lidars must be installed at the sides, front, and rear. To achieve 360-degree, blind-spot-free sensing around the whole vehicle, six cameras and three lidars are generally needed. The throughput and processing rate of the vehicle-end computing platform for such sensor data are limited and cannot support parallel processing of so many devices. In addition, no lidar meeting automotive-grade standards is available at the current level of manufacturing technology, so the vehicle-end scheme as a whole faces a bottleneck. In the field-end intelligence scheme, the parking lot is densely covered with lidar sensors and likewise faces insufficient data processing capacity of the computing platform. In other embodiments, a cloud computing scheme is adopted, but the communication delay it introduces is typically tens of milliseconds, which cannot meet the real-time control requirements of the vehicle. For the above reasons, both schemes have significant technical problems and safety risks.
According to an embodiment of the present disclosure, a solution for guiding vehicle driving based on an external sensing device is proposed. In this scheme, on the one hand, the external sensing device measures and senses the vehicle from an objective, third-person perspective, which solves the measurement blind zone problem of active sensing by vehicle-mounted sensing devices. On the other hand, for the parking lot, the device for guiding vehicle driving is an autonomous, independently mobile device, so full-coverage deployment over the whole area is not needed; that is, only one device needs to be installed in the whole parking lot to guide vehicle driving over the global range. In terms of computational load, because there is only a single device carrying a small number of sensing devices, a relatively common computing platform can accomplish the computing tasks.
In this scheme, in the first step, scene information of a vehicle and the surrounding environment of the vehicle is acquired from a sensing device outside the vehicle, where the scene information can comprise image data acquired by a camera; in the second step, state information of the vehicle, including the position of the vehicle and the like, is determined from the scene information; in the third step, state information of obstacles other than the vehicle in the surrounding environment of the vehicle is determined from the scene information; in the fourth step, vehicle driving information, including the steering wheel and accelerator/brake state of the vehicle, is determined based on the state information of the vehicle, the state information of the obstacles, and the destination information of the vehicle; and in the fifth step, the vehicle information, the obstacle information, and the driving information are transmitted to the vehicle. The flow of guiding vehicle driving is thereby completed.
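By way of non-limiting illustration only, the five-step flow above can be sketched in Python. The function arguments (capture, estimate_vehicle_state, detect_obstacles, plan_driving, send_to_vehicle) and the DrivingInfo structure are hypothetical names introduced for this sketch and are not part of the disclosed embodiments; the sketch merely mirrors the order of the steps.

```python
from dataclasses import dataclass
from typing import Any, Callable


@dataclass
class DrivingInfo:
    steering: float   # steering wheel command
    throttle: float   # accelerator state
    brake: float      # brake state


def guide_vehicle(capture: Callable[[], Any],
                  estimate_vehicle_state: Callable[[Any], Any],
                  detect_obstacles: Callable[[Any, Any], Any],
                  plan_driving: Callable[[Any, Any, Any], DrivingInfo],
                  send_to_vehicle: Callable[..., None],
                  destination: Any) -> DrivingInfo:
    """Sketch of the five-step guidance flow; each callable is supplied by the caller."""
    scene = capture()                                   # Step 1: scene information from the external sensing device
    vehicle_state = estimate_vehicle_state(scene)       # Step 2: vehicle state (position, etc.)
    obstacles = detect_obstacles(scene, vehicle_state)  # Step 3: obstacle states other than the vehicle
    driving_info = plan_driving(vehicle_state, obstacles, destination)  # Step 4: steering / accelerator / brake
    send_to_vehicle(vehicle_state, obstacles, driving_info)             # Step 5: transmit back to the vehicle
    return driving_info
```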
Embodiments of the present disclosure will be described below in detail with reference to the accompanying drawings.
FIG. 1 illustrates a schematic diagram of an example environment 100 in which embodiments of the present disclosure may be implemented. Some typical objects are schematically shown in the example environment 100, including a vehicle 101, a road 102, a sliding guide 103, a vehicle guiding device 104, a sensing device 105, and a sensing device measurement area 106. It should be understood that these illustrated facilities and objects are examples only, and that the objects present in different traffic environments will vary depending on the actual situation. The scope of the present disclosure is not limited in this respect.
In the example of FIG. 1, one vehicle 101 is traveling on a road 102. Vehicle 101 may be any type of vehicle that may carry people and/or things and be moved by a power system such as an engine, including but not limited to a car, truck, bus, electric vehicle, motorcycle, recreational vehicle, train, and the like. Vehicle 101 may be a vehicle with some autonomous driving capability; such a vehicle is also referred to as an autonomous vehicle. Of course, the vehicle 101 may also be a vehicle without autonomous driving capability.
In some embodiments, the sensing device 105 in the environment 100 may be a field-end device independent of the vehicle 101 for monitoring the condition of the environment 100 to obtain sensing information related to the environment 100. In some embodiments, the sensing device 105 may be mounted on the vehicle guiding device 104. In some embodiments, the vehicle guiding device 104 may be slidably mounted on the sliding guide 103. In some embodiments, the measurement area 106 of the sensing device 105 covers the position of the vehicle 101. In an embodiment of the present disclosure, the sensing device 105 comprises an image sensor to acquire image information of the road 102 and the vehicle 101 in the environment 100. In some embodiments, the sensing device may also include one or more other types of sensors, such as lidar, millimeter-wave radar, and the like.
A process of guiding vehicle driving according to an embodiment of the present disclosure will be described below with reference to FIGS. 2 to 6.
FIG. 2 shows image information 200 of the vehicle 101 and the environment around the vehicle acquired by the sensing device 105 according to an embodiment of the present disclosure. The image information 200 is divided into two regions by image processing methods such as object detection and semantic segmentation algorithms, wherein the region 202 is first region data corresponding to the vehicle 101, the region 203 is second region data corresponding to the environment around the vehicle, and the region 201 is the dividing boundary line between the first region data and the second region data. In some embodiments, the sensing device is a lidar, and the lidar measurement point cloud is correspondingly segmented into first region data of the vehicle 101 and second region data of the scene around the vehicle.
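Purely as an illustration of this region split, the following sketch assumes that a binary vehicle mask has already been produced by some object detection or semantic segmentation model; the mask itself, and the helper name split_scene, are assumptions made for this sketch rather than part of the disclosed embodiments.

```python
import numpy as np
import cv2


def split_scene(image: np.ndarray, vehicle_mask: np.ndarray):
    """Split an image into first region data (vehicle), second region data
    (surroundings), and the boundary between them, given a binary vehicle
    mask of shape (H, W) with values in {0, 1}."""
    mask = vehicle_mask.astype(np.uint8)

    # First region data: pixels belonging to the vehicle (mask == 1).
    first_region = cv2.bitwise_and(image, image, mask=mask)

    # Second region data: pixels belonging to the surrounding environment.
    second_region = cv2.bitwise_and(image, image, mask=1 - mask)

    # Dividing boundary: the outer contour of the vehicle mask.
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return first_region, second_region, contours
```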
FIG. 3 shows the guiding device 104 having moved a distance along the sliding guide 103 in an embodiment of the present disclosure, and FIG. 4 shows the scene information 400 collected in the state of FIG. 3. It can be seen that the image information 400 is likewise divided into two regions, wherein 402 is the first region data corresponding to the vehicle 101, 403 is the second region data corresponding to the environment around the vehicle, and 401 is the dividing boundary line between the first region data and the second region data.
Continuing to refer to FIGS. 2, 3, and 4, because the guiding device 104 has moved, the position of the sensing device 105 mounted on the guiding device 104 has also moved. The collected scene information therefore changes; intuitively, the first region corresponding to the vehicle 101 shrinks from the region 202 to the region 402. From the change of the information in the first region data, combined with the moving distance of the guiding device 104, the size information of the vehicle in the image can be obtained; key point data in the image are then extracted, and the three-dimensional information of the vehicle can be reconstructed by a mature visual projection method. In some embodiments, the three-dimensional information is a three-dimensional model, a three-dimensional point cloud, or three-dimensional point color information.
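As a hedged illustration of this reconstruction step, the sketch below assumes the sensing device is a calibrated camera with intrinsic matrix K, that the guiding device 104 has translated by a known distance along the sliding guide, and that the same vehicle key points have been matched in the images before and after the move. Under those assumptions, classical two-view triangulation (one possible "mature visual projection method"; the specific choice is an assumption of this sketch) recovers the three-dimensional key points.

```python
import numpy as np
import cv2


def reconstruct_keypoints(K: np.ndarray,
                          pts_before: np.ndarray,   # 2 x N pixel coordinates before the move
                          pts_after: np.ndarray,    # 2 x N pixel coordinates after the move
                          baseline: np.ndarray):    # known 3-vector displacement of the sensing device
    """Triangulate matched vehicle key points from two camera positions
    separated by a known translation along the sliding guide."""
    K = np.asarray(K, dtype=float)

    # Projection matrix at the first position: camera at the origin.
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])

    # Projection matrix at the second position: pure translation by the known
    # baseline (no rotation, since the guide is a straight rail).
    t = -np.asarray(baseline, dtype=float).reshape(3, 1)
    P2 = K @ np.hstack([np.eye(3), t])

    # Homogeneous triangulation of the matched key points.
    pts_h = cv2.triangulatePoints(P1, P2, pts_before.astype(float), pts_after.astype(float))
    return (pts_h[:3] / pts_h[3]).T   # N x 3 points in the first camera frame
```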
FIG. 5 illustrates a method of determining the three-dimensional position and attitude information of the vehicle 101 in an embodiment of the present disclosure. Map information of the area of the scene 100 in which the vehicle 101 is located is first obtained from an external device, which in some embodiments is a cloud server, a local area network server, or another terminal device reachable through a communication protocol. The map information is a three-dimensional map of the scene; in some embodiments, the map information may be a point cloud map, a geographic information map, or a map composed of color texture information.
Referring to FIGS. 2 and 5, using the first region data 202 of the vehicle 101 in the scene 200 and the three-dimensional information of the vehicle 101 obtained by three-dimensional reconstruction, a constraint between the scene 200 and the three-dimensional information of the vehicle 101 is established by a PnP feature matching algorithm. Since the scene 200 is information acquired by the sensing device 105 in the scene shown in FIG. 1, the scene 200 has a corresponding relationship with the position and attitude information of the sensing device 105. In an embodiment of the present disclosure, this correspondence is embodied as the intrinsic parameters of the camera or the intrinsic parameters of the lidar. Combining this correspondence, a first constraint relationship 511 between the position and attitude 501 of the sensing device 105 and the position and attitude 502 of the vehicle 101 is obtained.
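To make the PnP step concrete, here is a minimal sketch assuming that three-dimensional key points of the vehicle (from the reconstruction above or a known vehicle model) and their two-dimensional locations in the first region data 202 are already matched, and that the camera intrinsics K and distortion coefficients are known; these inputs and the helper name are assumptions made for illustration only.

```python
import numpy as np
import cv2


def vehicle_pose_from_pnp(object_points: np.ndarray,   # N x 3 vehicle key points (vehicle frame)
                          image_points: np.ndarray,    # N x 2 matched pixels in first region data 202
                          K: np.ndarray,
                          dist_coeffs: np.ndarray):
    """Solve a Perspective-n-Point problem: pose of the vehicle relative to the
    sensing device 105, i.e. one way to realize the first constraint relationship 511."""
    ok, rvec, tvec = cv2.solvePnP(object_points.astype(np.float32),
                                  image_points.astype(np.float32),
                                  K, dist_coeffs,
                                  flags=cv2.SOLVEPNP_ITERATIVE)
    if not ok:
        raise RuntimeError("PnP did not converge")
    R, _ = cv2.Rodrigues(rvec)      # rotation from the vehicle frame to the camera frame
    return R, tvec                  # pose of the vehicle in the sensing-device frame
```

The same call, with map landmarks as object points and their matches in the second region data 203 as image points, can serve as a sketch of the second constraint relationship described next.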
Similarly, using the map information and the second region data 203 of the vehicle surroundings in the scene 200, a constraint between the scene 200 and the map information is established by the PnP feature matching algorithm. A second constraint relationship 512 between the position and attitude 501 of the sensing device 105 and the map coordinate system 503 is further obtained.
Continuing with FIG. 5, according to common knowledge or physical laws, for example, during the driving of a vehicle the tires are constantly in contact with the ground, so each tire contact point necessarily lies on the ground plane of the map coordinate system 503, which can be expressed by geometric analysis as a point lying on a plane. Likewise, the horizontal plane of the vehicle remains parallel at all times to the horizontal plane of the map coordinate system, and the two planes can be expressed as parallel by geometric analysis. The above relationships constitute a third constraint relationship 513 between the position and attitude 502 of the vehicle and the map coordinate system 503.
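These two geometric conditions can be written compactly. Purely as an illustrative sketch (the symbols below are introduced here and do not appear elsewhere in the disclosure): let $\mathbf{n}$ and $d$ define the ground plane of the map coordinate system 503, let $\mathbf{p}_i$ be the contact point of the $i$-th tire expressed in map coordinates, and let $\mathbf{n}_v$ be the normal of the vehicle's horizontal plane. Then

\[
\mathbf{n}^{\top}\mathbf{p}_i + d = 0, \quad i = 1, \dots, 4, \qquad \mathbf{n}_v \times \mathbf{n} = \mathbf{0}.
\]

The first relation places every tire contact point on the ground plane; the second forces the vehicle's horizontal plane to remain parallel to the horizontal plane of the map. Together these relations form the third constraint relationship 513.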
Continuing with reference to FIG. 5, the first constraint relationship 511, the second constraint relationship 512, and the third constraint relationship 513 together form a system of equations in which x denotes the vehicle position and attitude 502; an accurate solution for x is obtained by solving this system of equations.
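The concrete form of the equation set is not reproduced above. Purely as a hedged sketch, the three constraint relationships can be collected into a joint estimation problem of the following form, where $p$ denotes the position and attitude 501 of the sensing device 105, $x$ denotes the vehicle position and attitude 502, both expressed in the map coordinate system 503, and $f_1$, $f_2$, $f_3$ denote the residuals of the first constraint relationship 511, the second constraint relationship 512, and the third constraint relationship 513, respectively:

\[
(\hat{p}, \hat{x}) = \arg\min_{p,\, x} \; \lVert f_1(p, x) \rVert^2 + \lVert f_2(p) \rVert^2 + \lVert f_3(x) \rVert^2 ,
\]

with the vehicle position and attitude estimate $\hat{x}$ taken as the solution. Any standard nonlinear least-squares solver can be used for such a formulation; the exact residual definitions follow from the PnP constraints and the geometric constraints described above.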
According to an embodiment of the present disclosure, the sensing device 105 is mounted on a vehicle guiding device 104 that can be moved. Referring to FIGS. 1 and 3, when the vehicle 101 moves, the vehicle guiding device 104 moves adaptively so that the measurement area 106 of the sensing device 105 can continue to cover the position of the vehicle 101. In some embodiments, the sensing device 105 is mounted on a wheeled or other self-propelled mobile platform to accomplish the adaptive movement described above.
FIG. 6 illustrates how the vehicle guiding device 104 moves adaptively in an embodiment of the present disclosure. The vehicle 101 is traveling forward and will sweep through the grey area 601 at the next moment, so the position of the vehicle guiding device 104 is adjusted and the measurement area 106 of the sensing device 105 is pointed at the grey area. With this adjustment, although the sensing device 105 does not cover the area behind the vehicle, it can ensure perception of any obstacle in the traveling direction of the vehicle 101, thereby ensuring safety.
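By way of non-limiting illustration, the adaptive movement can be sketched as follows. The one-dimensional rail coordinate, the constant-velocity prediction of the swept area, and the circular field-of-view check are assumptions made for this sketch only and are not prescribed by the embodiment.

```python
import numpy as np


def reposition_guide(rail_pos: float,
                     vehicle_pos: np.ndarray,   # current vehicle position (x, y) in the map frame
                     vehicle_vel: np.ndarray,   # current vehicle velocity (vx, vy)
                     horizon_s: float,          # prediction horizon in seconds
                     rail_point,                # callable mapping a rail coordinate to an (x, y) point
                     fov_radius: float) -> float:
    """Pick a rail coordinate so the sensing device's measurement area covers
    the region the vehicle will sweep through next (the grey area 601)."""
    # Predict where the vehicle will be over the next horizon (constant velocity).
    predicted = vehicle_pos + vehicle_vel * horizon_s

    # Search candidate rail positions near the current one; keep the first one
    # whose measurement area covers both the vehicle and the predicted area.
    for offset in np.linspace(0.0, 10.0, 101):
        for candidate in (rail_pos + offset, rail_pos - offset):
            centre = np.asarray(rail_point(candidate), dtype=float)
            if (np.linalg.norm(vehicle_pos - centre) <= fov_radius and
                    np.linalg.norm(predicted - centre) <= fov_radius):
                return candidate
    return rail_pos   # fall back to the current position if no candidate covers both
```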
In some embodiments, the scene in which the vehicle operates is an open road or an internal road in an outdoor area.
According to an embodiment of the present disclosure, the electronic device includes a computing unit that can perform various appropriate actions and processes according to computer program instructions stored in a Read Only Memory (ROM) or computer program instructions loaded from a storage unit into a Random Access Memory (RAM). In the RAM, various programs and data required for the operation of the device can also be stored. The computing unit, the ROM, and the RAM are connected to each other by a bus. An input/output (I/O) interface is also connected to the bus.
A plurality of components in the device are connected to the I/O interface, including: an input unit such as a keyboard, a mouse, and the like; an output unit such as various types of displays, speakers, and the like; a storage unit such as a magnetic disk, an optical disk, and the like; and a communication unit such as a network card, a modem, a wireless communication transceiver, and the like. The communication unit allows the device to exchange information/data with other devices via a computer network such as the Internet and/or various telecommunication networks.
The computing unit may be any of various general-purpose and/or special-purpose processing components having processing and computing capabilities. Some examples of the computing unit include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various dedicated Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, etc. The computing unit may perform the respective methods and processes described above.
The functions described herein above may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: a Field Programmable Gate Array (FPGA), an Application Specific Integrated Circuit (ASIC), an Application Specific Standard Product (ASSP), a System on Chip (SOC), a Complex Programmable Logic Device (CPLD), and the like.
Program code for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. These program codes may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the program codes, when executed by the processor or controller, cause the functions/operations specified in the flowchart and/or block diagram to be performed. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
Further, while operations are depicted in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. Under certain circumstances, multitasking and parallel processing may be advantageous. Likewise, while several specific implementation details are included in the above discussion, these should not be construed as limitations on the scope of the disclosure. Certain features that are described in the context of separate embodiments can also be implemented in combination in a single implementation. Conversely, various features that are described in the context of a single implementation can also be implemented in multiple implementations separately or in any suitable subcombination.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.