BACKGROUND OF THE INVENTION

1. Field of the Invention
The present invention relates to navigation devices and, more specifically, to a navigation device that assists the driver of a vehicle by detecting what is going on around the vehicle and displaying that information to him/her.
2. Description of the Background Art
Systems have been developed that monitor the surroundings of a vehicle on the road with a sensor to see what is going on therearound, and warn the driver if a collision with another vehicle is considered highly likely. For example, Japanese Patent Laid-Open Publication No. 11-321494 (99-321494) discloses the following conventional technique.
First, a video signal output from a camera is subjected to image processing to detect whether any vehicle is approaching. If one is detected, the driver is warned by a beep. In addition, an image of the approaching vehicle is marked with a square and displayed on a display device. Accordingly, the driver can spot on the display which vehicle is the one warned of as a collision risk.
In the above conventional technique, however, the driver is given little information precisely when he/she is in danger, while adequate information is offered when no danger is present. Therefore, even when the driver hears a warning beep, he/she may be annoyed, hardly knowing whether any danger awaits and how serious it actually is. Further, if the driver hears route guidance while driving, he/she may be distracted by it and pay close attention only to what lies ahead, not behind. The conventional technique gives no consideration to such a possibility.
SUMMARY OF THE INVENTION

Therefore, an object of the present invention is to provide a navigation device that helps the driver of a vehicle drive safely, without annoying the driver, by presenting him/her accurate information, at the right time, about what is going on around his/her vehicle.
The present invention has the following features to attain the object above.
An aspect of the present invention is directed to a vehicle-mounted navigation device that detects the circumstances around a vehicle and, if warning the user is considered appropriate, arranges an applicable object model for display on a map image while providing guidance to a destination. In the present navigation device, an external monitor part monitors the circumstances around the vehicle and outputs resulting monitor information. Based on the monitor information, an obstacle detection part detects any obstacle observed outside the vehicle and outputs external information including position information of the obstacle. Based on the external information, a guiding part determines whether the obstacle requires the user's attention and, if it does, generates drive assistant information including the position information of the obstacle as given in the external information. Based on the thus-generated drive assistant information and object model display information for the obstacle, a map data arranging part creates an object model for arrangement on a map image. Further, the guiding part generates guidance information including the resulting map image output from the map data arranging part, in response to the route selected by a route selection part, the current position detected by a position detection part, and map data from a map data storage part. The guidance information thus generated is displayed on a display part for the user.
These and other objects, features, aspects and advantages of the present invention will become more apparent from the following detailed description of the present invention when taken in conjunction with the accompanying drawings.
BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is a block diagram showing the structure of a navigation device according to an embodiment of the present invention;
FIG. 2 is a diagram showing the structure of the navigation device of FIG. 1, realized in a general computer system;
FIG. 3 is a flowchart showing a basic flow of processing in the present navigation device;
FIG. 4 is a flowchart showing the detailed process of subroutine step S54;
FIG. 5 is a flowchart showing a basic flow of processing for generating external information by an obstacle detection part 8;
FIG. 6 is a flowchart showing the detailed process of subroutine step S120;
FIG. 7 is a flowchart showing the detailed process of subroutine step S130;
FIG. 8 is a flowchart showing the detailed process of subroutine step S140;
FIG. 9 is a table schematically showing the interrelation between a with-care state and a with-care vehicle;
FIG. 10 is a schematic diagram exemplarily showing what drive assistant information carries therein;
FIG. 11 is a flowchart showing the detailed process of subroutine step S55;
FIG. 12 is a block diagram showing the detailed structure of a map data arranging part 4 when a resulting map image generated thereby is of a 2D landscape;
FIG. 13 is a schematic diagram showing an exemplary map image displayed on a display 5;
FIG. 14 is a schematic diagram showing another example of a map image displayed on the display 5;
FIG. 15 is a schematic diagram showing still another example of a map image displayed on the display 5;
FIG. 16 is a block diagram showing the detailed structure of the map data arranging part 4 when a resulting map image generated thereby is of a bird's eye view;
FIG. 17 is a diagram demonstrating a technique for creating a bird's eye view by subjecting 2D map data to perspective transformation;
FIG. 18 shows an exemplary map image of a bird's eye view displayed on the display 5;
FIG. 19 is a block diagram showing the detailed structure of the map data arranging part 4 when resulting image data generated thereby is of a 3D landscape different from a bird's eye view;
FIG. 20 is a block diagram showing the detailed structure of a 3D map data generation part 147;
FIG. 21 is a diagram exemplarily showing a case where 3D object models, which indicate a with-care vehicle and its direction as it is about to make a rightward lane change, are displayed on the 3D landscape;
FIG. 22 is a block diagram showing the detailed structure of the map data arranging part 4, which receives 2D data from an object model display information storage part 6 and 3D map data from a map data storage part 3, and generates a map image of a 3D landscape;
FIG. 23 is a diagram exemplarily showing several image files prepared as 2D shape information in object model display information; and
FIG. 24 shows an exemplary map image of a 3D landscape generated by the map data arranging part 4 of FIG. 22.
DESCRIPTION OF THE PREFERRED EMBODIMENT

FIG. 1 is a block diagram showing the structure of a navigation device according to an embodiment of the present invention. In FIG. 1, the navigation device includes an input part 2, a map data storage part 3, a map data arranging part 4, a display 5, an object model display information storage part 6, an external monitor part 7, an obstacle detection part 8, a position detection part 9, a route selection part 10, and a guiding part 11.
The input part 2 is driver-operable, and is used for functional selection (e.g., processing item change, map switching, hierarchical level change), point settings, and the like. Outputted from the input part 2 is instruction information, which is forwarded to the route selection part 10.
The position detection part 9 is composed of a GPS receiver, radio beacon receiver, vehicle-speed sensor, angular velocity sensor, absolute azimuth sensor, or the like, and detects the vehicle's current position. Outputted from the position detection part 9 is information about the vehicle's current position, which is forwarded to both the route selection part 10 and the guiding part 11.
The external monitor part 7 may be a CCD camera, laser radar, ultrasound sensor, or the like, and monitors the vehicle's surroundings to know, typically, whether any obstacle is observed or how vehicles behind are behaving. The external monitor part 7 then outputs resulting monitor information to the obstacle detection part 8. Here, the external monitor part 7 may communicate with other vehicles, a traffic control center, and the like, to monitor around its own vehicle. To realize such monitoring, however, a system must be established, and the cost is thus increased. Accordingly, the external monitor part 7 is preferably structured by a sensor, for example. An image capture device such as a camera is also a preferable possibility for the external monitor part 7, since it is comparable to human eyes in perceiving things.
Based on the monitor information provided by the external monitor part 7, the obstacle detection part 8 analyzes any obstacle, determining its type, position, speed, and the like, and outputs external information, which will be described later. Here, such an obstacle includes anything requiring the driver's close attention or hindering his/her driving. As examples, anything lying ahead on the road, any vehicle approaching from behind, and any vehicle behaving recklessly are all regarded as obstacles.
The map data storage part 3 is composed of an optical disk (e.g., CD, DVD), hard disk, semiconductor memory card (e.g., SD card), or the like. The map data storage part 3 stores, in advance, 2D or 3D map data indicating a specific area by geographical features, and in the area, intersections and road connections are defined by coordinates, shape, attribute, regulation information, and the like. The map data stored in the map data storage part 3 is read as appropriate, for usage, by the map data arranging part 4, the route selection part 10, and the guiding part 11.
The route selection part 10 reads the map data from the map data storage part 3 only for a required area, according to the instruction information provided by the input part 2. The route selection part 10 then determines a starting point and a destination based on point information included in the instruction information and the information about the vehicle's current position provided by the position detection part 9. Thereafter, the route selection part 10 searches for a minimum-cost route between the starting point and the destination. The result obtained thereby is outputted to the guiding part 11 as route information.
Based on all of the route information from the route selection part 10, the information about the vehicle's current position from the position detection part 9, the map data from the map data storage part 3, and the external information from the obstacle detection part 8, the guiding part 11 generates guidance information for guiding the vehicle to the destination. This guidance information is provided to the display 5 for display thereon.
The map data arranging part 4 arranges object models in a map space. This arrangement is done based on all of the map data stored in the map data storage part 3, the information provided by the obstacle detection part 8, and the information stored in the object model display information storage part 6.
The display 5 is composed of a display device (e.g., liquid crystal display, CRT display), a speaker, and the like, and displays the guidance information together with a resulting map image provided by the map data arranging part 4. Alternatively, the display 5 may output sounds for guidance with or without performing display.
Like the map data storage part 3, the object model display information storage part 6 is also composed of an optical disk, hard disk, or the like. Stored therein is information about a technique for presenting 2D or 3D object models on a map image according to the information provided by the obstacle detection part 8 or the input part 2. The technique and the details of the information are described later.
The thus-structured navigation device of FIG. 1 can be realized in a general computer system. The structure of a navigation device realized as such is shown in FIG. 2.
In FIG. 2, the navigation device includes a CPU 342, a ROM 343, a RAM 344, an output part 345, an input part 346, a position detection part 349, and an external monitor part 348, all of which are interconnected by a common bus or an external bus. Here, the ROM 343 and the RAM 344 may each include a storage device using an external storage medium.
In FIG. 2, the CPU 342 operates in accordance with programs stored in either or both of the ROM 343 and the RAM 344. The map data arranging part 4, the obstacle detection part 8, the route selection part 10, and the guiding part 11 are each functionally realized by a corresponding program. In such a case, a recording medium typically storing such programs is implemented in the navigation device. The programs may also be transmitted over a communications circuit.
The ROM 343 typically includes the map data storage part 3 of FIG. 1, or the RAM 344 may do so entirely or partially. Similarly, the RAM 344 typically includes the object model display information storage part 6, or the ROM 343 may do so.
FIG. 3 is a flowchart showing a basic flow of processing in the present navigation device. In step S51 of FIG. 3, with a driver-designated destination and map region provided by the input part 2, and with a vehicle position provided by the position detection part 9, the route selection part 10 performs a route search accordingly. The result obtained thereby is outputted to the guiding part 11.
Next, in step S52, the guiding part 11 requests the map data arranging part 4 to arrange map data for displaying a map covering an area corresponding to the vehicle position detected by the position detection part 9. In step S53, the map data arranging part 4 reads map data from the map data storage part 3.
In subroutine step S54, the guiding part 11 reads external information from the obstacle detection part 8 to see the circumstances around the vehicle, for example, whether the vehicle is about to make a right/left turn. Based on the external information and the route search result, the guiding part 11 determines whether the vehicle needs any drive assistant information, and if so, what kind of information. The details of this subroutine step S54 are described later.
In subroutine step S55, according to the drive assistant information and the information stored in the object model display information storage part 6, the map data arranging part 4 creates a 2D or 3D object model for arrangement on the map data read from the map data storage part 3. The details of this subroutine step S55 are also described later.
In step S56, the guiding part 11 has the display 5 display the map image as guidance information, or the map image on which object models are arranged. Here, the guidance information is not necessarily displayed on the map image, and the guiding part 11 may be functionally substituted by the map data arranging part 4 for this operation.
Lastly, in step S57, the guiding part 11 keeps providing guidance until the vehicle reaches its destination. Thus, the procedure returns to step S52 and repeats the processing until the guiding part 11 determines that the vehicle has reached its destination.
FIG. 4 is a flowchart showing the detailed process of subroutine step S54 of FIG. 3. In step S541 of FIG. 4, the guiding part 11 reads from the obstacle detection part 8 the external information, which is generated as appropriate by the obstacle detection part 8 based on the monitor information from the external monitor part 7.
Described in detail now is the operation of the obstacle detection part 8 for generating the external information. FIG. 5 is a flowchart showing a basic flow of processing in the obstacle detection part 8 for this purpose. Here, the external monitor part 7 is presumed to be structured by an image capture device such as a CCD camera, for example, and to capture image data.
In step S110 of FIG. 5, the obstacle detection part 8 receives image data from the external monitor part 7. Here, the image data is typically a still picture, but may be two still pictures captured by two cameras placed a predetermined distance apart, or moving pictures covering a predetermined time period.
In subroutine step S120, from the received image data, the obstacle detection part 8 detects any lane, which is presumably defined by a white line. Referring to FIG. 6, the detailed processing in this subroutine step S120 is now described.
In step S121 of FIG. 6, as for the received image data, the obstacle detection part 8 detects a maximum luminance in a predetermined region thereof. With reference to the thus-detected maximum value, and in consideration of the luminance distribution, the obstacle detection part 8 sets a threshold value considered optimal for detection of the white line.
In step S122, the obstacle detection part 8 searches the image for any pixel exceeding the threshold value. This is done along pixel lines, each drawn between two predetermined pixels on the image data, for example, from a center pixel on the far left column to that on the far right column. Any consecutive pixels all exceeding the threshold value are regarded as part of a white line. In such a manner, the image data is thoroughly searched, and white lines are appropriately extracted therefrom. Here, edge extraction is also a possibility, using an edge extraction filter such as a Sobel filter.
In step S123, for linear approximation, the obstacle detection part 8 sets a processing region wherein the thus-extracted white lines are observed. For the linear approximation, a Hough transform algorithm is used, for example. As a result, the white lines appear linear on the image data.
In step S124, the obstacle detection part 8 detects, as a lane, a triangular region formed by any two adjacent white lines and the bottom side of the image. Assume here that two or more lanes are to be detected.
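The lane detection of steps S121 through S124 can be illustrated with a short sketch. The following Python code is a minimal, hypothetical rendering of the threshold-and-Hough approach described above, assuming a grayscale image held in a NumPy array; the region of interest, threshold ratio, and Hough parameters are illustrative assumptions, not values from this embodiment.

```python
# Minimal sketch of steps S121-S123 (threshold from maximum luminance,
# white-line pixel extraction, linear approximation by Hough transform).
import numpy as np
import cv2

def detect_lane_lines(gray: np.ndarray):
    # S121: derive a binarization threshold from the maximum luminance
    # observed in a predetermined region (here, the lower half of the image).
    roi = gray[gray.shape[0] // 2:, :]
    threshold = 0.8 * roi.max()          # assumed ratio

    # S122: mark pixels exceeding the threshold as white-line candidates.
    candidates = (gray >= threshold).astype(np.uint8) * 255

    # S123: linear approximation of the extracted white lines with a
    # probabilistic Hough transform (parameters are assumptions).
    lines = cv2.HoughLinesP(candidates, rho=1, theta=np.pi / 180,
                            threshold=50, minLineLength=40, maxLineGap=20)
    return lines if lines is not None else np.empty((0, 1, 4))
```

Step S124 would then pair adjacent fitted lines and close each pair with the bottom edge of the image to form the triangular lane regions.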
Next, in subroutine step S130 of FIG. 5, the obstacle detection part 8 extracts any vehicle region from the image data received from the external monitor part 7. Here, a vehicle region is typically defined by a closed curve, which is considered a vehicle's contour. For easy understanding, a vehicle having the present navigation device mounted thereon is referred to simply as the "vehicle", and others observed therearound are collectively referred to as "nearby vehicles". Each vehicle region is assigned a unique vehicle ID, so even if a plurality of vehicle regions are extracted, each can be uniquely identified thereby. This subroutine step S130 is described in more detail below with reference to FIG. 7.
In step S131 of FIG. 7, with respect to each of the lanes detected in step S124, the obstacle detection part 8 performs edge detection, and extracts any region wherein a nearby vehicle is observed. To be specific, horizontal and vertical edges are typically detected from the image first, and tangent lines are drawn around the rectangular region defined thereby for extraction.
In step S132, the obstacle detection part 8 searches a vehicle sample model database (not shown) for a most-analogous vehicle model for overlay onto the thus-extracted rectangular region. If one is found, the obstacle detection part 8 overlays the most-analogous vehicle model onto the extracted region so that their barycenters coincide.
In step S133, the obstacle detection part 8 extracts the contour of the most-analogous vehicle model, and the region thus extracted within the contour is the vehicle region.
In step S134, the obstacle detection part 8 assigns the vehicle region a unique vehicle ID, which is utilized as a part of the external information.
In subroutine step S140 of FIG. 5, based on the vehicle region in the image, the obstacle detection part 8 determines which lane the nearby vehicle is on. Then, the relative distance, relative velocity, and relative acceleration to the vehicle are detected. Here, with an active sensor such as laser radar, it is easy to measure where the nearby vehicle is, but with a camera, additional processing of various kinds is required. In this example, two cameras are used to capture object images, and the actual distance to the object is calculated, on the principle of triangulation, by utilizing the parallax between those two images. This subroutine step S140 is described in more detail with reference to FIG. 8.
In step S141 of FIG. 8, the obstacle detection part 8 detects on which lane the nearby vehicle currently is. This detection is done based on the lanes detected in step S124.
In step S142, the obstacle detection part 8 first extracts features from each of those two object images to find the correspondence therebetween. Here, a feature is, for example, an edge or a vertex of a polyhedron. Then, with the thus-extracted features, correspondence points are searched for using the epipolar constraint. From a pair of correspondence points, a parallax d is first measured, and then the distance D is calculated by the following equation (1):
D = L * f / d    (1)
where L denotes the distance between the two cameras, and f denotes the focal length.
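As a worked illustration of equation (1), the following sketch computes D from a measured parallax, assuming rectified cameras and a focal length expressed in pixels so the units cancel; the numeric values are hypothetical.

```python
# Minimal sketch of the triangulation in equation (1).
def stereo_distance(x_left_px: float, x_right_px: float,
                    baseline_m: float, focal_px: float) -> float:
    """Distance D = L * f / d, with parallax d in pixels and the focal
    length f also in pixels so that the result is in meters."""
    d = abs(x_left_px - x_right_px)   # parallax between the two images
    if d == 0:
        raise ValueError("zero parallax: object at infinity or mismatch")
    return baseline_m * focal_px / d

# Example: baseline 0.4 m, focal length 700 px, parallax 14 px -> 20.0 m.
print(stereo_distance(352.0, 338.0, baseline_m=0.4, focal_px=700.0))
```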
In step S143, from the distance D calculated in step S142, the obstacle detection part 8 calculates space coordinates in a camera coordinate system for the correspondence points so as to calculate the position of the nearby vehicle. The resulting position is temporarily stored as a historic record, typically with its vehicle ID and the time of calculation.
In step S144, with reference to the historic record, the obstacle detection part 8 calculates the speed and acceleration of the nearby vehicle. Here, the guiding part 11 may alternatively perform this calculation. The thus-calculated position, speed, and acceleration of the nearby vehicle are included in the external information together with its corresponding vehicle ID.
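A minimal sketch of this step, assuming the historic record holds timestamped positions per vehicle ID and that simple finite differences stand in for whatever estimator the device actually uses:

```python
# Minimal sketch of step S144: speed and acceleration from the record.
import numpy as np

def speed_and_acceleration(history):
    """history: list of (t, xyz) tuples, oldest first, len >= 3 (assumed)."""
    (t0, p0), (t1, p1), (t2, p2) = history[-3:]
    v1 = (np.asarray(p1) - np.asarray(p0)) / (t1 - t0)   # earlier velocity
    v2 = (np.asarray(p2) - np.asarray(p1)) / (t2 - t1)   # latest velocity
    a = (v2 - v1) / (t2 - t1)                            # acceleration
    return v2, a
```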
Next, in step S150 of FIG. 5, the obstacle detection part 8 determines the vehicle type of the nearby vehicle by taking the vehicle region into consideration. Typically, to determine the vehicle type, the obstacle detection part 8 performs matching, in shape and size, between the vehicle region and the vehicle sample models in the database (not shown). If the distance to the nearby vehicle can be approximately measured from the size of the vehicle region, the processing in step S140 may be omitted. The thus-determined vehicle type is also included in the external information together with its corresponding vehicle ID.
In step S160, the obstacle detection part 8 refers to the vehicle region to see whether the nearby vehicle carries any sign calling for the driver's attention (hereinafter referred to as a "with-care" sign). An example of such a with-care sign is a "not-yet-skilled" sign, which is obligatory for a certain time period for a driver who has just obtained his/her driver's license. A with-care sign reminds other drivers to be attentive to the vehicle carrying it, and for easy recognition, each sign is predetermined in shape and color. Accordingly, the obstacle detection part 8 first extracts, from the vehicle region, any part having the same color as any existing with-care sign. The extracted part is then compared in shape with previously provided templates of the existing with-care signs, to find the one uniquely corresponding to a specific with-care sign. The result obtained thereby is included in the external information together with the applicable vehicle ID.
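The color-then-shape matching of step S160 could be sketched as follows with OpenCV, where the HSV color bounds, the template set, and the score threshold are all illustrative assumptions rather than values from this embodiment:

```python
# Minimal sketch of with-care sign detection: color mask, then template match.
import cv2

def find_with_care_sign(vehicle_region_bgr, templates, score_min=0.7):
    hsv = cv2.cvtColor(vehicle_region_bgr, cv2.COLOR_BGR2HSV)
    # Extract parts sharing the sign's known color (assumed yellow-green).
    mask = cv2.inRange(hsv, (25, 80, 80), (45, 255, 255))
    masked = cv2.bitwise_and(vehicle_region_bgr, vehicle_region_bgr, mask=mask)
    gray = cv2.cvtColor(masked, cv2.COLOR_BGR2GRAY)
    best = None
    for name, tpl in templates.items():   # templates: {name: grayscale image}
        res = cv2.matchTemplate(gray, tpl, cv2.TM_CCOEFF_NORMED)
        _, score, _, _ = cv2.minMaxLoc(res)
        if score >= score_min and (best is None or score > best[1]):
            best = (name, score)
    return best   # e.g. ("not_yet_skilled", 0.83) or None
```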
In step S170, the obstacle detection part 8 refers to the vehicle region to see whether either blinker of the nearby vehicle is on, and if so, which side is flashing on and off. Here, since blinkers are also predetermined in color, as are the with-care signs, the processing of step S160 can be executed to make this determination. Alternatively, processing may be executed to extract any flashing region from a plurality of images captured at predetermined time intervals. The result obtained thereby is included in the external information together with the applicable vehicle ID.
In step S180, the obstacle detection part 8 determines whether or not every vehicle region in the image has been thoroughly processed. If not yet, the procedure returns to subroutine step S130; otherwise, this is the end of the processing. Note that the processing of FIG. 5 is repeated at regular intervals to continually monitor the circumstances around the vehicle.
After going through such processing, the external information is generated for every vehicle ID and read into the guiding part 11. This is the end of the processing in step S541 of FIG. 4.
Next, in step S542, the guiding part 11 refers to the external information to determine whether or not there is any with-care vehicle around the vehicle. Here, any nearby vehicle is regarded as a with-care vehicle if it seems dangerous in consideration of its vehicle type, speed, distance to the vehicle, driving manner, and the like. Typical examples of a with-care vehicle are any nearby vehicle (motorcycle included) that is rapidly approaching, staying close behind, flashing its blinker on and off, carrying a with-care sign, or moving meanderingly. Emergency vehicles are also included, for example.
Such with-care vehicles are easily recognizable by utilizing the external information. In detail, to recognize any nearby vehicle rapidly approaching or staying behind, the relative distance, relative velocity, and relative acceleration to the vehicle detected in step S140 of FIG. 5 are utilized with the help of a predetermined equation and table. Similarly, any nearby vehicle moving meanderingly is also easily recognized by first calculating the variation of its motion vector, and then detecting its degree of swinging with respect to the heading direction. Here, as described in the foregoing, the vehicle type is detected in step S150, the with-care sign in step S160, and the blinker in step S170.
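A minimal sketch of these with-care tests, with all thresholds being illustrative assumptions:

```python
# Minimal sketch of classifying a nearby vehicle as a with-care vehicle.
def with_care_flags(rel_dist_m, rel_speed_mps, motion_vectors, blinker,
                    has_sign, is_emergency):
    flags = set()
    if rel_speed_mps < -5.0 and rel_dist_m < 30.0:   # closing fast and near
        flags.add("approaching")
    if abs(rel_speed_mps) < 1.0 and rel_dist_m < 10.0:
        flags.add("staying_behind")
    # Meandering: large sideways swing of the motion vector relative to
    # the heading direction, measured over recent frames.
    lateral = [abs(v[0]) for v in motion_vectors]    # v = (lateral, forward)
    if len(lateral) >= 2 and max(lateral) - min(lateral) > 1.5:
        flags.add("meandering")
    if blinker in ("left", "right"):
        flags.add(f"blinker_{blinker}")
    if has_sign:
        flags.add("with_care_sign")
    if is_emergency:
        flags.add("emergency")
    return flags
```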
If no with-care vehicle is observed, the procedure returns to the processing of FIG. 3. If any is observed, the procedure goes to step S543.
In step S543, in consideration of the thus-detected with-care vehicle, the guiding part 11 decides whether or not the vehicle is in a with-care state. Here, any state is regarded as a with-care state if the nearby vehicle is considered a threat to the vehicle. This determination is made based on what the route ahead looks like, whether the vehicle needs to make a right/left turn or decelerate, and the like. If it is determined that the vehicle is not in a with-care state, the procedure returns to the processing of FIG. 3. If Yes, the procedure goes to step S544.
Exemplarily described here is the interrelation between the with-care vehicle and the with-care state. FIG. 9 is a schematic table showing such interrelation. In the table, the value "0" denotes "basically no threat", and the value "1" denotes "threat". Although this table is exemplified for countries where vehicles keep to the left (e.g., Japan), it becomes applicable to countries where vehicles keep to the right simply by switching "right" and "left".
In FIG. 9, shown in the columns of the table are attributes relevant to the threat type of the with-care vehicle. To be specific, the with-care vehicle is defined as approaching the vehicle, having its blinker on, carrying a with-care sign, moving meanderingly, or being of a notable vehicle type such as a motorcycle or an emergency vehicle. If the with-care vehicle is defined as approaching, the table further indicates from which lane the with-care vehicle is approaching, specifically, from the right lane, the left lane, or right behind the vehicle. Similarly, if the with-care vehicle is defined as having its blinker on, the table further indicates which side of the blinkers is flashing on and off.
Shown in the rows of the table are various with-care states, specifically, whether the vehicle needs to move to a right/left lane, make a right/left turn, or brake or decelerate, and whether the route ahead is narrowed or curved, and if so, on which side.
Here, to easily predict whether the route ahead is narrowed or curved, the map data in the map data storage part 3 may be referred to, only for a certain range of the route selected by the route selection part 10. As such, by knowing in advance what the route ahead looks like, the vehicle can be ready for other nearby vehicles' possible behaviors, for example, lane changes and sharp turns.
Further, to easily predict whether the vehicle is changing lanes, making a right/left turn, braking, or decelerating, realtime monitoring of the vehicle's current position, steering wheel, accelerator, brake, and the like will do. Similarly, realtime monitoring of the vehicle's blinker helps predict to which lane the vehicle moves next. Further, the route selected by the route selection part 10 is analyzed to know the vehicle's possible behavior.
FIG. 9 shows, for example, that if there is any nearby vehicle approaching on the right lane, collision is considered possible if the vehicle moves to the right lane or makes a right turn. Therefore, the applicable boxes in the table all show "1". Similarly, in the case where the route selected by the route selection part 10 is narrowed ahead and the right lane ends, any nearby vehicle approaching on the right lane may aggressively move into the same lane, and thus collision is considered likely. Also, in the case where the route curves to the right with a certain curvature or more, any nearby vehicle driving fast on the right lane may slide therearound into the same lane. Accordingly, the applicable boxes in the table all show "1".
When there is any nearby vehicle approaching from behind, there seems to be no harm if the vehicle makes a right/left turn. However, the driver usually decreases the vehicle's speed to make a right/left turn. Therefore, depending on the vehicle's relative position, relative velocity, and relative acceleration with respect to the nearby vehicle, collision is considered likely. Thus, the applicable boxes in the table all show "1". On the other hand, even if a nearby vehicle is approaching on the same lane from behind while the vehicle is stationary, the nearby vehicle is expected to stop and usually does so. Therefore, such a case is considered no threat, and thus the applicable boxes in the table all show "0".
Considered next is a motorcycle positioned behind the vehicle or in the left lane. When the vehicle makes a left turn, such a motorcycle may easily be in the vehicle's blind spot, and thus requires some attention. Also, if the driver of the vehicle opens the left door without paying much attention, the door might hit the motorcycle. Thus, the applicable boxes in the table all show "1". Here, alternatively, the driver's hand detected on either door of the vehicle may possibly be considered a threat.
As for an emergency vehicle, the vehicle is expected to give way thereto, and in the course of doing so, the emergency vehicle may cause some harm to the vehicle. Thus, the applicable boxes in the table all show "1". Here, alternatively, regardless of with-care states, the drive assistant information may be generated whenever an emergency vehicle is detected.
Here, FIG. 9 is by way of example only, and various other with-care states, with-care vehicles, and combinations thereof are surely possible. For example, any nearby vehicle is regarded as a with-care vehicle if it changes lanes without putting its blinker on, or continuously increases and decreases its speed. Also, the obstacle detection part 8 may additionally recognize the nearby vehicle's license plate and vehicle type; if the nearby vehicle is a luxury type, the driver of the vehicle may be warned, and if it is found to be a stolen or wanted vehicle, a call may automatically be made to the police.
In such a manner, the interrelation between the with-care vehicle and the with-care state can be measured. Again, FIG. 9 is by way of example only, and such a generic table is not always necessary if some other technique is applicable to measure the above interrelation.
To be specific, the above-described threat level varies with the vehicle's and the nearby vehicle's position, speed, acceleration, turning or lane-changing behavior, road shape, road surface condition, and the like. Therefore, from such interrelation, a predetermined equation or a complex conditional expression may be derived, with various imaginable cases taken into consideration. By using such an equation or expression, the interrelation between the with-care vehicle and the with-care state can be measured.
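For illustration, the FIG. 9 interrelation could be held as a simple lookup table; the entries below mirror examples given in the text, while the key names are hypothetical:

```python
# Minimal sketch of the FIG. 9 table: 1 = threat, 0 = basically no threat.
THREAT_TABLE = {
    # (threat attribute of with-care vehicle, with-care state) -> 0/1
    ("approaching_right_lane", "move_to_right_lane"): 1,
    ("approaching_right_lane", "right_turn"): 1,
    ("approaching_behind", "vehicle_stationary"): 0,
    ("motorcycle_left_lane", "left_turn"): 1,
    ("emergency_vehicle", "any"): 1,
}

def is_threat(attribute: str, state: str) -> bool:
    # Fall back to an attribute-wide "any" entry, as for emergency vehicles.
    return bool(THREAT_TABLE.get((attribute, state),
                                 THREAT_TABLE.get((attribute, "any"), 0)))
```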
As such, in step S543 of FIG. 4, the guiding part 11 decides whether or not the vehicle is in a with-care state in consideration of the with-care vehicle.
Next, in step S544, to deal with the with-care vehicle, the guiding part 11 generates drive assistant information. Here, the drive assistant information is typically used to arrange an image of the with-care vehicle on the map image for display.
FIG. 10 is a schematic diagram showing what the drive assistant information carries. In FIG. 10, the drive assistant information includes a nearby vehicle ID 551, vehicle type information 552, color information 553, a relative position 554, and one or more attributes relevant to threat level (hereinafter, threat attributes) 555. The nearby vehicle ID 551 is an identification number uniquely assigned to each nearby vehicle. The vehicle type information 552 and the color information 553 are determined based on the external information detected by the obstacle detection part 8. Since the vehicle type information 552 and the color information 553 are mainly used for image display, they are not necessarily included. The relative position 554 is likewise not always necessary if the navigation device warns the driver only by sound, without display on the map image.
Here, the threat attributes 555 are the ones selectively determined by the guiding part 11 as harmful in consideration of the interrelation between the with-care vehicle and the with-care state (for example, the value "1" in the table of FIG. 9).
Note that when the guiding part 11 makes such a selective determination, the with-care state determined in step S543 is not the only concern. For example, if the vehicle changes lanes to make a turn while decreasing its speed, the vehicle is in three types of with-care states. In such a case, every possible threat attribute is selected in consideration of the interrelation among those with-care states and the with-care vehicles.
Further, a with-care vehicle may carry several threat attributes. As an example, if a nearby vehicle with a with-care sign is meanderingly approaching, the number of threat attributes is at least three. In such a case, every threat attribute relevant to the with-care vehicle is selected.
As such, in step S544, the guiding part 11 selectively determines every possible threat attribute for a given with-care vehicle by taking every known with-care state into consideration, and generates drive assistant information accordingly.
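A minimal sketch of the FIG. 10 record, where the field names mirror reference numerals 551 through 555 but the types are illustrative assumptions:

```python
# Minimal sketch of the drive assistant information of FIG. 10.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class DriveAssistantInfo:
    nearby_vehicle_id: int                     # 551: unique per nearby vehicle
    vehicle_type: Optional[str] = None         # 552: mainly for image display
    color: Optional[str] = None                # 553: mainly for image display
    relative_position: Optional[tuple] = None  # 554: omissible if sound-only
    threat_attributes: list = field(default_factory=list)  # 555: one or more
```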
Next, in step S545, the guiding part 11 refers to the external information to determine whether there is any other with-care vehicle. If Yes, the procedure returns to step S543 and repeats the same processing as above until the drive assistant information is generated for every with-care vehicle. If No, the procedure returns to the processing of FIG. 3 and goes to step S55.
FIG. 11 is a flowchart showing the detailed process of subroutine step S55 of FIG. 3. In step S551 of FIG. 11, the map data arranging part 4 determines whether or not there is drive assistant information generated by the guiding part 11 in subroutine step S54. If No, the procedure returns to the processing of FIG. 3; otherwise, it goes to step S552.
In step S552, the map data arranging part 4 reads, from the object model display information storage part 6, the object model display information corresponding to certain drive assistant information. Here, the object model display information is used to display object models corresponding to the with-care vehicle and its threat attributes.
In step S553, the map data arranging part 4 creates an object model corresponding to the thus-read object model display information, and arranges the object model on a map image with appropriate dimensions, in consideration of the display scale and the map space. The resulting map image is displayed by the display 5.
Here, the display scale is set so that the vehicle and the with-care vehicles are displayed on the map image with appropriate size and spacing in consideration of their actual relative distance. For example, the present navigation device provides four display scales. The first display scale is used for displaying a map image covering 1.6 to 50 kilometers square, and such a map image is called a 3D satellite map. The second display scale is used for a map image covering 100 to 800 meters square, and the map image is generally called a 2D map. A map image at the third display scale covers 25 to 100 meters square, and is called a virtual city map. A map image at the fourth display scale covers 25 to 50 meters square, and is called a front view map. In the virtual city map and the front view map among those four, the vehicle and the with-care vehicles look appropriate in size. Those four maps are switched among as appropriate. Accordingly, the drive assistant information is presented to the driver of the vehicle with higher accuracy and in an easy-to-see manner.
Here, the display scale is not limited to those four, and may be continuously changed so that the vehicle and the with-care vehicles always look appropriately spaced. After setting the display scale as such, further processing is carried out to arrange the thus-created object model on the map image for display. The details thereof are described later.
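Switching among the four named display scales could be sketched as follows, where the rule of picking a scale from the largest vehicle-to-with-care-vehicle distance is an illustrative assumption:

```python
# Minimal sketch of choosing a display scale from the relative distance.
def pick_display_scale(max_relative_distance_m: float) -> str:
    if max_relative_distance_m <= 50:
        return "front view map"      # fourth scale: 25-50 m square
    if max_relative_distance_m <= 100:
        return "virtual city map"    # third scale: 25-100 m square
    if max_relative_distance_m <= 800:
        return "2D map"              # second scale: 100-800 m square
    return "3D satellite map"        # first scale: 1.6-50 km square
```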
In step S554, the map data arranging part 4 determines whether or not there is other drive assistant information generated by the guiding part 11. If No, the procedure returns to the processing of FIG. 3. If Yes, the procedure returns to step S552 to repeat the same processing until every piece of drive assistant information has been displayed.
Described next, concerning step S553 of FIG. 11, is how the map data arranging part 4 generates a map image. First described is a case where the resulting map image is of a 2D landscape. FIG. 12 is a diagram showing the detailed structure of the map data arranging part 4 for such a case. In FIG. 12, the map data arranging part 4 includes a 2D object model creation part 145 and a 2D data arranging part 146.
The 2D object model creation part 145 receives the object model display information from the object model display information storage part 6, and creates a 2D object model. The 2D data arranging part 146 receives the thus-created 2D object model and 2D map data from the map data storage part 3, and generates a map image by arranging them in accordance with the 2D coordinates included in each. FIG. 13 is a schematic diagram showing an exemplary map image displayed as such on the display 5.
In FIG. 13, on a road with two lanes each way in the 2D map image, arranged are a vehicle object model 301, a nearby vehicle object model 302, which is regarded as a with-care vehicle, and an arrow object model 303, which corresponds to the threat attribute. Here, the vehicle type and color of the nearby vehicle object model 302 are preferably displayed according to the drive assistant information. The nearby vehicle object model 302 may be emphatically displayed to indicate that the nearby vehicle is the with-care vehicle. For example, the nearby vehicle object model 302 may be shown in red, flashing on and off, or changing colors. Any manner will do as long as the driver is warned thereby.
Assume here that the with-care vehicle has its front right blinker on, and the vehicle is also about to move to the right lane. In such a case, as described with FIG. 9, collision between those two vehicles is likely. Therefore, to warn the driver of the vehicle that the with-care vehicle behind is moving to the right, the arrow object model 303 shown in FIG. 13 is displayed.
Here, even if the nearby vehicle is approaching on the right lane, as shown in FIG. 9, the nearby vehicle is considered no threat unless the vehicle moves to the right lane or makes a right turn, or the road ahead is narrowed on the right side or curved to the right. As such, if the nearby vehicle is determined to be harmless, no drive assistant information is generated, and thus no arrow object model 303 is displayed.
FIG. 14 is a schematic diagram showing another example of a map image displayed on the display 5. In FIG. 14, arranged on the road are a vehicle object model 311, a nearby vehicle object model 312, which is regarded as a with-care vehicle, and a with-care sign object model 313, which corresponds to the threat attribute. Assume here that the vehicle is about to make a left turn and the with-care vehicle behind it carries a with-care sign. In such a case, there seems to be some threat, and thus the with-care sign object model 313 shown in FIG. 14 is displayed to warn the driver of the vehicle that the with-care vehicle behind carries the with-care sign.
Here, a plurality of threat attributes may be indicated for one nearby vehicle object model, and two or more object models may be provided to indicate one threat attribute. FIG. 15 is a schematic diagram showing still another example of a map image displayed on the display 5. In FIG. 15, arranged on the road are a vehicle object model 321, a nearby vehicle object model 322, which is regarded as a with-care vehicle, and a meandering sign object model 323 and a speech bubble object model 324, both of which correspond to the threat attribute. Assume here that the vehicle is about to make a left turn and the with-care vehicle behind it is meandering. In such a case, there seems to be some threat, and thus the meandering sign object model 323 and the speech bubble object model 324 shown in FIG. 15 are displayed to warn the driver of the vehicle that the with-care vehicle behind is meandering. Here, the speech bubble object model 324 has words of warning displayed therein.
As such, by appropriately arranging the vehicle object model and the nearby vehicle object models on a map image, the driver can instantaneously understand the positional relationship among them. Also, by creating an appropriate object model for every possible threat attribute, the driver can instantaneously recognize the threat level. Accordingly, the information offered by the present navigation device can appropriately help the driver of the vehicle drive with higher accuracy.
Described next is a case where the resulting map image generated in the map data arranging part 4 is of a 3D landscape. In such a case, neither the object model created from the object model display information nor the map data stored in the map data storage part 3 needs to be 3D. Exemplified now is a case where the data provided by the object model display information storage part 6 to the map data arranging part 4 is 3D, the map data from the map data storage part 3 is 2D, and the resulting map image is of a 3D landscape.
FIG. 16 is a block diagram showing the detailed structure of the map data arranging part 4, which receives 3D data from the object model display information storage part 6 and 2D map data from the map data storage part 3. The resulting map image generated thereby is of a bird's eye view.
In FIG. 16, the map data arranging part 4 includes a bird's eye view transformation part 141, a 3D object model creation part 142, and a 3D data arranging part 143.
The bird's eye view transformation part 141 receives the 2D map data from the map data storage part 3, and then transforms the data into a bird's eye view. A technique for transforming 2D data into a bird's eye view is disclosed in detail in "Development of a Car Navigation System with a Bird's-eye View Map Display" (Society of Automotive Engineers of Japan, Inc., Papers, 962 1996-5), for example. Such a technique is described next.
FIG. 17 is a diagram demonstrating a technique for creating a bird's eye view by subjecting 2D map data to perspective transformation. In FIG. 17, a point V(Vx, Vy, Vz) indicates the viewpoint coordinates. A point S(Sx, Sy) indicates coordinates of the bird's eye view image on a monitor, and a point M(Mx, My, Mz) indicates coordinates on the 2D map image. Here, since the map data is 2D data, Mz is 0. The values Ex, Ey, and Ez indicate the position of the point M relative to the viewpoint, expressed in the viewpoint coordinate system. A reference character θ denotes the look-down angle, while φ indicates the direction angle of the viewpoint. A reference character DS indicates a theoretical distance between the viewpoint and the image.
Here, with the viewpoint coordinates V(Vx, Vy, Vz), the look-down angle θ, and the direction angle φ specified in value, the coordinates S(Sx, Sy) of the bird's eye view image can be calculated with respect to the coordinates M(Mx, My, Mz) on the 2D map image. Equation (2) therefor is as follows:
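The published equation is not legible in this copy; a standard form of the perspective transformation, consistent with the variables defined above (rotation by the direction angle φ, tilt by the look-down angle θ, then perspective division by DS), would be:

$$
\begin{pmatrix} E_x \\ E_y \\ E_z \end{pmatrix}
=
\begin{pmatrix} 1 & 0 & 0 \\ 0 & \sin\theta & \cos\theta \\ 0 & -\cos\theta & \sin\theta \end{pmatrix}
\begin{pmatrix} \cos\varphi & -\sin\varphi & 0 \\ \sin\varphi & \cos\varphi & 0 \\ 0 & 0 & 1 \end{pmatrix}
\begin{pmatrix} M_x - V_x \\ M_y - V_y \\ M_z - V_z \end{pmatrix},
\qquad
S_x = \frac{DS \, E_x}{-E_z}, \quad S_y = \frac{DS \, E_y}{-E_z}
\tag{2}
$$

Exact signs and axis conventions depend on the coordinate setup of FIG. 17.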
With the above equation (2), the bird's eye view transformation part 141 transforms the 2D map data provided by the map data storage part 3 into a bird's eye view. The resulting bird's eye view data is forwarded to the 3D data arranging part 143.
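In code, the transformation could be sketched as follows, under the same assumed conventions (a viewpoint above the map plane, with the look-down angle measured so that θ = 90° looks straight down):

```python
# Minimal sketch of the bird's-eye perspective transformation of equation (2).
import numpy as np

def birds_eye(point_2d, view, theta, phi, ds):
    """Map a 2D map point (Mx, My, 0) to screen coordinates (Sx, Sy)."""
    m = np.array([point_2d[0], point_2d[1], 0.0])
    v = np.asarray(view, dtype=float)
    rz = np.array([[np.cos(phi), -np.sin(phi), 0],     # rotate by direction
                   [np.sin(phi),  np.cos(phi), 0],     # angle phi
                   [0, 0, 1]])
    rx = np.array([[1, 0, 0],
                   [0,  np.sin(theta), np.cos(theta)], # tilt by look-down
                   [0, -np.cos(theta), np.sin(theta)]])  # angle theta
    e = rx @ rz @ (m - v)              # position relative to the viewpoint
    return ds * e[0] / -e[2], ds * e[1] / -e[2]   # perspective division
```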
The 3D object model creation part 142 receives the 3D data, and then creates a 3D object model through the processing in subroutine step S553 of FIG. 11. The thus-created 3D object model is forwarded to the 3D data arranging part 143.
The 3D data arranging part 143 arranges the thus-received 3D data and object model data together for output to the display 5. FIG. 18 shows exemplary data thus generated and displayed on the display 5.
In FIG. 18, included on the map image of the bird's eye view are a vehicle object model 331, a nearby vehicle object model 332, which is regarded as a with-care vehicle, and a with-care sign object model 333, which corresponds to the threat attribute. The assumption made here is the same as in the case of FIG. 14, and thus is not described again. In FIG. 18, these object models are presumed to be 3D, of a type that changes in shape with varying viewpoints even if it looks 2D.
Exemplified now is a case where the data provided by the object model display information storage part 6 is 3D, the data from the map data storage part 3 is 2D, and the resulting map image is of a 3D landscape that looks different from the bird's eye view.
FIG. 19 is a block diagram showing the detailed structure of the map data arranging part 4, which receives 3D data from the object model display information storage part 6 and 2D map data from the map data storage part 3. The resulting map image generated thereby is of a 3D landscape, which is different from a bird's eye view.
In FIG. 19, the map data arranging part 4 includes a 3D map data generation part 147, the 3D object model creation part 142, and the 3D data arranging part 143.
In FIG. 19, the 3D object model creation part 142 and the 3D data arranging part 143 are similar in structure and operation to those in FIG. 16. Thus, the 3D map data generation part 147 is mainly described in structure and operation below.
FIG. 20 is a block diagram showing the detailed structure of the 3D map data generation part 147. In FIG. 20, the 3D map data generation part 147 includes a height/width information supply part 1471 and a 3D polygon creation part 1472. The height/width information supply part 1471 supplies information about height and width to the 3D polygon creation part 1472 in response to the 2D map data provided by the map data storage part 3. The 3D polygon creation part 1472 then creates a 3D object model.
The height/width information supply part 1471 analyzes the 3D shape of a road, for example, with the help of the link type (e.g., side-road link, elevated link) and information about branching nodes included in the 2D map data, typically by applying a predetermined pattern. With the analyzed result, the height/width information supply part 1471 adds information about height and width to the 2D data of the road, for example, so as to generate 3D map data.
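A minimal sketch of this pattern-based supply, where the link-type-to-height and road-class-to-width mappings and the record layout are illustrative assumptions:

```python
# Minimal sketch of the height/width information supply part 1471.
LINK_HEIGHT_M = {"elevated": 8.0, "side_road": 0.0, "ordinary": 0.0}
LINK_WIDTH_M = {"highway": 14.0, "ordinary": 7.0, "side_road": 4.0}

def to_3d_road(link):
    """link: dict with 'polyline', 'link_type', 'road_class' keys (assumed)."""
    h = LINK_HEIGHT_M.get(link["link_type"], 0.0)
    w = LINK_WIDTH_M.get(link["road_class"], 7.0)
    # Lift each 2D vertex (x, y) to (x, y, h); the width is kept for the
    # 3D polygon creation part 1472 to extrude the road surface.
    return {"vertices": [(x, y, h) for (x, y) in link["polyline"]],
            "width": w}
```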
The 3D polygon creation part 1472 receives the thus-generated 3D map data, and creates a 3D object model with a general technique therefor. In the above manner, the map data arranging part 4 of FIG. 19 generates a map image of a 3D landscape, which looks different from a bird's eye view.
FIG. 21 is a diagram exemplarily showing a case where 3D object models indicating a with-care vehicle and its direction, as it is about to move to the right lane, are displayed on the 3D landscape. As shown in FIG. 21, by displaying both the nearby vehicle object model moving to the right lane and the arrow object model indicating its moving direction, the driver of the vehicle can intuitively understand what the nearby vehicle behind is about to do.
Exemplified next is a case where the data provided by the object model display information storage part 6 to the map data arranging part 4 is 2D, the data from the map data storage part 3 is 3D, and the resulting map image is of a 3D landscape.
FIG. 22 is a block diagram showing the detailed structure of the map data arranging part 4, which receives 2D data from the object model display information storage part 6 and 3D map data from the map data storage part 3. The resulting map image generated thereby is of a 3D landscape.
In FIG. 22, the map data arranging part 4 includes the 2D object model creation part 145, a 2D/3D coordinate transformation part 144, and the 3D data arranging part 143.
In FIG. 22, the 2D object model creation part 145 receives 2D data from the object model display information storage part 6, and then creates a 2D object model by going through subroutine step S553 of FIG. 11.
To be specific, as already described, a plurality of image files are prepared as the 2D shape information included in the object model display information. FIG. 23 is a diagram exemplarily showing several image files prepared as such. In FIG. 23, images are classified into "meandering vehicle", "motorcycle", and "vehicle with with-care sign". Each image type corresponds to object model display information, and is further classified into "close-range", "medium-range", and "long-range".
The 2D object model creation part 145 first determines the image type by referring to the object model display information. The 2D object model creation part 145 then selects a distance range for the determined image type from among "close-range", "medium-range", and "long-range". Here, as described above, the object model display information includes position information indicating the position of the object model by 3D coordinates. In FIG. 23, selecting a distance range for each image is based on the distance between such 3D coordinates and the viewpoint coordinates. Therefore, typically, the 2D object model creation part 145 calculates this distance to determine which distance range applies.
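A minimal sketch of this selection, where the range limits and the file naming scheme are illustrative assumptions:

```python
# Minimal sketch of choosing among the FIG. 23 image files by distance.
import math

def select_image_file(image_type, model_xyz, view_xyz):
    dist = math.dist(model_xyz, view_xyz)   # viewpoint-to-object distance
    if dist < 30:
        rng = "close-range"
    elif dist < 100:
        rng = "medium-range"
    else:
        rng = "long-range"
    return f"{image_type}_{rng}.png"        # assumed file naming scheme
```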
As for the resulting 2D object model, the 2D/3D coordinate transformation part 144 transforms its 2D coordinates into 3D coordinates based on the corresponding position information. Then, the resulting 3D object data is inputted into the 3D data arranging part 143.
The 3D data arranging part 143 receives 3D map data from the map data storage part 3. The 3D data arranging part 143 then arranges the map data together with the 3D object model data provided by the 2D/3D coordinate transformation part 144 to generate a map image of a 3D landscape. The thus-generated map image is forwarded to the display 5.
Here, in the map data arranging part 4 structured as above, the 2D object model created by the 2D object model creation part 145 is transformed into 3D data by the 2D/3D coordinate transformation part 144, and then arranged together with the 3D map data in the 3D data arranging part 143. This is not restrictive; the 2D/3D coordinate transformation part 144 may be omitted, and a 2D/3D image arranging part may be provided as an alternative to the 3D data arranging part 143. If this is the case, the 2D/3D image arranging part pastes the 2D object model created by the 2D object model creation part 145 onto a map image of a 3D landscape. In more detail, the 2D/3D image arranging part first generates a map image of a 3D landscape by transforming the 3D map data to screen coordinates, then calculates the screen coordinates of the 2D object model, and arranges the 2D data as it is on the resulting map image. With such a modified structure, an object model looks the same even if viewed from various positions, and is always displayed the same. Therefore, better viewability is offered.
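The pasting step of this modified structure could be sketched as follows, where the frame and sprite layout is an illustrative assumption and the sprite's screen position is presumed to come from the screen-coordinate calculation just described:

```python
# Minimal sketch of pasting a 2D object model onto a rendered 3D landscape.
import numpy as np

def paste_sprite(frame, sprite, screen_xy):
    """frame, sprite: HxWx3 uint8 arrays; screen_xy: sprite center."""
    h, w = sprite.shape[:2]
    x0 = int(screen_xy[0]) - w // 2
    y0 = int(screen_xy[1]) - h // 2
    # Clip against the frame so a partly off-screen sprite still pastes.
    fx0, fy0 = max(x0, 0), max(y0, 0)
    fx1 = min(x0 + w, frame.shape[1])
    fy1 = min(y0 + h, frame.shape[0])
    if fx0 >= fx1 or fy0 >= fy1:
        return frame                 # entirely off-screen
    frame[fy0:fy1, fx0:fx1] = sprite[fy0 - y0:fy1 - y0, fx0 - x0:fx1 - x0]
    return frame
```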
FIG. 24 shows an exemplary map image of a 3D landscape generated by the map data arranging part 4 of FIG. 22. In FIG. 24, the map image has an object model indicating the vehicle displayed in the middle, and on its right side, object models indicating meandering vehicles on a road. The size of the object models indicating meandering vehicles is changed based on the distance from the viewpoint coordinates, as described above, thereby adding depth to the map image of the 3D landscape even though the object models are 2D.
Lastly, exemplified is a case where the data provided by the object model display information storage part 6 to the map data arranging part 4 is 2D, the map data from the map data storage part 3 is 2D, and the resulting map image is of a 3D landscape.
In this case, the map data arranging part 4 of FIG. 22 is additionally provided with the bird's eye view transformation part 141 of FIG. 16 or the 3D map data generation part 147 of FIG. 19, both of which convert 2D map data into 3D map data. Also, in such a map data arranging part 4, the 3D data arranging part 143 arranges the converted map data together with the object model data from the 2D/3D coordinate transformation part 144. Here, the components included therein operate similarly to those described above.
In such a case where a map image of a 3D landscape is generated from 2D data, the 2D data stored in the object model display information storage part 6 is smaller in amount than 3D data. Therefore, when storing object model data of varying types, the object model display information storage part 6 can store a larger number of types, and when storing the same types of object model data, its capacity can be reduced.
Further, in such a case where a map image of a 3D landscape is generated from 2D data, the driver of the vehicle can intuitively understand the information even if the object models are 2D. For example, if there is a nearby vehicle meanderingly approaching, the driver of the vehicle can easily and intuitively understand how the nearby vehicle is behaving merely by seeing an object model indicating a meandering vehicle behind his/her own vehicle.
While the invention has been described in detail, the foregoing description is in all aspects illustrative and not restrictive. It is understood that numerous other modifications and variations can be devised without departing from the scope of the invention.