NAVIGATION SYSTEM
The present invention relates to a navigation system, and in particular to a navigation system for a mobile robotic device. The invention relates specifically, but not exclusively, to a navigation system for a domestic robotic device, for example an autonomous cleaner or lawn mower.
Existing domestic robots, for example autonomous cleaners and lawn mowers, are primitive and typically require a large number of basic sensors in order to operate autonomously. Even so, the operation of such domestic robots is simplistic. By way of example, conventional autonomous cleaners move in a random motion over an area to be cleaned. The motion of the cleaner is typically controlled by a random-motion algorithm stored within the device. In use, the cleaner will implement the random-motion algorithm in response to internal events (elapse of a random or predetermined time period, battery condition etc.) and external stimuli (for example, detection of obstacles such as walls and furniture using simple bump sensors, and detection of precipices, such as the top of a flight of stairs, using an infrared emitter and detector etc.).
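The random-motion behaviour described above can be sketched as a simple reactive control loop. The following fragment is illustrative only; the sensor names and wheel-speed commands are hypothetical and do not form part of any particular commercial device:

```python
import random

def random_motion_step(bump_left, bump_right, cliff_detected, rng=random):
    """One control-loop step of a hypothetical random-motion cleaner.

    Reacts to external stimuli (bump sensors, infrared precipice
    detector) by turning away; otherwise drives straight ahead.
    Returns a (left_wheel, right_wheel) speed command in [-1, 1].
    """
    if cliff_detected:
        return (-0.5, -0.5)                  # back away from a precipice
    if bump_left and bump_right:             # head-on obstacle: spin randomly
        return (-0.3, 0.3) if rng.random() < 0.5 else (0.3, -0.3)
    if bump_left:
        return (0.5, -0.2)                   # turn away to the right
    if bump_right:
        return (-0.2, 0.5)                   # turn away to the left
    return (1.0, 1.0)                        # no stimulus: drive forward
```

Because the cleaner keeps no map, each step depends only on the instantaneous sensor state, which is why coverage is achieved only statistically over an extended period.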
Despite their unsophisticated nature, autonomous cleaners utilising random motion can provide an acceptable cleaning coverage, given enough cleaning time. However, such operation is inefficient since the cleaner has no knowledge of where it has cleaned and therefore needs to operate for an extended period to achieve an acceptable coverage of the area to be cleaned.
The efficiency of an autonomous cleaner may be enhanced by incorporating a deterministic component to the otherwise random motion of the device. In deterministic cleaning, the autonomous cleaner follows a defined path that is calculated to completely cover the area to be cleaned while minimising redundant cleaning.
In order to implement deterministic cleaning, the autonomous cleaner must have a preprogrammed map of the area to be cleaned or create such a map by exploring the environment within which it is located. This prerequisite reduces the ease of use of the cleaner and requires the map to be updated to reflect changes in the environment. The autonomous cleaner must also maintain a knowledge of its current position and remember where it has cleaned, which in turn requires a sophisticated, accurate positioning and navigation system. A suitable positioning system might rely on marker beacons positioned around the space to be cleaned, a differential Global Positioning System (GPS), scanning laser ranging systems or other sophisticated systems.
By way of further example, a laser ranging system could be used in conjunction with simultaneous localisation and mapping (SLAM) software to construct a detailed map of the space to be cleaned.
Simultaneous localisation and mapping (SLAM) is a process of concurrently building a map of an environment based on stationary features or landmarks within the environment and using this map to obtain estimates of the location of a vehicle (an autonomous cleaner in this example). Simultaneous localisation and mapping is already used in the field of mobile robotics as a tool to enable fully autonomous navigation of a vehicle. In essence, the vehicle relies on its ability to extract useful navigation information from data returned by sensors mounted on the vehicle. Typical sensors might include a dead reckoning system (for example an odometry sensor or inertial measurement system) in combination with a laser rangefinder.
The vehicle starts at an unknown location with no a priori knowledge of landmark locations. From relative observations of landmarks, it simultaneously computes an estimate of vehicle location and an estimate of landmark locations. While continuing in motion, the vehicle builds a complete map of landmarks and uses these to provide continuous estimates of the vehicle location. By tracking the relative position between the vehicle and identifiable features in the environment, both the position of the vehicle and the position of the features can be estimated simultaneously. In the absence of external information about the vehicle's position, this algorithm presents an autonomous system with the tools necessary to navigate in unknown environments.
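The joint estimation of vehicle and landmark locations described above can be illustrated with a heavily simplified, one-dimensional Kalman-style update. The state vector, noise values and function name below are illustrative assumptions and stand in for a full SLAM implementation:

```python
import numpy as np

def slam_step(state, cov, motion, obs_landmark, obs_range, q=0.1, r=0.05):
    """One illustrative 1-D SLAM update (hypothetical, heavily simplified).

    state[0] is the vehicle position; state[1:] are landmark positions.
    A motion command moves the vehicle (inflating its variance by q),
    and a relative range observation of one landmark then jointly
    corrects the vehicle and landmark estimates, Kalman-filter style.
    """
    state = state.copy()
    cov = cov.copy()
    n = len(state)

    # Predict: dead reckoning moves the vehicle, growing its uncertainty.
    state[0] += motion
    cov[0, 0] += q

    # Observe: relative range to the chosen landmark.
    j = obs_landmark
    h = np.zeros(n)
    h[j] = 1.0
    h[0] = -1.0                        # measured range = landmark - vehicle
    innovation = obs_range - (state[j] - state[0])
    s = h @ cov @ h + r                # innovation variance
    k = cov @ h / s                    # Kalman gain couples all states
    state += k * innovation
    cov -= np.outer(k, h @ cov)        # vehicle/landmark now correlated
    return state, cov
```

Note how a single relative observation reduces uncertainty in both the vehicle state and the landmark state at once, which is the essence of the simultaneity in SLAM.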
The prospect of deploying a system that can build a map of its environment while simultaneously using that map to localise itself promises to allow robotic vehicles to operate autonomously in unknown environments. However, this is an expensive solution and the data processing overhead is high. Alternatively, a vision based system could be used in conjunction with SLAM software to form a navigation system for a deterministic cleaner. This technique, known as visual SLAM (VSLAM), uses passive sensing to provide a low power and dynamic localisation system. VSLAM uses a simple video camera as the input device and pattern processing algorithms to locate features in the images acquired by the video camera. The features are input to the SLAM algorithm which is then able to accurately compute the three-dimensional location of each feature and hence start to build a three-dimensional map as the cleaner moves around the space to be cleaned. However, to do this in sufficient detail requires a large number of feature points in the image. In common with conventional SLAM, the data processing load is high (proportional to the square of the number of features selected).
Accordingly, the abovementioned navigation systems have hitherto been considered to be prohibitively complex and expensive for domestic robotic devices.
It is an object of the invention to provide a navigation system which mitigates at least some of the disadvantages of the conventional navigation systems described above.
According to a first aspect of the present invention there is now proposed a navigation system comprising a primary mapping apparatus adapted to detect features within an environment and to create a summary map of the environment including an estimate of a point of current location within the environment; a secondary mapping apparatus adapted to provide a detailed three-dimensional map of the local environment in the vicinity of the point of current location; and a processor adapted to determine navigable points within the environment by combining information from the summary map and the detailed map.
Contrary to conventional navigation theory, the present navigation system utilises a summary map of the environment (a sparse map having few details therein) rather than a detailed map of the environment. Deliberately degrading the quality of the environmental map in this way is counter-intuitive, since conventional navigation systems typically aim to maximise environmental details in a map.
The present navigation system is predicated on the realisation that, for many applications, adequate navigational performance may be achieved using an approximate knowledge of the global environment in conjunction with a detailed map of the environment in the vicinity of the point of current location.
The present navigation system confers an unexpected advantage in terms of providing reliable navigation within the environment despite the paucity of detail within the environmental map.
In this respect, the summary map of the environment and the detailed three-dimensional map in the vicinity of the point of current location mutually support each other to such an extent that a new technical result is achieved.
The foregoing navigation system is advantageous in that the data processing load is reduced in comparison with conventional navigation systems having detailed maps of the environment. The hardware requirements of the navigation system (processor specification, memory etc.) are correspondingly reduced. In a preferred embodiment, the processor is configured to provide instructions to a motion control system so as to navigate from the point of current location to another navigable point within the environment.
Advantageously, the primary mapping apparatus has an optical sensor adapted to detect the features within the environment, and utilises a visual simultaneous localisation and mapping (VSLAM) process to create the summary map of the environment.
As an enhancement of the conventional SLAM process, visual simultaneous localisation and mapping (VSLAM) uses a passive optical sensor, for example a simple video camera, and dead reckoning to make visual measurements of an environment. VSLAM uses pattern processing algorithms to locate visual features in the images acquired by the video camera. The features are input to the SLAM algorithm which is then able to accurately compute the three-dimensional location of the visual features and hence start to build a three-dimensional map of the environment. Navigation within the environment is based on recognised visual features or landmarks on the VSLAM map.
Upon entering a new environment, the VSLAM-enabled navigation system starts creating a new map through exploration of the environment. By this process, the navigation system detects features in the environment, creates landmarks there-from, and corrects its position information as necessary using its VSLAM map.
The secondary mapping apparatus may comprise any suitable means of simply generating a localised three-dimensional map. For example, the secondary mapping apparatus may comprise a passive system, e.g. a stereoscopic imaging system. By way of further example, the secondary mapping apparatus may comprise an imaging apparatus having a structured light generator capable of creating a structured pattern of light within the environment. The imaging apparatus typically includes an imaging sensor, for example a camera, arranged to capture images of the environment including the structured pattern of light projected therein. The camera is capable of detecting the position relative thereto of a plurality of points in the pattern projected within the environment, for example using triangulation / trigonometry. The secondary mapping apparatus preferably includes a pattern processor arranged in use to determine from the detected position of a point in the pattern within the environment, the range to that point. The detected position and range information are then used to create a detailed three-dimensional map of the local environment in the vicinity of the point of current location.
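For the passive stereoscopic option mentioned above, depth recovery follows the standard pinhole stereo relation Z = fB/d. The sketch below assumes illustrative calibration values (focal length, baseline, principal point) rather than any particular hardware:

```python
def stereo_depth(f_px, baseline_m, disparity_px):
    """Depth of a matched point from a calibrated stereo pair.

    Uses the standard pinhole relation Z = f * B / d, where f is the
    focal length in pixels, B the camera baseline in metres and d the
    horizontal disparity of the matched feature in pixels.
    """
    if disparity_px <= 0:
        raise ValueError("point at infinity or bad match")
    return f_px * baseline_m / disparity_px

def backproject(u, v, disparity_px, f_px=500.0, baseline_m=0.1,
                cx=320.0, cy=240.0):
    """Back-project one matched pixel to a 3-D point in the camera frame.

    Calibration defaults (f_px, baseline_m, cx, cy) are illustrative.
    """
    z = stereo_depth(f_px, baseline_m, disparity_px)
    x = (u - cx) * z / f_px
    y = (v - cy) * z / f_px
    return (x, y, z)
```

Back-projecting every matched pixel in this way yields exactly the kind of local point cloud that the secondary mapping apparatus supplies to the navigation system.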
Conveniently, the imaging apparatus comprises at least one of a spot projector and a pattern projector adapted to project a geometric shape. Where the imaging apparatus comprises a spot projector, the spots may comprise intersections between continuous lines. In this case, the projector may project two sets of regularly spaced lines, the two sets preferably being substantially orthogonal. By way of further example, the projector may comprise at least one of a kaleidoscope and a holographic element through which light is transmitted into the environment.
Conventional SLAM navigation systems, if used with a secondary mapping means, would traditionally seek to produce an exhaustive environmental map by augmenting the summary information from the primary mapping apparatus with detailed information captured by the secondary mapping apparatus. The present navigation system overcomes this technical prejudice by maintaining a summary map in respect of the global environment and only seeking further detail where necessary, i.e. in the vicinity of the point of current location.
This may be more easily understood by considering the type of information contained respectively within the summary and detailed three-dimensional maps. By way of brief explanation, the summary VSLAM map merely comprises a collection of features or landmarks within the environment with associated vectors defining their relative positions with respect to the point of current location and with respect to each other. In this respect the summary map is an abstract representation of the environment since it contains little or no information regarding the environment between the sparse features or landmarks within the map.
In order to navigate within the environment, the navigation system infers information about the environment based on the relationship(s) between features or landmarks in the summary map. For example, a plurality of features lying within a common plane may be indicative of a solid planar feature within the environment, e.g. a floor, a wall etc. However, ambiguities remain in the summary map which give rise to uncertainties as to whether points within the environment are navigable. The information produced by the secondary mapping apparatus is a detailed three-dimensional representation of the environment local to the point of current location. Hence, the detailed three-dimensional map is complementary to the summary map and may be used to remove ambiguities in the summary map by confirming navigable points in the summary map.
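The inference of a solid planar feature from coplanar landmarks can be sketched as a least-squares plane fit. The tolerance value and function name below are illustrative assumptions:

```python
import numpy as np

def fit_plane(points, tol=0.02):
    """Test whether sparse map landmarks are consistent with one plane.

    Fits a least-squares plane through an (N, 3) array of landmark
    positions (centroid plus smallest singular vector of the spread)
    and returns (normal, centroid, is_planar), where is_planar is True
    if every point lies within `tol` metres of the fitted plane.
    """
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - centroid)
    normal = vt[-1]                        # direction of least variance
    dist = np.abs((pts - centroid) @ normal)
    return normal, centroid, bool(np.all(dist <= tol))
```

A set of landmarks that passes this test with a near-vertical normal might be hypothesised as a floor, and with a near-horizontal normal as a wall; the detailed local map is then used to confirm or reject the hypothesis.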
It should be noted that the information from the detailed three-dimensional map is not incorporated into the summary map; the information from the former is merely used to refine the latter in terms of identifying navigable and non-navigable points therein.
Preferably, the imaging sensor of the secondary mapping apparatus and the optical sensor of the primary mapping apparatus comprise a single common sensor. It is noteworthy that the level of detail held within the global environmental map remains low at all times, despite the use of optical sensor(s) for both primary and secondary mapping apparatuses and the availability of detailed environmental information from the latter.
Even more preferably, the common sensor is one of a video camera, a CMOS camera and a charge-coupled device (CCD).
In a preferred embodiment, the optical sensor may be arranged to have a field of view which includes an upward direction. In this configuration the optical sensor may be arranged, in use, to detect features disposed on a substantially planar surface within the environment, for example a portion of a ceiling or roof. In this case, the primary mapping apparatus is adapted to create from said detected features a summary map of the environment underlying said planar surface.
According to a second aspect of the present invention there is now proposed a vehicle having a navigation system according to the first aspect of the invention.
According to a third aspect of the present invention there is now proposed a mobile robotic device having a navigation system according to the first aspect of the invention.
In one embodiment, the mobile robotic device comprises any of a domestic robotic device, an industrial robotic device, a security robotic device, a floor cleaner, a vacuum cleaner, a lawn mower, a robotic entertainment device, a maintenance device, a surveying device, a fork lift truck, and a car.
The present navigation system represents a cost-effective way of delivering information about the immediate and general environment in which a robot, particularly a domestic mobile robot, operates to allow it to undertake navigation and therefore domestic and other tasks (e.g. cleaning, mowing, repair work, maintenance, surveillance, surveying tasks etc.) more quickly and effectively.
In the case of a robotic cleaner, the present navigation system enables deterministic cleaning of an area to be cleaned. In deterministic cleaning, the robotic cleaner follows a defined path that is calculated to completely cover the area to be cleaned while minimising redundant cleaning. When applied to a robotic cleaner, the present navigation system confers benefits in terms of reduced cost, reduction in the number of sensors required, operational simplicity, faster cleaning of a given area, cleaning of larger areas, lower power consumption (or, in the case of a vacuum cleaner, increased power for vacuuming), lower use of consumables, less noise and reduced wear. The secondary mapping apparatus ensures safe operation by helping to determine whether the robotic cleaner can move into a given space. This is particularly important within environments inhabited by people or pets, in which case the secondary mapping apparatus facilitates safe and intelligent movements around inhabitants within the environment.
Additionally, the secondary mapping apparatus provides a capability to detect precipices within the environment, a vital safety feature when used in an environment having different levels separated by stairs.
Hence, the secondary mapping apparatus enables the robotic cleaner to avoid precipices and stop in the vicinity of an object without colliding therewith. This capability confers a high degree of confidence when navigating within the environment, allowing the robotic cleaner to accelerate and decelerate gracefully as it moves within the environment.
Moreover, deterministic cleaning allows a robotic cleaner to have a shape, e.g. a flat or acutely shaped front with corners, which facilitates cleaning of edges and corners. In addition, the robotic cleaner may be arranged to have a substantially semi-circular rear portion. In contrast, conventional robotic cleaners which move in a random motion are typically circular so that they can extricate themselves from small spaces by merely rotating about their own axis.
It is estimated that the present navigation system enables the performance of a domestic robotic cleaner to be improved by a factor of approximately twelve over conventional devices (based on the following assessments: the robot cleans a given area six times faster than a cleaner employing random motion; the size of the battery is halved owing to the reduction in redundant cleaning). Alternatively, the time taken to clean a given area may be reduced by a factor of approximately twelve over conventional devices (based on the following assessments: the robot cleans a given area six times faster than a cleaner employing random motion; the velocity at which the cleaner moves within the environment can be doubled due to the high degree of confidence with which the cleaner navigates within the environment; the size of the battery is maintained). Hence, the time taken to clean a room may be reduced from approximately one hour using a conventional device to approximately five minutes using the present robotic cleaner.
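The factor of approximately twelve follows directly from combining the quoted assessments. As a hedged arithmetic sketch (the function name and parameters are illustrative, not part of the disclosure):

```python
def performance_factor(coverage_speedup, battery_ratio=1.0, velocity_ratio=1.0):
    """Combine the individual improvement ratios quoted in the text.

    A 6x coverage speed-up with the battery halved (battery_ratio=0.5)
    gives 6 / 0.5 = 12; alternatively, a 6x coverage speed-up with the
    traverse velocity doubled (velocity_ratio=2) gives 6 * 2 = 12.
    """
    return coverage_speedup * velocity_ratio / battery_ratio
```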
In a preferred embodiment, the mobile robotic device may be adapted to receive a detachable accessory. With the accessory fitted the robotic device may adopt a supplementary mode of operation.
For example, in the case of a robotic entertainment device (e.g. a robotic pet), the detachable accessory may be a lead or a leash. The robotic pet could then be safely taken for a walk with the lead attached thereto.
Preferably, the mobile robotic device comprises a vacuum cleaner adapted to receive a detachable accessory.
Conveniently, the mobile robotic device has a vacuum pressure variable between a minimum and a maximum, wherein maximum vacuum pressure may be automatically selected in the supplementary mode of operation. Hence, the robotic device may be configured to operate with maximum suction (maximum vacuum pressure) in the supplementary mode of operation. Advantageously, the accessory includes an accessory hose adapted to engage in fluid communication with a vacuum inlet on the vacuum cleaner.
In a preferred embodiment, the accessory hose is manoeuvrable by a user and the navigation system may be adapted to provide instructions to the motion control system so as to follow the user, while preferably concurrently avoiding obstacles and inhabitants within the environment.
In another embodiment, the navigation system is disabled in the supplementary mode of operation. In this embodiment, the accessory may include a handle. This facilitates manual cleaning of areas inaccessible to the robot when under autonomous control, for example stairs etc. Hence, the robotic vacuum cleaner of the invention can be a primary cleaner in a domestic dwelling, obviating an additional manual cleaner. Similarly, where the mobile robotic device comprises a lawnmower, the ability to manually mow areas of the lawn obviates an additional manual lawnmower.
According to a fourth aspect of the present invention there is now proposed a method of controlling a robotic device within an area to be traversed, the robotic device having a variable power requirement and a navigation system adapted to map features in an environment, the method comprising the steps of:
(i) in a first mode of operation, moving the robotic device in a substantially random motion within the area to be traversed whilst concurrently mapping the environment so as to create a summary map of the area to be traversed, wherein the robotic device is configured to use a minimum power consumption during said first mode of operation,
(ii) in a second mode of operation, moving the robotic device in at least one direction so as to map the environment in greater detail and to create a complete summary map of the area to be traversed, wherein the device is configured to use an increased power consumption during said second mode of operation,
(iii) in a third mode of operation, moving the robotic device in a deterministic motion so as to provide optimum traversing of the area to be traversed, wherein the device is configured to use increased power consumption during said third mode of operation.
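The three-phase method above might be sequenced as a simple state machine. The coverage thresholds, enumeration and function names below are illustrative assumptions rather than part of the disclosed method:

```python
from enum import Enum

class Mode(Enum):
    EXPLORE = 1        # random motion, minimum power, start the summary map
    REFINE = 2         # directed motion, complete the summary map
    DETERMINISTIC = 3  # planned coverage path, full working power

def next_mode(mode, summary_coverage, nav_failure):
    """Hypothetical mode sequencing for the three-phase method.

    Progresses 1 -> 2 -> 3 as the summary map approaches completion
    (summary_coverage is a fraction in [0, 1]), and reverts to the
    first mode if the navigation system reports a failure.
    """
    if nav_failure:
        return Mode.EXPLORE            # revert to the first mode on failure
    if mode is Mode.EXPLORE and summary_coverage >= 0.5:
        return Mode.REFINE             # enough of a map to move deliberately
    if mode is Mode.REFINE and summary_coverage >= 1.0:
        return Mode.DETERMINISTIC      # complete summary map: planned coverage
    return mode
```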
The secondary mapping apparatus as hereinbefore described with respect to the first aspect of the invention ensures safe operation of the robotic device during all operating modes, by helping to determine whether the robotic device can move into a given space. As will be readily appreciated, this is of particular importance during the first and second modes of operation, while the summary map of the area to be traversed is being created and refined. Hence, the robotic device is able to avoid precipices, to stop in the vicinity of objects within said area without colliding with them, and to move in an intelligent manner around inhabitants within the area to be traversed.
In a preferred embodiment, the device may be configured only to use sufficient power to traverse the area and map the environment during said first mode of operation.
In use, the robotic device may operate in the first, second and third modes of operation in numerical sequence.
Alternatively, or in addition, the phase within which the robotic device operates is selected in response to a status condition.
In a preferred embodiment, the status condition is derived from a plurality of variables, each variable having a changeable weighting factor applied thereto so as to optimise the behaviour of the robotic device.
Advantageously, the variables are selected from: exploration of the area to be traversed, operation of the device, localisation within the environment, efficiency of operation and operating time.
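The weighted status condition might be computed as a plain weighted sum over the listed variables. The variable names and weight values below are illustrative:

```python
def status_condition(variables, weights):
    """Weighted status condition from the listed behaviour variables.

    `variables` maps names (e.g. 'exploration', 'localisation',
    'efficiency', 'operating_time') to normalised scores in [0, 1];
    `weights` maps the same names to changeable weighting factors.
    Returns the weighted sum used to select the operating phase.
    """
    return sum(weights[name] * value for name, value in variables.items())
```

Because the weighting factors are changeable, the same mechanism can bias the device towards exploration early in a session and towards efficient operation later on.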
In another preferred embodiment, the robotic device reverts to the first mode of operation in the event of a failure in the navigation system, or alternatively may switch itself off. The device may be a cleaner or mower or any other device mentioned above and the power consumption may relate to suction power, for example, for a cleaner or to brush action or dispensation of cleaning fluid. In the case of a vacuum cleaner, minimum suction or no suction may be used when operating in the first mode of operation (in particular where the device is configured only to use sufficient power to traverse the area and map the environment during said first mode of operation), and maximum suction may be used when operating in at least one of the second or third operating modes. Similarly, the power consumption may relate to the speed at which the device traverses the area. For a mower, power consumption may relate to operation of a cutting blade, strimmer or the like. For maintenance and repair and survey equipment power consumption may relate to operation of features on the device for carrying out such operations.
According to a further aspect of the present invention, there is now proposed the use of a visual simultaneous localisation and mapping (VSLAM) process to create a summary map of an environment, and a localised three-dimensional imaging technique to generate a local map of the environment, for autonomous navigation.
The invention will now be described, by way of example only, with reference to the accompanying drawings, in which:
Figure 1 shows a block diagram of a navigation system according to one embodiment of the present invention.
Figure 2 shows a schematic diagram of a mobile robotic device incorporating the navigation system shown in figure 1.
Figure 3 illustrates the architecture of a robotic vacuum cleaner incorporating the navigation system according to an embodiment of the present invention.
Figure 4 shows an example of a map of an area to be cleaned created by the robotic vacuum cleaner of figure 3.
Referring now to the drawings wherein like reference numerals identify corresponding or similar elements throughout the several views, figure 1 shows a block diagram of a navigation system 2 according to one embodiment of the present invention. The navigation system 2 incorporates a primary mapping apparatus comprising a visual simultaneous localisation and mapping (VSLAM) apparatus and a secondary mapping apparatus consisting of an accurate three-dimensional mapping apparatus. The VSLAM apparatus and the three-dimensional mapping apparatus work in parallel and have complementary functions within the navigation system 2.
The VSLAM apparatus comprises a camera 4 arranged to capture images of the environment. The images are passed from the camera 4 to a feature detector 6 which processes the image to detect naturally occurring features within the environment, e.g. edges etc. The VSLAM apparatus uses a sophisticated vision algorithm 8 to build a summary map of the environment consisting of a set of unique "landmarks" derived from the features observed in the environment. The VSLAM summary map typically comprises a vector map of landmarks within the environment.
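Such a vector map of landmarks might be represented as follows. This is an illustrative sketch only, with hypothetical class and field names, and stands apart from any particular VSLAM implementation:

```python
import numpy as np

class SummaryMap:
    """Sparse vector map of landmarks, relative to the current location.

    Each landmark stores only a position vector; no dense environmental
    detail is kept, in line with the summary (sparse) map described
    in the text.
    """
    def __init__(self):
        self.landmarks = {}        # landmark id -> position in map frame
        self.pose = np.zeros(2)    # estimated point of current location

    def add_landmark(self, lid, relative_vector):
        """Record a landmark observed at `relative_vector` from the pose."""
        self.landmarks[lid] = self.pose + np.asarray(relative_vector, float)

    def vector_to(self, lid):
        """Relative vector from the current location to a known landmark."""
        return self.landmarks[lid] - self.pose
```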
The skilled person would be aware of a variety of VSLAM apparatuses. However, it should be noted that conventional VSLAM apparatuses are aimed at high accuracy applications, whereas the present VSLAM apparatus is configured to produce a summary map of the environment (a sparse map having few landmarks therein) rather than a detailed map of the environment. This is based on the realisation that, for many applications, adequate navigational performance may be achieved using an approximate knowledge of the global environment in conjunction with a separate detailed map of the local environment in order to assess whether the immediate environment is navigable, for example is free from encumbrances, obstacles, inhabitants etc.
It is noteworthy that, although the VSLAM apparatus produces a summary map of the environment having few landmarks, the location of each landmark within the VSLAM map is accurately known.
The accurate three-dimensional mapping apparatus provides the requisite detailed map of the local environment. The three-dimensional mapping apparatus comprises three-dimensional sensors 10 arranged to capture detailed information about the local environment. Electromechanical sensors and / or optical sensors are used. Typically, the sensors 10 could comprise scanning laser rangefinders. Preferably the sensors 10 are low cost, simple and eye safe.
The accurate three-dimensional mapping apparatus incorporates a processor 12 for processing the raw information from the three-dimensional sensor 10. The processor 12 provides a detailed map of the local environment in the vicinity of a point of current location within the environment. The detailed map typically comprises a collection of points within the environment measured with respect to the three-dimensional sensor 10 (also referred to hereinafter as point cloud information).
The summary map produced by the VSLAM apparatus and the detailed map from the accurate three-dimensional mapping apparatus are passed to a navigation controller 14 which provides instructions to a motion control system 16 in order to control movement of a mobile platform, for example a robot, within the mapped environment.
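The way the navigation controller 14 might combine the two maps to confirm a navigable point can be sketched as follows. The thresholds, names and point-cloud convention (z up, floor at z = 0) are illustrative assumptions:

```python
import numpy as np

def navigable(candidate, summary_clear, point_cloud,
              radius=0.3, floor_z=0.0, clearance=0.05):
    """Combine summary-map and detailed-map evidence for one point.

    A candidate (x, y) point is treated as navigable only if the sparse
    summary map suggests it is clear AND the local point cloud from the
    secondary mapping apparatus shows neither an obstacle above the
    floor nor a precipice (no floor returns) within `radius` of it.
    """
    pts = np.asarray(point_cloud, dtype=float)
    near = pts[np.linalg.norm(pts[:, :2] - np.asarray(candidate), axis=1) < radius]
    if not summary_clear:
        return False
    if near.size == 0:
        return False                     # no floor return: possible precipice
    if np.any(near[:, 2] > floor_z + clearance):
        return False                     # something sticking up: obstacle
    return True
```

Note that the point cloud is consulted and then discarded; only the navigable/non-navigable verdict refines the summary map, consistent with the detailed map never being incorporated into it.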
When used in a mobile robotics application, the navigation system 2 uses recognised landmarks to estimate the position of the robot within the environment. Furthermore, the VSLAM apparatus updates and refines the summary map as the robot revisits mapped areas, allowing it to adapt to changes in the environment. In the case of a domestic mobile robot, such changes in the environment may arise due to rearrangement of furniture or objects within a room etc.
Referring to figure 2, a schematic block diagram of a mobile robotic device 20 incorporating the navigation system of figure 1 is shown. The robotic device 20 incorporates the elements of the navigation system described above. However, in this particular embodiment, the separate camera 4 and three-dimensional sensors 10 are replaced by a single common sensor in the form of a camera 22. The camera is a conventional video camera, typically a CMOS camera or, alternatively, a charge-coupled device (CCD).
The camera is mounted on the robotic device 20 so as to capture images of the environment in front of the robotic device within the field of view 24 of the camera 22. The robotic device 20 also incorporates a structured light generator comprising a spot projector 26, operating at visible or infrared wavelengths, to provide information about the local environment to the accurate three-dimensional mapping apparatus. The principle of operation of the accurate three-dimensional mapping apparatus is as follows. During use, the spot projector 26 projects a plurality of spots into the environment. The camera 22 is arranged to capture images of the environment including the array of spots projected therein. The processor 12 analyses the images from the camera 22 and determines the position with respect to the camera of each projected spot within the environment, for example using triangulation / trigonometry. The processor 12 preferably includes a pattern processor adapted to determine from the detected position of a spot in the environment, the range to that spot. The detected position and range information are then used to create a detailed three-dimensional map of the environment local to the robotic device.
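The triangulation of a projected spot can be sketched with the sine rule, given the camera-projector baseline and the angles of the two rays to the spot. The function name and angle convention below are illustrative:

```python
import math

def spot_range(baseline_m, cam_angle_rad, proj_angle_rad):
    """Triangulated range from the camera to one projected spot.

    The camera and spot projector are separated by `baseline_m`; each
    angle is measured between the baseline and the respective ray to
    the spot. The sine rule then gives the camera-to-spot range:
    r = B * sin(proj_angle) / sin(cam_angle + proj_angle).
    """
    third = cam_angle_rad + proj_angle_rad
    if not 0.0 < third < math.pi:
        raise ValueError("rays do not intersect in front of the device")
    return baseline_m * math.sin(proj_angle_rad) / math.sin(third)
```

Repeating this for every detected spot yields the detected position and range information from which the detailed three-dimensional local map is assembled.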
In one embodiment, the spot projector 26 is adapted to project a two dimensional array of spots into the environment. A suitable spot projector is described in PCT patent application publication WO 2004/044523 the content of which is hereby incorporated by reference thereto.
By way of a brief explanation, the spot projector 26 comprises a light source arranged to illuminate part of an input face of a light guide, the light guide comprising a tube having substantially reflective sides and being arranged together with projection optics so as to project an array of distinct images of the light source into the environment. The light guide in effect operates as a kaleidoscope. Light from the source is reflected from the sides of the tube and can undergo a number of reflection paths within the tube. The result is that multiple images of the light source are produced and projected into the environment. Thus the environment is illuminated with an array of images of the light source. Where the source is a simple light emitting diode, the scene is therefore illuminated with an array of spots of light.
The light guide kaleidoscope gives very good image replication characteristics and projects images of the input face of the light guide over a wide angle, i.e. a large number of spots are projected in all directions. Further, the kaleidoscope produces a large depth of field and so delivers a large operating window.
The light guide comprises a tube with substantially reflective walls and a constant cross section which is conveniently a regular polygon. Having a regular cross section means that the array of images of the light source will also be regular, which is advantageous for ensuring the whole environment is covered and eases processing. A square section tube is most preferred. Typically, the light guide has a cross sectional area in the range of a few square millimetres to a few tens of square millimetres; for instance, the cross sectional area may be in the range of 1-50 mm². As mentioned, the light guide preferably has a regular cross section with a longest dimension of a few millimetres, say 1-5 mm. One embodiment as mentioned is a square section tube having a side length of 2-3 mm.
The light guide may have a length of a few tens of millimetres, for instance between 10 and 70 mm. Such light guides can generate a grid of spots over an angle of 50-100 degrees (typically about twice the total internal angle within the light guide).
The tube may comprise a hollow tube having reflective internal surfaces, i.e. mirrored internal walls. Alternatively, the tube is fabricated from a solid material and arranged such that a substantial amount of light incident at an interface between the material of the tube and the surrounding material undergoes total internal reflection. The tube material is either coated with a coating of suitable refractive index or designed to operate in air, in which case the refractive index of the light guide material should be such that total internal reflection occurs at the material-air interface.
Using a tube like this as a light guide results in multiple images of the light source being generated, which can be projected into the environment to form the array of spots. The light guide is easy to manufacture and assemble and couples the majority of the light from the source to the scene; thus low power sources such as light emitting diodes can be used. As the exit aperture can be small, the apparatus also has a large depth of field, which makes it useful for ranging applications that require spots to be projected over a wide range of distances.
Either individual light sources are used close to the input face of the light guide to illuminate just part of the input face, or one or more light sources are used to illuminate the input face of the light guide through a mask. Using a mask with a transmissive portion for passing light to a part of the light guide can be easier than using individual light sources. Accurate alignment of the mask at the input face of the light guide is required, but this may be easier than accurately aligning an LED or LED array.
Where a mask is used the illumination means comprises a homogeniser located between the light source and the mask so as to ensure that the mask is evenly illuminated. The light source may therefore be any light source giving an acceptable level of brightness and does not need accurate alignment. Alternatively an LED with oversized dimensions is used to relax tolerances in manufacture/alignment.
The projection optics comprise a projection lens. The projection lens is located adjacent the output face of the light guide. In some embodiments where the light guide is solid the lens may be integral to the light guide, i.e. the tube may be shaped at the output face to form a lens.
All beams of light projected by the spot projector 26 pass through the end of the light guide and can be thought of as originating from the point at the centre of the end face of the light guide. The projection optics can then comprise a hemispherical lens and, if the centre of the hemisphere coincides with the centre of the light guide output face, the apparent origin of the beams remains at the same point, i.e. each projected image has a common projection origin. In this arrangement the projector does not have an axis as such, as it can be thought of as a source of beams radiating across a wide angle. What matters for the imaging apparatus therefore is the geometrical relationship between the point of origin of the beams and the principal point of the lens of the camera 22.
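The geometrical relationship between the common beam origin and the camera's principal point allows the range to a projected spot to be recovered by triangulation. The following sketch illustrates the principle in the plane; the formula is a standard triangulation result and the function name and values are illustrative assumptions, not taken from the specification.

```python
import math

def spot_range(baseline, alpha, beta):
    """Planar triangulation sketch: distance from the camera's principal
    point to a projected spot, given the baseline between the beam origin
    and the camera, the known beam angle alpha and the observed ray angle
    beta (both measured from the baseline). By the law of sines, the side
    opposite alpha (camera-to-spot) is baseline * sin(alpha) / sin(gamma),
    where gamma is the angle subtended at the spot."""
    gamma = math.pi - alpha - beta  # third angle of the triangle
    return baseline * math.sin(alpha) / math.sin(gamma)

# Example: 0.1 m baseline, both rays at 60 degrees to the baseline,
# giving an equilateral triangle, so the range equals the baseline.
r = spot_range(0.1, math.radians(60), math.radians(60))
```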
The spot projector 26 is controlled by a master clock 32 which provides synchronisation between the monochromatic camera 22, the spot projector 26 and a de-multiplexer 30 connected to the output of the camera 22.
During operation, the output from the camera 22 is switched between the primary mapping apparatus (the VSLAM apparatus denoted by numerals 6 and 8) and a secondary mapping apparatus (the accurate three-dimensional mapping apparatus denoted by numerals 12 and 14). The clock 32 controls the spot projector 26 and the de-multiplexer 30 such that a pattern of spots is projected into the local environment at the same time as the output from the camera 22 is routed to the accurate three-dimensional mapping apparatus. The clock 32 and the de-multiplexer 30 are configured such that images (frames) produced by the camera 22 are sent alternately to the VSLAM apparatus 6, 8 and the accurate three-dimensional mapping apparatus 12, 14.
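The clock-driven alternation described above may be sketched as follows. The class and method names are hypothetical; the specification does not prescribe an implementation, only that the projector fires on exactly those frames routed to the three-dimensional mapping apparatus.

```python
# Sketch of the frame routing performed by the de-multiplexer 30 under
# control of the master clock 32 (hypothetical names and structure).

class FrameDemultiplexer:
    """Routes alternate camera frames to the VSLAM apparatus (6, 8) and
    the three-dimensional mapping apparatus (12, 14), reporting whether
    the spot projector 26 should be on for each frame."""

    def __init__(self):
        self.tick = 0

    def route(self, frame):
        # Even ticks: projector off, frame goes to the VSLAM apparatus.
        # Odd ticks: projector on, frame goes to 3-D mapping.
        projector_on = (self.tick % 2 == 1)
        destination = "3d_mapping" if projector_on else "vslam"
        self.tick += 1
        return projector_on, destination

demux = FrameDemultiplexer()
routes = [demux.route(frame=None) for _ in range(4)]
# Alternates: VSLAM (projector off), 3-D mapping (projector on), ...
```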
In order to move autonomously within the environment, the robotic device 20 includes a motion control system 16 comprising a driving unit having wheels and at least one motor arranged to provide a motive force to the wheels. The motion control system 16 provides control signals to the motor(s) in response to instructions received from the navigation controller 14 so as to move the robotic device 20 within the environment.
To constrain the computational power required, it is helpful if the VSLAM algorithm 8 has some knowledge of the movement of the robotic vehicle 20. Accordingly, information about the robot's movement, either from the use of a stepper motor or a wheel rotation encoder, is fed back to the VSLAM algorithm 8.
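The wheel-derived movement information fed back to the VSLAM algorithm 8 can be illustrated with a standard differential-drive dead-reckoning update. This is a textbook approximation, not a formula given in the specification, and the parameter values are assumed for illustration.

```python
import math

def update_pose(x, y, theta, d_left, d_right, wheel_base):
    """Dead-reckoning pose update for a differential-drive robot from the
    distances travelled by the left and right wheels (e.g. derived from
    wheel rotation encoder counts or stepper motor steps). Uses the usual
    midpoint approximation for the heading during the motion."""
    d_centre = (d_left + d_right) / 2.0          # distance of robot centre
    d_theta = (d_right - d_left) / wheel_base    # change in heading (rad)
    x += d_centre * math.cos(theta + d_theta / 2.0)
    y += d_centre * math.sin(theta + d_theta / 2.0)
    theta += d_theta
    return x, y, theta

# Straight-line move: both wheels travel 0.5 m, heading unchanged.
pose = update_pose(0.0, 0.0, 0.0, 0.5, 0.5, wheel_base=0.3)
```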
When the mobile robotic device is used indoors, the field of view 24 of the camera optionally extends to image a portion of the ceiling or roof of the room or internal space within which the robot is located. In the event that the field of view 24 of the forward looking camera is insufficient, a separate camera may be used to capture images of the ceiling or roof.
Where the ceiling or roof is visible, the VSLAM apparatus maps it and indirectly determines the floor plan which is directly below. This hitherto unknown technique provides the benefit that individual rooms can more easily be defined, for example since door openings do not normally extend to the ceiling. Also, as the ceiling of a room is typically large, plain and white it may be an easy object to see even with a low performance camera. In use, doors can now be open or shut, as the user desires, with the robot still able to map and navigate one room before knowingly moving on to the next one.
Using a ceiling to create a VSLAM map of a room overcomes other disadvantages; for example, when trying to map a room from a video image taken at floor level, obstacles (chairs, tables, etc.) obstruct a large portion of the view of the walls, making it hard to immediately generate a map of room boundaries. Also, from floor level it may not be obvious to the navigation system whether a gap is a doorway or merely a space between two pieces of furniture. Hence it would not be clear where the room boundaries were if the doors were left open. This becomes important in the case of a robotic cleaner, which should logically clean rooms one at a time and not wander accidentally out of the room before it has finished cleaning it. The foregoing obviates the use of light beam units which are traditionally placed across a doorway in the prior art and which trigger a sensor on the robot to stop it leaving the room.
Referring to Figure 3, the architecture of a robotic vacuum cleaner incorporating the navigation system according to an embodiment of the present invention is illustrated.
The robotic vacuum cleaner architecture is based broadly on that shown in figure 2 for a generic mobile robotic device. In addition, the present robotic vacuum cleaner comprises a vacuum unit (not shown in figure 3) installed on a main body of the cleaner. The vacuum unit is arranged to collect particles from an underlying floor surface by drawing in air. The vacuum unit can be of known construction, for example including a vacuum motor and a vacuum chamber for collecting particles from the air which is drawn in through a vacuum hole or pipe positioned adjacent the floor surface.
Optionally, the vacuum unit includes a supplementary vacuum inlet adapted to engage in fluid communication with an accessory, for example an external accessory hose.
Optionally, the suction unit has a dirt sensor 56 adapted to provide information about the level and rate of uptake of dirt from the floor, thereby giving a measure of how dirty the area of floor currently being cleaned is.
With regard to the navigation system used in the robotic vacuum cleaner according to the present embodiment, the VSLAM algorithm 8 is configured to extract features from the environment using mechanisms such as scale-invariant feature transforms (SIFTs) and Harris corners. These mechanisms can be extended to make use of contextual information, such as plane-constrained features from ceilings and walls, to reduce the computational complexity (and hence reduce the data processing load) and potentially increase reliability.
When instructed to commence cleaning, the robotic vacuum cleaner initially adopts a first mode of operation, moving in a substantially random motion within the area to be cleaned whilst concurrently mapping the environment and creating a summary VSLAM map of the area to be cleaned. The robotic vacuum cleaner is configured to use a minimum vacuum pressure (minimum suction) during said first mode of operation. Obstacles within the area to be cleaned are detected by the three dimensional mapping apparatus 12 during this first mode of operation and consequently avoided.
The foregoing un-delayed initialisation technique gives a quick, robust and efficient three-dimensional estimate of the environment from the two-dimensional images produced by the camera 22 while also allowing fast low memory filter initialisation. Optionally, odometry sensing (for example from wheel rotation encoders etc.) is used to reduce the computational complexity of the VSLAM system.
The VSLAM system 8 produces an estimate of the position of the robot within the environment with associated confidences, and a summary map of spatially located features within the environment. The position estimate is used to fuse sensor information into the internal map 44, to aid trajectory following, and, using information theory, to provide trajectory constraints on the planning algorithm 46 to maximise map and localisation quality whilst the vacuum cleaner is cleaning.
Optionally, an optical flow system 40 uses passive sensing to provide an estimate of the relative distances of objects moving relative to the camera 22. The optical flow system 40 monitors the movement of objects or texture within the images from the camera 22 that results from the camera's movement within the environment, and derives motion cues therefrom. The optical flow system 40 uses these motion cues to provide an estimate of the distance to an object within the environment based on the rate at which the object moves in the images produced by the camera. By way of a simple example, objects close to the camera will appear to move within the images at a higher rate than objects distant therefrom. This relative information is used in conjunction with the VSLAM summary map by a position registration system 42 to produce a more accurate, absolute estimate of the position and orientation of surfaces within the sensor field of view.
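The relationship exploited by the optical flow system 40, namely that nearby objects produce faster image motion, can be sketched for the simplified case of sideways camera translation under a pinhole model. The function name and all values are illustrative assumptions, not parameters given in the specification.

```python
def depth_from_flow(focal_px, camera_speed, flow_px_per_s):
    """For a camera translating parallel to its image plane, the image
    flow of a static point is approximately focal * speed / depth under
    a pinhole model, so depth can be recovered as focal * speed / flow."""
    return focal_px * camera_speed / flow_px_per_s

# A nearby object (20 px/s flow) versus a distant one (2 px/s flow),
# for an assumed 500 px focal length and 0.2 m/s camera speed.
near = depth_from_flow(500.0, 0.2, 20.0)  # 5 m away
far = depth_from_flow(500.0, 0.2, 2.0)    # 50 m away
```

The tenfold difference in flow rate maps directly to a tenfold difference in estimated depth, which is the motion cue described above.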
An internal mapping system 44 provides an estimate of the state of the environment within which the robotic vacuum cleaner is operating; it is updated by fusing point cloud information from the three dimensional mapping apparatus 12 and the optical flow system 40 with contextual information from the VSLAM summary map. In this respect, the internal mapping system 44 creates an internal map of the environment. This set of sensor information provides a variable accuracy estimate of conditions within the environment dependent on the locality of the robot; this allows high accuracy trajectory control at short range, and room size estimation.
As pointed out previously, the information from the detailed three dimensional mapping apparatus 12 is not incorporated into the VSLAM summary map; rather, the information from the detailed three dimensional mapping apparatus is used in the internal map 44 to identify navigable and non-navigable points within the VSLAM summary map.
The foregoing confers additional benefits in terms of automatic timing and power control. For example, the robotic vacuum cleaner will rapidly know the size of the room within which it is operating (and the location of any obstacles within the room) and will have a navigation strategy to cover the floor space. Accordingly, it can calculate how long it will take to clean a particular room. So, instead of moving randomly about the room until the power runs out, a robotic vacuum cleaner having the present navigation system can display an estimate of how many minutes are left before it finishes cleaning the room.
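The time estimate described above amounts to dividing the remaining un-cleaned area by the cleaner's effective coverage rate; a minimal sketch follows, with all parameter names and values assumed for illustration only.

```python
def estimated_minutes_remaining(uncleaned_area_m2, nozzle_width_m,
                                speed_m_per_s, overlap=0.1):
    """Rough cleaning-time estimate: remaining un-cleaned area divided
    by the effective coverage rate of the nozzle, allowing for some
    overlap between adjacent passes. Illustrative assumption, not a
    formula given in the specification."""
    effective_width = nozzle_width_m * (1.0 - overlap)
    coverage_rate = effective_width * speed_m_per_s  # m^2 per second
    return uncleaned_area_m2 / coverage_rate / 60.0

# 12 m^2 left, 0.25 m nozzle with 10% pass overlap, 0.3 m/s speed.
minutes = estimated_minutes_remaining(12.0, 0.25, 0.3)
```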
Also, as the robotic vacuum cleaner knows how much power it has in its battery, it can optimise the use of that power to clean a given area. For example, if it is in a small room, then rather than finish the room at normal suction and still have some capacity left in its battery, it could automatically boost the suction power so it completes the job in the same time but cleans better. It can then fully recharge its battery, thereby avoiding any memory effect which could reduce battery capacity.
The robotic cleaner includes a utility calculation system 48 which uses information relating to current and predicted states of the robotic vacuum cleaner and the environment within which it is operating, together with a calculated optimal strategy, to produce an estimate of current and future system utility costs. The use of system utility costs provides a means by which the robotic vacuum cleaner can assess and prioritise a set of available actions, and select an appropriate course of action therefrom.
The above-mentioned utility costs are calculated for the variables exploration, cleaning, localisation, efficiency, and cleaning time. These utility costs are then combined as per Equation 1 below to determine the appropriate set of actions (e.g. route) to take next. The use of a utility based selection scheme allows the behaviour of the robotic vacuum cleaner to be optimised by switching between exploration at system start-up and cleaning once localised by altering the performance weights. The utility based selection scheme also enables fine control of the behaviour within those strategies to be automatically controlled while the robot is operating.
System Utility Cost = a^x · b^y · c^z (Equation 1)
(where a, b, c are individual utility costs and x, y, z are relative performance weights)
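Equation 1 may be sketched as follows, generalised to any number of cost/weight pairs. The example costs and weights are illustrative assumptions; the specification does not give numerical values.

```python
def system_utility_cost(costs, weights):
    """Equation 1: the product of the individual utility costs, each
    raised to its relative performance weight. Raising a weight
    amplifies the influence of the corresponding cost, which is how the
    scheme shifts behaviour between exploration and cleaning."""
    result = 1.0
    for cost, weight in zip(costs, weights):
        result *= cost ** weight
    return result

# Costs for (exploration, cleaning, localisation); at start-up the
# exploration weight is raised, once localised the cleaning weight is.
startup_cost = system_utility_cost([0.9, 0.2, 0.5], [2.0, 0.5, 1.0])
localised_cost = system_utility_cost([0.9, 0.2, 0.5], [0.5, 2.0, 1.0])
```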
The trajectory controller 16 (also referred to hereinbefore as the motion control system) is enabled by a robot localisation estimate provided by the VSLAM system 8; and provides local motion control of the robotic cleaner for negotiation of obstacles and emergency collision avoidance; this provides a second level of avoidance of all detectable obstacles. In the interests of clarity, the function of the navigation controller 14 referred to previously with respect to figures 1 and 2 is performed collectively by the internal mapping system 44, the global planner 46, the utility calculator 48, and the command fusion system 50.
Figure 4 shows an example of a map of an area to be cleaned created by the robotic vacuum cleaner illustrated in figure 3.
The information contained within the internal map includes cleaned, un-cleaned, cleanable, un-cleanable, unknown and un-passable areas, cleaning and exploration frontiers, and optionally dirt level. The dirt-level information from the optional dirt sensor 56 is used during a single cleaning cycle to optimise the intensity or repetition of cleaning a particular area, and over multiple cycles to predict both when and where the floor needs cleaning.
A detailed knowledge of the area to be cleaned and delineation of the space into separate rooms enables the user interface to comprise merely two buttons; one button labelled "clean this room", and the other one "clean the apartment" / "clean the floor", i.e. all rooms on this floor.

The navigation system according to the present invention has been described herein primarily in terms of its applications to domestic robotic devices, in particular with reference to an autonomous domestic vacuum cleaner. However, the present navigation system is not restricted to such applications and the skilled person will readily appreciate alternative applications of the present navigation system. By way of further example, a robotic device incorporating the present navigation system could be programmed to act as a guard, searching rooms, looking for activity out of hours, etc. Alternatively, such a robotic device could be used to move material around a warehouse, if connected to a trolley or forklift, for example. Additional domestic robotic applications include infotainment / entertainment, robotic pets, etc. Further uses envisaged for the navigation system of the present invention are for robots used for painting work, for repair and maintenance work, for surveying, e.g. topology, and for flaw and crack detection in difficult-to-access areas.
In view of the foregoing description it will be evident to a person skilled in the art that various modifications may be made within the scope of the invention.
The scope of the present disclosure includes any novel feature or combination of features disclosed herein either explicitly or implicitly, or any generalisation thereof, irrespective of whether or not it relates to the claimed invention or mitigates any or all of the problems addressed by the present invention. The applicant hereby gives notice that new claims may be formulated to such features during the prosecution of this application or of any further application derived therefrom. In particular, with reference to the appended claims, features from dependent claims may be combined with those of the independent claims, and features from respective independent claims may be combined in any appropriate manner and not merely in the specific combinations enumerated in the claims.