CROSS REFERENCE TO RELATED APPLICATIONS
This nonprovisional application claims a benefit of, and a priority to, Provisional Application No. 62/972,563, entitled “Self-actuated Autonomous Vacuum for Cleaning Various Mess Types,” which was filed on Feb. 10, 2020, and Provisional Application No. 63/121,842, entitled “Self-Actuated Autonomous Vacuum for Cleaning Various Mess Types,” which was filed on Dec. 4, 2020, the contents of each of which are incorporated by reference herein.
This application is related to U.S. application Ser. No. ______ (Atty. Docket No. 34832-46052/US), titled “Self-Actuated Cleaning Head for an Autonomous Vacuum,” which was filed on an even date herewith and incorporated herein by reference in its entirety.
This application is related to U.S. application Ser. No. ______ (Atty. Docket No. 34832-48226/US), titled “Mapping an Environment around an Autonomous Vacuum,” which was filed on an even date herewith and incorporated herein by reference in its entirety.
This application is related to U.S. application Ser. No. ______ (Atty. Docket No. 34832-48228/US), titled “Waste Bag with Absorbent Dispersion Sachet,” which was filed on an even date herewith and incorporated herein by reference in its entirety.
TECHNICAL FIELD
This disclosure relates to autonomous cleaning systems. More particularly, this disclosure describes an autonomous cleaning system for identifying and automatically cleaning various surface and mess types using automated cleaning structures and components.
BACKGROUND
Conventional autonomous floor cleaning systems are limited in their capabilities and, as a result, provide only rudimentary cleaning solutions. Without a plurality of sensors and better algorithms, these systems are unable to adapt to efficiently clean a variety of messes with optimal mobility and require manual adjustment to complete cleaning tasks. For example, conventional autonomous floor cleaning systems use cleaning heads to improve cleaning efficiency by agitating and loosening dirt, dust, and debris. If the cleaning head of a vacuum or sweeper is too low, the autonomous floor cleaning system may be unable to move over an obstacle or may damage the floor, and if the cleaning head is too high, the autonomous floor cleaning system may miss some of the mess. Even if a user manually sets the cleaning head at an optimal height, mobility of the cleaning head within the environment without getting stuck may be sacrificed for cleaning efficacy, which may still be suboptimal for the variety of surface types and messes in the environment.
Aside from shortcomings as a vacuum cleaning system, conventional autonomous floor cleaning systems also have challenges with cleaning stains on hard surface flooring. A conventional floor cleaning system may include a mop roller for cleaning the floor. While light stains may be relatively easy to clean and may be removed in one continuous pass, a tough stain dried onto a surface might require multiple passes of the autonomous floor cleaning system to remove. Further, conventional autonomous floor cleaning systems are unable to inspect whether a stain has been cleaned or whether another pass is required.
For some hard surface floorings, an autonomous floor cleaning system with a mop roller may need to apply pressure with the mop roller to remove a tough stain, and when pressure is applied to a microfiber cloth of the mop roller, the microfiber cloth may be unable to retain water as effectively as without pressure. For instance, the microfiber cloth contains voids that fill with water, and when pressure is applied to the microfiber cloth, the voids shrink in size, limiting the microfiber cloth's ability to capture and retain water.
Furthermore, another problem with conventional autonomous floor cleaning systems is the need for a place to store waste as they clean an environment. Some conventional autonomous floor cleaning systems use a waste bag to collect and store the waste that the cleaning system picks up. However, conventional waste bags are limited to solid waste in their storage capabilities and may become saturated upon storage of liquid waste, resulting in weak points in the waste bag prone to tearing, filter performance issues, and leaks. Other waste storage solutions that handle both liquid and solid waste include waste containers, but liquid waste may adhere to the inside of the waste container, requiring extensive cleaning on the part of a user to empty the waste container.
Yet another issue with conventional autonomous floor cleaning systems is navigation. To navigate the environment, the conventional autonomous floor cleaning system may need a map of the environment. Though an autonomous floor cleaning system could attempt to create a map of an environment as it moves around, environments constantly change, and where objects will be located in the environment on a day-to-day basis is unpredictable. This makes navigating the environment to clean up messes difficult for an autonomous floor cleaning system.
Further, interacting with the autonomous floor cleaning system to give commands for cleaning relative to the environment can be difficult. A user may inherently know where the objects or messes are within the environment, but the autonomous floor cleaning system may not connect image data of the environment to the specific wording a user uses in a command to direct the autonomous cleaning system. For example, if a user enters, via a user interface, a command for the autonomous floor cleaning system to “clean kitchen,” without the user being able to confirm via a rendering of the environment that the autonomous floor cleaning system knows where the kitchen is, the autonomous floor cleaning system may clean the wrong part of the environment or otherwise misunderstand the command. Thus, a user interface depicting an accurate rendering of the environment is necessary for instructing the autonomous floor cleaning system.
SUMMARY
An autonomous cleaning robot described herein uses an integrated, vertically-actuated cleaning head to increase cleaning efficacy and improve mobility. For ease of discussion and by way of one example, the autonomous cleaning robot will be described as an autonomous vacuum. However, the principles described herein may be applied to other autonomous cleaning robot configurations, including an autonomous sweeper, an autonomous mop, an autonomous duster, or an autonomous cleaning robot that may combine two or more cleaning functions (e.g., vacuum, sweep, dust, mop, move objects, etc.).
The autonomous vacuum may optimize the height of the cleaning head for various surface types. Moving the cleaning head automatically allows the user to remain hands-off in the cleaning processes of the autonomous vacuum while also increasing the autonomous vacuum's mobility within the environment. By adjusting the height of the cleaning head based on visual data of the environment, the autonomous vacuum may prevent itself from becoming caught on obstacles as it cleans an area of an environment. Another advantage of self-adjusting the height of the cleaning head, such as for the size of debris in the environment (e.g., when vacuuming a popcorn kernel, the autonomous vacuum moves the cleaning head vertically to at least the size of that popcorn kernel), is that the autonomous vacuum may maintain a high cleaning efficiency while still being able to vacuum debris of various sizes. The cleaning head may include one or more brush rollers and one or more motors for controlling the brush rollers. Aside from the integrated cleaning head, the autonomous vacuum may include a solvent pump, vacuum pump, actuator, and waste bag. To account for liquid waste, the waste bag may include an absorbent for coagulating the liquid waste for ease of cleaning waste out of the autonomous vacuum.
Further, the cleaning head may include a mop roller comprising a mop pad. The mop pad may have surface characteristics such as an abrasive material to enable a scrubbing type action. The abrasive material may be sufficiently abrasive to remove, for example, a stained or sticky area, but not so abrasive as to damage (e.g., scratch) a hard flooring surface. In addition, the mop pad may be structured from an absorbent material, for example, a microfiber cloth. The autonomous vacuum may use the mop roller to mop and scrub stains by alternating directional velocities of the mop roller and the autonomous vacuum. The autonomous vacuum may dock at a docking station for charging and drying the mop pad using a heating element incorporated into the docking station.
Along with the physical components of the autonomous vacuum, the autonomous vacuum employs audiovisual sensors in a sensor system to detect user interactivity and execute tasks. The sensor system may include some or all of a camera system, microphone, inertial measurement unit, infrared camera, lidar sensor, glass detection sensor, storage medium, and processor. The sensor system collects visual, audio, and inertial data (or, collectively, sensor data). The autonomous vacuum may use the sensor system to collect and interpret user speech inputs, detect and map a spatial layout of an environment, detect messes of liquid and solid waste, determine surface types, and more. The data gathered by the sensor system may inform the autonomous vacuum's planning and execution of complex objectives, such as cleaning tasks and charging. Further, the data may be used to generate a virtual rendering of the physical environment around the autonomous vacuum, which may be displayed in user interfaces on a client device. A user may interact with the user interfaces and/or give audio-visual commands to transmit cleaning instructions to the autonomous vacuum based on objects in the physical environment.
In one example embodiment, an autonomous vacuum creates a two-dimensional (2D) or three-dimensional (3D) map of a physical environment as it moves around the floor of the environment and collects sensor data corresponding to that environment. For example, the autonomous vacuum may segment out three-dimensional versions of objects in the environment and map them to different levels within the map based on the observed amount of movement of the objects. The levels of the map include a long-term level, intermediate level, and immediate level. The long-term level contains mappings of static objects in the environment, which are objects that stay in place long-term, such as a closet or a table, and the intermediate level contains mappings of dynamic objects in the environment. The immediate level contains mappings of objects within a certain vicinity of the autonomous vacuum, such as the field of view of the cameras integrated into the autonomous vacuum. The autonomous vacuum uses the long-term level to localize itself as it moves around the environment and the immediate level to navigate around objects in the environment. As the autonomous vacuum collects visual data, the autonomous vacuum compares the visual data to the map to detect messes in the environment and create cleaning tasks to address the messes. The autonomous vacuum may additionally or alternatively use a neural network to detect dirt within the environment.
The features and advantages described in the specification are not all inclusive and, in particular, many additional features and advantages will be apparent to one of ordinary skill in the art in view of the drawings, specification, and claims. Moreover, it should be noted that the language used in the specification has been principally selected for readability and instructional purposes and may not have been selected to delineate or circumscribe the inventive subject matter.
BRIEF DESCRIPTION OF DRAWINGS
Figure (“FIG.”) 1 is a block diagram of an autonomous vacuum, according to one example embodiment.
FIG. 2 illustrates the autonomous vacuum from various perspective views, according to one example embodiment.
FIGS. 3A-3G illustrate various spatial arrangements of components of the autonomous vacuum, according to one example embodiment.
FIG. 4 is a block diagram of a sensor system of the autonomous vacuum, according to one example embodiment.
FIG. 5 is a block diagram of a storage medium of the sensor system, according to one example embodiment.
FIG. 6 illustrates a block diagram of a camera system, according to one example embodiment.
FIG. 7 illustrates a positioning of cameras on the autonomous vacuum, according to one example embodiment.
FIG. 8 illustrates levels of a map used by the autonomous vacuum, according to one example embodiment.
FIG. 9 illustrates an immediate mapping by the autonomous vacuum, according to one example embodiment.
FIGS. 10A-10C illustrate cleaning head positions, according to one example embodiment.
FIG. 11A illustrates a waste bag with a liquid-solid filter system, according to one example embodiment.
FIG. 11B illustrates a waste bag with porous and nonporous portions, according to one example embodiment.
FIG. 11C illustrates a waste bag interlaced with absorbent strings, according to one example embodiment.
FIG. 11D illustrates a waste bag with an absorbent dispensing system, according to one example embodiment.
FIG. 11E illustrates an enclosed sachet in a waste bag enclosure, according to one embodiment.
FIG. 11F illustrates a conical insert for use with a waste bag, according to one embodiment.
FIG. 11G illustrates a conical insert in a waste bag enclosure, according to one embodiment.
FIG. 12 is a flowchart illustrating a charging process for the autonomous vacuum, according to one example embodiment.
FIG. 13 is a flowchart illustrating a cleaning process for the autonomous vacuum, according to one example embodiment.
FIG. 14 illustrates a behavior tree used to determine the behavior of the autonomous vacuum 100, according to one example embodiment.
FIG. 15 is a flowchart illustrating an example process for beginning a cleaning task based on user speech input, according to one example embodiment.
FIG. 16A illustrates a user interface depicting a virtual rendering of the autonomous vacuum scouting an environment, according to one example embodiment.
FIG. 16B illustrates a user interface depicting a 3D rendering of an environment, according to one example embodiment.
FIG. 16C illustrates a user interface depicting an obstacle icon in a rendering of an environment, according to one example embodiment.
FIG. 17A illustrates a user interface depicting locations of detected messes and obstacles in an environment, according to one example embodiment.
FIG. 17B illustrates a user interface depicting an obstacle image, according to one example embodiment.
FIG. 18A illustrates a user interface depicting a route of an autonomous vacuum in an environment, according to one example embodiment.
FIG. 18B illustrates a user interface depicting detected clean areas in an environment, according to one example embodiment.
FIG. 19A illustrates an interaction with a user interface with a direct button, according to one example embodiment.
FIG. 19B illustrates selecting a location in a rendering of an environment via a user interface according to one example embodiment.
FIG. 19C illustrates a waste bin icon in a user interface, according to one example embodiment.
FIG. 19D illustrates a selected area in a user interface, according to one example embodiment.
FIG. 19E illustrates a selected area in a user interface including a rendering with room overlays, according to one example embodiment.
FIG. 20A illustrates a user interface depicting instructions for giving an autonomous vacuum voice commands, according to one example embodiment.
FIG. 20B illustrates a user interface depicting instructions for setting a waste bin icon in a rendering of an environment, according to one example embodiment.
FIG. 20C illustrates a user interface depicting instructions for adjusting a cleaning schedule of an autonomous vacuum, according to one example embodiment.
FIG. 21 is a flowchart illustrating an example process for rendering a user interface for an autonomous vacuum traversing a physical environment, according to one example embodiment.
FIG. 22 is a mop roller, according to one example embodiment.
FIG. 23A illustrates a mop roller being wrung, according to one example embodiment.
FIG. 23B shows the cleaning head of the autonomous vacuum including the mop roller, according to one embodiment.
FIG. 23C shows a selection flap in an upward position, according to one embodiment.
FIG. 23D shows a selection flap in a downward position, according to one embodiment.
FIG. 23E shows a mop cover not covering a mop roller, according to one embodiment.
FIG. 23F shows a mop cover covering a mop roller, according to one embodiment.
FIG. 24A illustrates a mop roller rotating counterclockwise as the autonomous vacuum moves forward, according to one embodiment.
FIG. 24B illustrates a mop roller rotating counterclockwise as the autonomous vacuum moves backward, according to one embodiment.
FIG. 25 illustrates a mop roller over a docking station, according to one example embodiment.
FIG. 26 illustrates a flat wringer for a mop roller, according to one example embodiment.
FIG. 27 is a high-level block diagram illustrating physical components of a computer used as part or all of the client device from FIG. 4, according to one embodiment.
The figures depict embodiments of the disclosed configurations for purposes of illustration only. One skilled in the art will readily recognize from the following description that alternative embodiments of the structures and methods illustrated herein may be employed without departing from the principles of the disclosed configurations described herein.
DETAILED DESCRIPTION
The figures (FIGS.) and the following description relate to preferred embodiments by way of illustration only. It should be noted that from the following discussion, alternative embodiments of the structures and methods disclosed herein will be readily recognized as viable alternatives that may be employed without departing from the principles of what is claimed.
Reference will now be made in detail to several embodiments, examples of which are illustrated in the accompanying figures. It is noted that wherever practicable similar or like reference numbers may be used in the figures and may indicate similar or like functionality. The figures depict embodiments of the disclosed system (or method) for purposes of illustration only. One skilled in the art will readily recognize from the following description that alternative embodiments of the structures and methods illustrated herein may be employed without departing from the principles described herein.
Overview
Autonomous cleaning systems may run into a host of problems while attempting to clean messes within an environment. In particular, some stains and dirt particles, which may stick to the floor when below a certain size, cannot be cleaned effectively with dry vacuums or other non-contact cleaning methods. Other messes may involve larger components, such as chunks of food or small items, which can get in the way of an autonomous cleaning system that is set up to clean messes lower in height.
The following detailed description describes an autonomous cleaning robot. As previously noted, for ease of discussion and by way of one example, the autonomous cleaning robot will be described as an autonomous vacuum. The principles described herein are not intended to be limited to an autonomous vacuum, and it is understood that the principles described may be applied to other autonomous cleaning robot configurations, including an autonomous sweeper, an autonomous mop, an autonomous duster, or an autonomous cleaning robot that may combine two or more cleaning functions (e.g., vacuum and sweep or dust and mop).
In one example embodiment, an autonomous vacuum may include a self-actuated head that can account for some of these common cleaning issues. The autonomous vacuum roams around an environment (such as a house) to map the environment and detect messes within the environment. The autonomous vacuum includes an automated cleaning head that adjusts its height for cleaning a mess based on the mess type, surface type, and/or size of the mess. The autonomous vacuum may include a waste bag for collecting both liquid and solid waste, a camera sensor system for capturing visual-inertial data, and a variety of sensors in a sensor system for collecting other visual, audio, lidar, IR, time of flight, and inertial data (i.e., sensor data) about the environment. The autonomous vacuum may use this sensor data to map the environment, detect messes, compile and execute a task list of cleaning tasks, receive user instructions, and navigate the environment.
System Architecture
FIG. 1 is a block diagram of an autonomous vacuum 100, according to one example embodiment. The autonomous vacuum 100 in this example may include a cleaning head 105, waste bag 110, vacuum pump 115, solvent pump 120, actuator assembly 125, sensor system 175, and battery 180. The components of the autonomous vacuum 100 allow the autonomous vacuum 100 to intelligently clean as it traverses an area within an environment. In some embodiments, the architecture of the autonomous vacuum 100 includes more components for autonomous cleaning purposes. Some examples include a mop roller, a solvent spray system, a waste container, and multiple solvent containers for different types of cleaning solvents. It is noted that the autonomous vacuum 100 may include cleaning functions such as, for example, vacuuming, sweeping, dusting, mopping, and/or deep cleaning.
The autonomous vacuum 100 uses the cleaning head 105 to clean up messes and remove waste from an environment. In some embodiments, the cleaning head 105 may be referred to as a roller housing, and the cleaning head 105 has a cleaning cavity 130 that contains a brush roller 135 that is controlled by a brush motor 140. In some embodiments, the autonomous vacuum 100 may include two or more brush rollers 135 controlled by two or more brush motors 140. The brush roller 135 may be used to handle large particle messes, such as food spills or small plastic items like bottle caps. In some embodiments, the brush roller is a cylindrically-shaped component that rotates as it collects and cleans messes. The brush roller may be composed of multiple materials for collecting a variety of waste, including synthetic bristle material, microfiber, wool, or felt. For further cleaning capabilities, the cleaning head 105 also has a side brush roller 145 that is controlled by a side brush motor 150. The side brush roller 145 may be shaped like a disk or a radial arrangement of whiskers that can push dirt into the path of the brush roller 135. In some embodiments, the side brush roller 145 is composed of different materials than the brush roller 135 to handle different types of waste and mess. Further, in embodiments wherein the autonomous vacuum 100 also includes a mop roller, the brush roller 135, side brush roller 145, and mop roller may each be composed of different materials and operate at different times and/or speeds, depending on a cleaning task being executed by the autonomous vacuum 100. The brush roller 135, side brush roller 145, mop roller, and any other rollers on the autonomous vacuum 100 may collectively be referred to as cleaning rollers, in some embodiments.
The cleaning head 105 ingests waste 155 as the autonomous vacuum 100 cleans using the brush roller 135 and the side brush roller 145 and sends the waste 155 to the waste bag 110. The waste bag 110 collects and filters waste 155 from the air to send filtered air 165 out of the autonomous vacuum 100 through the vacuum pump 115 as air exhaust 170. Various embodiments of the waste bag 110 are further described in relation to FIGS. 11A-11D. The autonomous vacuum 100 may also use solvent 160 combined with pressure from the cleaning head 105 to clean a variety of surface types. The autonomous vacuum may dispense solvent 160 from the solvent pump 120 onto an area to remove dirt, such as dust, stains, and solid waste, and/or clean up liquid waste. The autonomous vacuum 100 may also dispense solvent 160 into a separate solvent tray, which may be part of a charging station (e.g., docking station 185), described below, to clean the brush roller 135 and the side brush roller 145.
The actuator assembly 125 includes one or more actuators (henceforth referred to as an actuator for simplicity) and one or more controllers and/or processors (henceforth referred to as a controller for simplicity) that operate in conjunction with the sensor system 175 to control movement of the cleaning head 105. In particular, the sensor system 175 collects and uses sensor data to determine an optimal height for the cleaning head 105 given a surface type, surface height, and mess type. Surface types are the material the floor of the environment is made of and may include carpet, wood, and tile. Mess types are the form of mess in the environment, such as smudges, stains, and spills. A mess type also includes the phase the mess embodies, such as liquid, solid, semi-solid, or a combination of liquid and solid. Some examples of waste include bits of paper, popcorn, leaves, and particulate dust. A mess typically has a size/form factor that is relatively small compared to obstacles, which are larger. For example, spilled dry cereal may be a mess, but the bowl it came in would be an obstacle. Spilled liquid may be a mess, but the glass that held it may be an obstacle. However, if the glass broke into smaller pieces, the glass would then be a mess rather than an obstacle. Further, if the sensor system 175 determines that the autonomous vacuum 100 cannot properly clean up the glass, the glass may again be considered an obstacle, and the sensor system 175 may send a notification to a user indicating that there is a mess that needs user cleaning. The mess may be visually defined in some embodiments, e.g., by visual characteristics. In other embodiments it may be defined by particle size or makeup. When defined by size, in some embodiments, a mess and an obstacle may coincide. For example, a small LEGO brick piece may be the size of both a mess and an obstacle. The sensor system 175 is further described in relation to FIG. 4.
The actuator assembly 125 automatically adjusts the height of the cleaning head 105 given the surface type, surface height, and mess type. In particular, the actuator controls vertical movement and rotational tilt of the cleaning head 105. The actuator may vertically actuate the cleaning head 105 based on instructions from the sensor system. For example, the actuator may adjust the cleaning head 105 to a higher height if the sensor system 175 detects thick carpet in the environment than if it detects thin carpet. Further, the actuator may adjust the cleaning head 105 to a higher height for a solid waste spill than for a liquid waste spill. In some embodiments, the actuator may set the height of the cleaning head 105 to push larger messes out of the path of the autonomous vacuum 100. For example, if the autonomous vacuum 100 is blocked by a pile of books, the sensor system 175 may detect the obstruction (i.e., the pile of books), the actuator may move the cleaning head 105 to the height of the lowest book, and the autonomous vacuum 100 may move the books out of the way to continue cleaning an area. Furthermore, the autonomous vacuum 100 may detect the height of obstructions and/or obstacles, and if an obstruction or obstacle is over a threshold size, the autonomous vacuum 100 may use the collected visual data to determine whether to climb or circumvent the obstruction or obstacle by adjusting the cleaning head height using the actuator assembly 125.
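By way of illustration only, the following is a minimal sketch of how the height selection described above could be implemented in software; the surface labels, base heights, and climb threshold are hypothetical assumptions and are not taken from this disclosure.

```python
# Hypothetical sketch of cleaning-head height selection.
# Surface labels, heights, and thresholds are illustrative assumptions only.

SURFACE_BASE_HEIGHT_MM = {
    "hard_floor": 2.0,
    "thin_carpet": 6.0,
    "thick_carpet": 12.0,
}

CLIMB_LIMIT_MM = 20.0  # assumed largest obstruction the head should ride over


def select_head_height(surface_type: str,
                       mess_height_mm: float,
                       obstruction_height_mm: float = 0.0) -> float:
    """Return a target cleaning-head height in millimeters."""
    base = SURFACE_BASE_HEIGHT_MM.get(surface_type, 2.0)

    # Raise the head at least to the size of the largest debris
    # (e.g., a popcorn kernel) so it is ingested rather than pushed.
    target = max(base, mess_height_mm)

    # Lift the head over small obstructions; larger ones are left to the
    # navigation logic, which may choose to circumvent them instead.
    if 0.0 < obstruction_height_mm <= CLIMB_LIMIT_MM:
        target = max(target, obstruction_height_mm)

    return target


# Example: a popcorn kernel (~8 mm) on thin carpet.
print(select_head_height("thin_carpet", mess_height_mm=8.0))  # -> 8.0
```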
The controller of the actuator assembly 125 may control movement of the autonomous vacuum 100. In particular, the controller connects to one or more motors connected to one or more wheels that may be used to move the autonomous vacuum 100 based on sensor data captured by the sensor system 175 (e.g., indicating a location of a mess to travel to). The controller may cause the motors to rotate the wheels forward/backward or turn to move the autonomous vacuum 100 in the environment. The controller may additionally control dispersion of solvent via the solvent pump 120, turning on/off the vacuum pump 115, instructing the sensor system 175 to capture data, and the like based on the sensor data.
The controller of the actuator assembly 125 may also control rotation of the cleaning rollers. The controller also connects to one or more motors (e.g., the brush motor(s) 140, side brush motor 150, and one or more mop motors) positioned at the ends of the cleaning rollers. The controller can toggle the cleaning rollers between rotating forward, rotating backward, or not rotating using the motors. In some embodiments, the cleaning rollers may be connected to an enclosure of the cleaning head 105 via rotation assemblies each comprising one or more of pins or gear assemblies that connect to the motors to control rotation of the cleaning rollers. The controller may rotate the cleaning rollers based on a direction needed to clean a mess or move a component of the autonomous vacuum 100. In some embodiments, the sensor system 175 determines an amount of pressure needed to clean a mess (e.g., more pressure for a stain than for a spill), and the controller may alter the rotation of the cleaning rollers to match the determined pressure. The controller may, in some instances, be coupled to a load cell at each cleaning roller used to detect pressure being applied by the cleaning roller. In another instance, the sensor system 175 may be able to determine an amount of current required to spin each cleaning roller at a set number of rotations per minute (RPM), which may be used to determine a pressure being exerted by the cleaning roller. The sensor system may also determine whether the autonomous vacuum 100 is able to meet an expected movement (e.g., if a cleaning roller is jammed) and adjust the rotation via the controller if not. Thus, the sensor system 175 may optimize a load being applied by each cleaning roller in a feedback control loop to improve cleaning efficacy and mobility in the environment.
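As one non-limiting illustration of such a feedback control loop, the sketch below estimates roller pressure from motor current at a set RPM and nudges the actuator toward a target pressure; the current-to-pressure model, gains, and hardware interfaces (read_current, read_rpm, move_actuator) are hypothetical assumptions rather than the actual control scheme.

```python
# Hypothetical roller-pressure feedback sketch. Constants, gains, and the
# hardware callbacks are illustrative assumptions.

def estimate_pressure(current_amps: float, rpm: float,
                      k_pressure: float = 40.0, k_friction: float = 0.002) -> float:
    """Estimate roller contact pressure from motor current at a set RPM."""
    # Current beyond what free spinning at this RPM requires is attributed
    # to the load pressing the roller against the surface.
    free_spin_current = k_friction * rpm
    return max(0.0, current_amps - free_spin_current) * k_pressure


def regulate_pressure(read_current, read_rpm, move_actuator,
                      target_pressure: float, gain: float = 0.5,
                      steps: int = 50) -> None:
    """Proportional loop: lower or raise the head until pressure matches."""
    for _ in range(steps):
        pressure = estimate_pressure(read_current(), read_rpm())
        error = target_pressure - pressure
        if abs(error) < 1.0:          # close enough; stop adjusting
            break
        move_actuator(-gain * error)  # negative step lowers the head to add pressure
```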
The autonomous vacuum 100 is powered with an internal battery 180. The battery 180 stores and supplies electrical power for the autonomous vacuum 100. In some embodiments, the battery 180 consists of multiple smaller batteries that charge specific components of the autonomous vacuum 100. The autonomous vacuum 100 may dock at a docking station 185 to charge the battery 180. The process for charging the battery 180 is further described in relation to FIG. 12. The docking station 185 may be connected to an external power source to provide power to the battery 180. External power sources may include a household power source and one or more solar panels. The docking station 185 also may include processing, memory, and communication computing components that may be used to communicate with the autonomous vacuum 100 and/or a cloud computing infrastructure (e.g., via wired or wireless communication). These computing components may be used for firmware updates and/or communicating maintenance status. The docking station 185 also may include other components, such as a cleaning station for the autonomous vacuum 100. In some embodiments, the cleaning station includes a solvent tray that the autonomous vacuum 100 may spray solvent into and roll the brush roller 135 or the side brush roller 145 in for cleaning. In other embodiments, the autonomous vacuum may eject the waste bag 110 into a container located at the docking station 185 for a user to remove.
FIG. 2 illustrates the autonomous vacuum 100 from various perspective views, according to one example embodiment. In this example embodiment, the autonomous vacuum 100 includes a waste container 200 instead of the waste bag 110. In some embodiments, the waste container 200 may contain the waste bag 110. Both angles of the autonomous vacuum 100 in the figure show the cleaning head 105 and at least one wheel 210, among other components. In this embodiment, the autonomous vacuum 100 has two wheels 210 for movement that rotate via one or more motors controlled by the controller, but in other embodiments, the autonomous vacuum 100 may have more wheels or a different mechanism for movement including forward/backward rotation or side-to-side movement (e.g., for turning the autonomous vacuum 100).
FIGS. 3A-3E illustrate various spatial arrangements of some components of the autonomous vacuum 100, according to one example embodiment. FIG. 3A shows the cleaning head at the front 300A of the autonomous vacuum 100. The cleaning head 105 may include a cylindrical brush roller 135 and a cylindrical side brush roller 145. Above the cleaning head 105 is the solvent pump 120, which dispenses solvent 160 from a solvent container 320 to the cleaning head 105 for cleaning messes. The solvent container 320 is at the back 310A of the autonomous vacuum 100 next to the waste container 200 and the vacuum pump 115, which pulls waste 155 into the waste container 200 as the cleaning head 105 moves over the waste 155.
FIG. 3B illustrates a t-shaped 330 spatial configuration of components of the autonomous vacuum 100. For simplicity, the figure shows a solvent volume 340B and a waste volume 350B. The solvent volume 340B may contain the solvent pump 120 and solvent container 320 of FIG. 3A, and the waste volume 350B may contain the waste container 200 (and/or waste bag 110, in other embodiments) and vacuum pump 115 of FIG. 3A. In this configuration, the cleaning head 105B is at the front 300B of the autonomous vacuum 100 and is wider than the base 360B. The solvent volume 340B is at the back 310B of the autonomous vacuum 100, and the waste volume 350B is in between the cleaning head 105B and the solvent volume 340B. The solvent volume 340B and the waste volume 350B each have the same width as the base 360B.
FIG. 3C illustrates a tower 370 spatial configuration of components of the autonomous vacuum 100. For simplicity, the figure shows a solvent volume 340C and a waste volume 350C. The solvent volume 340C may contain the solvent pump 120 and solvent container 320 of FIG. 3A, and the waste volume 350C may contain the waste container 200 (and/or waste bag 110, in other embodiments) and vacuum pump 115 of FIG. 3A. In this configuration, the cleaning head 105C is at the front 300C of the autonomous vacuum 100 and is the same width as the base 360C. The solvent volume 340C is at the back 310C of the autonomous vacuum 100, and the waste volume 350C is in between the cleaning head 105C and the solvent volume 340C. Both the solvent volume 340C and the waste volume 350C are smaller in width than the base 360C and are taller than the solvent volume 340B and the waste volume 350B of the t-shaped configuration 330 in FIG. 3B.
FIG. 3D illustrates a cover 375A of the autonomous vacuum 100. In particular, the cover is an enclosed structure that covers the solvent volume 340 and waste volume 350. In this configuration, the cleaning head 105D is at the front 300D of the autonomous vacuum 100 and is the same width as the base 360D. The cover is at the back 310D of the autonomous vacuum 100 and includes an opening flap 380 that a user can open or close to access the solvent volume 340 and waste volume 350 (e.g., to add more solvent, remove the waste bag 110, or put in a new waste bag 110). The cover may also house a subset of the sensors of the sensor system 175 and the actuator assembly 125, which may be configured at a front of the cover 375A to connect to the cleaning head 105D.
In some embodiments, such as the spatial configurations of FIGS. 3A-3D, the cleaning head 105 has a height of less than 3 inches (or, e.g., less than 75 millimeters (mm)) at each end of the cleaning head 105. This maximum height allows the autonomous vacuum 100 to maneuver the cleaning head 105 under toe kicks in a kitchen. A toe kick is a recessed area between a cabinet and the floor in the kitchen and traditionally poses a challenge to clean with conventional autonomous vacuums due to their geometries. By keeping the height of the cleaning head 105 below 3 inches (or below 75 mm), the autonomous vacuum 100 can clean under toe kicks without height constraints reducing the amount of waste that the autonomous vacuum 100 can collect (i.e., not limiting the size of the waste volume 350).
In some embodiments, as shown in FIG. 3E, the autonomous vacuum 100 may be configured using four-bar linkages 395 that connect the cleaning head 105 to the cover 375B. In some embodiments, the four-bar linkages may connect the cleaning head 105 directly to the cover 375B (also referred to as the body of the autonomous vacuum 100) or to one or more components housed by the cover 375B. The four-bar linkages are connected to the actuator of the actuator assembly 125 such that the actuator can control movement of the cleaning head with the four-bar linkages. The four-bar linkages 395 allow the cleaning head 105 to maintain an unconstrained vertical degree of freedom and control rotational movement of the cleaning head 105 to reduce slop (e.g., side-to-side rotation from the top of the cleaning head 105, from the front of the cleaning head 105, and from each side of the cleaning head 105) upon movement of the autonomous vacuum. The four-bar linkages 395 also allow the cleaning head 105 to have a constrained rotational (from front 300E to back 310E) degree of freedom. This is maintained by leaving clearance between pins and bearings that hold the four-bar linkages 395 in place between the cleaning head 105 and the cover 375B.
The four-bar linkages 395 allow the autonomous vacuum 100 to keep the cleaning head 105 in consistent contact with the ground 396 by allowing for vertical and rotational variation without allowing the cleaning head 105 to flip over, as shown in FIG. 3E. Thus, if the autonomous vacuum 100 moves over an incline, the cleaning head 105 may adjust to the contour of the ground 396 by staying flat against the ground 396. This may be referred to as passive articulation, which may be applied to keep the autonomous vacuum 100 from becoming stuck on obstacles within the environment. The autonomous vacuum 100 may leverage the use of the four-bar linkages to apply pressure to the brush roller 135 with the actuator to deeply clean carpets or other messes.
The connection using the four-bar linkages also allows the autonomous vacuum 100 to apply pressure to a mop roller 385 to clean various messes. The mop roller 385 may be partially composed of microfiber cloth that retains water (or other liquids) depending on pressure applied to the mop roller 385. In particular, if the mop roller 385 is applied to the ground 396 with high pressure, the mop roller 385 cannot retain as much water as when the mop roller 385 is applied to the ground 396 with low pressure. The mop roller 385 may have higher cleaning efficacy when not retaining water than when retaining water. For example, if the autonomous vacuum 100 moves forward (i.e., towards its front 300E), the mop roller 385 will apply a low pressure and take in more water since it is uncompressed, as shown in FIG. 3F. Further, if the autonomous vacuum 100 moves backward, the mop roller 385 will apply a high pressure due to backward tilt of the cleaning head 105 from the four-bar linkages, resulting in a high cleaning efficacy, as shown in FIG. 3G. The autonomous vacuum 100 may leverage these aspects of the four-bar linkages to clean messes detected by the sensor system 175 with the mop roller 385 (e.g., alternating between moving forward and backward to suck in water and scrub a stain, respectively). The mop roller is further described in relation to FIGS. 22-25.
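A minimal sketch of the alternating forward/backward mopping behavior described above follows; the drive interface, pass distance, and pass limit are hypothetical assumptions, not the actual control logic of the autonomous vacuum 100.

```python
# Hypothetical sketch of alternating mop passes over a stain.
# The drive/mess-detection interfaces and constants are illustrative assumptions.

def scrub_stain(drive, stain_still_present, max_passes: int = 10) -> bool:
    """Alternate forward (absorb) and backward (scrub) passes over a stain."""
    for _ in range(max_passes):
        # Forward pass: the head sits flat, the mop roller is uncompressed
        # and takes in liquid.
        drive.move_forward(distance_m=0.3)
        # Backward pass: the four-bar linkage tilts the head backward,
        # pressing the mop roller down for a higher-pressure scrub.
        drive.move_backward(distance_m=0.3)
        if not stain_still_present():
            return True  # the sensor system no longer detects the stain
    return False  # stain persists after max_passes; may be flagged to the user
```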
Sensor System
FIG. 4 is a block diagram of a sensor system 175 of the autonomous vacuum 100, according to one example embodiment. The sensor system 175 receives, for example, camera (video/visual), microphone (audio), lidar, infrared (IR), and/or inertial data (e.g., environmental surrounding or environment sensor data) about an environment for cleaning and uses the sensor data to map the environment and determine and execute cleaning tasks to handle a variety of messes. The sensor system 175 may communicate with one or more client devices 410 via a network 400 to send sensor data, alert a user to messes, or receive cleaning tasks to add to the task list.
The network 400 may comprise any combination of local area and/or wide area networks, using wired and/or wireless communication systems. In one embodiment, the network 400 uses standard communications technologies and/or protocols. For example, the network 400 includes communication links using technologies such as Ethernet, 802.11 (WiFi), worldwide interoperability for microwave access (WiMAX), 3G, 4G, 5G, code division multiple access (CDMA), digital subscriber line (DSL), Bluetooth, Near Field Communication (NFC), Universal Serial Bus (USB), or any combination of protocols. In some embodiments, all or some of the communication links of the network 400 may be encrypted using any suitable technique or techniques.
The client device 410 is a computing device capable of receiving user input as well as transmitting and/or receiving data via the network 400. Though only two client devices 410 are shown in FIG. 4, in some embodiments, more or fewer client devices 410 may be connected to the autonomous vacuum 100. In one embodiment, a client device 410 is a conventional computer system, such as a desktop or a laptop computer. Alternatively, a client device 410 may be a device having computer functionality, such as a personal digital assistant (PDA), a mobile telephone, a smartphone, a tablet, an Internet of Things (IoT) device, or another suitable device. A client device 410 is configured to communicate via the network 400. In one embodiment, a client device 410 executes an application allowing a user of the client device 410 to interact with the sensor system 175 to view sensor data, receive alerts, set cleaning settings, and add cleaning tasks to a task list for the autonomous vacuum 100 to complete, among other interactions. For example, a client device 410 executes a browser application to enable interactions between the client device 410 and the autonomous vacuum 100 via the network 400. In another embodiment, a client device 410 interacts with the autonomous vacuum 100 through an application running on a native operating system of the client device 410, such as iOS® or ANDROID™.
The sensor system 175 includes a camera system 420, microphone 430, inertial measurement unit (IMU) 440, a glass detection sensor 445, a lidar sensor 450, lights 455, a storage medium 460, and a processor 470. The camera system 420 comprises one or more cameras that capture images and/or video signals as visual data about the environment. In some embodiments, the camera system includes an IMU (separate from the IMU 440 of the sensor system 175) for capturing visual-inertial data in conjunction with the cameras. The visual data captured by the camera system 420 may be used by the storage medium for image processing, as described in relation to FIG. 5. The camera system is further described in relation to FIGS. 6 and 7.
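For illustration, one way to bundle a single tick of the sensor data enumerated above for downstream modules is sketched below; the field names and types are hypothetical assumptions rather than the actual data structures of the sensor system 175.

```python
# Hypothetical container for one tick of sensor data; illustrative only.
from dataclasses import dataclass
from typing import List, Optional

import numpy as np


@dataclass
class SensorSample:
    timestamp_s: float
    images: List[np.ndarray]           # frames from the camera system 420
    audio_chunk: Optional[np.ndarray]  # samples from the microphone 430
    accel: np.ndarray                  # 3-axis accelerometer reading (IMU 440)
    gyro: np.ndarray                   # 3-axis gyroscope reading (IMU 440)
    glass_detected: bool               # output of the glass detection sensor 445
    lidar_points: np.ndarray           # N x 3 point returns from the lidar sensor 450
    lights_on: bool = False            # state of the lights 455
```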
The microphone 430 captures audio data by converting sound into electrical signals that can be stored or processed by other components of the sensor system 175. The audio data may be processed to identify voice commands for controlling functions of the autonomous vacuum 100, as described in relation to FIG. 5. In an embodiment, the sensor system 175 uses more than one microphone 430, such as an array of microphones.
The IMU 440 captures inertial data describing the force, angular rate, and orientation of the autonomous vacuum 100. The IMU 440 may comprise one or more accelerometers, gyroscopes, and/or magnetometers. In some embodiments, the sensor system 175 employs multiple IMUs 440 to capture a range of inertial data that can be combined to determine a more precise measurement of the position of the autonomous vacuum 100 in the environment based on the inertial data.
The glass detection sensor 445 detects glass in the environment. The glass detection sensor 445 may be an infrared sensor and/or an ultrasound sensor. In some embodiments, the glass detection sensor 445 is coupled with the camera system 420 to remove glare from the visual data when glass is detected. For example, the camera system 420 may have integrated polarizing filters that can be applied to the cameras of the camera system 420 to remove glare. This embodiment is further described in relation to FIG. 7. In some embodiments, the glass detection sensor is a combination of an IR sensor and a neural network that determines if an obstacle in the environment is transparent (e.g., glass) or opaque.
The lidar sensor 450 emits pulsed light into the environment and detects reflections of the pulsed light on objects (e.g., obstacles or obstructions) in the environment. Lidar data captured by the lidar sensor 450 may be used to determine a 3D representation of the environment. The lights 455 are one or more illumination sources that may be used by the autonomous vacuum 100 to illuminate an area around the autonomous vacuum 100. In some embodiments, the lights may be white LEDs.
The processor 470 operates in conjunction with the storage medium 460 (e.g., a non-transitory computer-readable storage medium) and the actuator assembly 125 (e.g., by being communicatively coupled to the actuator assembly 125) to carry out various functions attributed to the autonomous vacuum 100 described herein. For example, the storage medium 460 may store one or more modules or applications (described in relation to FIG. 5) embodied as instructions executable by the processor 470. The instructions, when executed by the processor 470, cause the processor 470 to carry out the functions attributed to the various modules or applications described herein or instruct the controller and/or actuator to carry out movements and/or functions. For example, instructions may include when to capture the sensor data, where to move the autonomous vacuum 100, and how to clean up a mess. In one embodiment, the processor 470 may comprise a single processor or a multi-processor system.
FIG. 5 is a block diagram of the storage medium 460 of the sensor system 175, according to one example embodiment. The storage medium 460 includes a mapping module 500, an object module 505, a 3D module 510, a map database 515, a fingerprint database 520, a detection module 530, a task module 540, a task list database 550, a navigation module 560, and a logic module 570. In some embodiments, the storage medium 460 includes other modules that control various functions for the autonomous vacuum 100. Examples include a separate image processing module, a separate command detection module, and an object database.
The mapping module 500 creates and updates a map of an environment as the autonomous vacuum 100 moves around the environment. The map may be a two-dimensional (2D) or a three-dimensional (3D) representation of the environment including objects and other defining features in the environment. For simplicity, the environment may be described in relation to a house in this description, but the autonomous vacuum 100 may be used in other environments in other embodiments. Example environments include offices, retail spaces, and classrooms. For a first mapping of the environment, the mapping module 500 receives visual data from the camera system 420 and uses the visual data to construct a map. In some embodiments, the mapping module 500 also uses inertial data from the IMU 440 and lidar and IR data to construct the map. For example, the mapping module 500 may use the inertial data to determine the position of the autonomous vacuum 100 in the environment, incrementally integrate the position of the autonomous vacuum 100, and construct the map based on the position. However, for simplicity, the data received by the mapping module 500 will be referred to as visual data throughout the description of this figure.
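As an illustration of the incremental position integration mentioned above, a simplified dead-reckoning sketch is shown below; it assumes gravity-compensated accelerations already expressed in the world frame, and a practical system would fuse this with the visual data to limit drift.

```python
# Simplified dead-reckoning sketch; assumptions noted above are illustrative.
import numpy as np


def integrate_position(accels, dt: float):
    """Incrementally integrate 3-axis accelerations into a position track."""
    velocity = np.zeros(3)
    position = np.zeros(3)
    trajectory = []
    for a in accels:
        velocity = velocity + np.asarray(a, dtype=float) * dt
        position = position + velocity * dt
        trajectory.append(position.copy())
    return trajectory
```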
In another embodiment, the mapping module 500 may capture a 360-degree “panorama view” using the camera system 420 while the autonomous vacuum 100 rotates around a center axis. The mapping module 500 applies a neural network to the panorama view to determine a boundary within the environment (e.g., walls), which the mapping module 500 may use for the representation of the environment. In other embodiments, the mapping module 500 may cause the autonomous vacuum 100 to trace the boundary of the environment by moving close to walls or other bounding portions of the environment using the camera system 420. The mapping module 500 uses the boundary for the representation.
In another embodiment, the mapping module 500 may use auto-detected unique key points and descriptions of these key points to create a nearest neighborhood database in the map database 515. Each key point describes a particular feature of the environment near the autonomous vacuum 100, and the descriptions describe aspects of the features, such as color, material, and location. As the autonomous vacuum 100 moves about the environment, the mapping module 500 uses the visual data to extract unique key points and descriptions from the environment. In some embodiments, the mapping module 500 may determine key points using a neural network. The mapping module 500 estimates which key points are visible in the nearest neighborhood database by using the descriptions as matching scores. After the mapping module 500 determines there are a threshold number of key points within visibility, the mapping module 500 uses these key points to determine a current location of the autonomous vacuum 100 by triangulating the locations of the key points with both the image location in the current visual data and the known location (if available) of the key point from the map database 515. In another embodiment, the mapping module 500 uses the key points between a previous frame and a current frame in the visual data to estimate the current location of the autonomous vacuum 100 by using these matches as reference. This is typically done when the autonomous vacuum 100 is seeing a new scene for the first time or when the autonomous vacuum 100 is unable to localize using previously existing key points on the map. Using this embodiment, the mapping module 500 can determine the position of the autonomous vacuum 100 within the environment at any given time. Further, the mapping module 500 may periodically purge duplicate key points and add new descriptions for key points to consolidate the data describing the key points. In some embodiments, this is done while the autonomous vacuum 100 is at the docking station 185.
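The following is a greatly simplified, hypothetical sketch of this kind of key-point matching: matches are selected by nearest descriptor distance, and the location estimate assumes the observed offsets are already expressed in a world-aligned frame (a full implementation would triangulate or solve for pose). The thresholds are illustrative assumptions.

```python
# Hypothetical key-point matching sketch; thresholds and the world-aligned
# offset assumption are illustrative simplifications.
import numpy as np

MATCH_THRESHOLD = 0.7   # assumed maximum descriptor distance for a match
MIN_MATCHES = 10        # assumed minimum matches before trusting the estimate


def localize(observed, stored):
    """observed: list of (descriptor, offset_xy) measured from the robot.
    stored:   list of (descriptor, world_xy) key points from the map database.
    Returns an (x, y) estimate, or None if too few key points match."""
    estimates = []
    for desc_o, offset in observed:
        # Nearest neighbor in descriptor space acts as the matching score.
        dists = [np.linalg.norm(np.asarray(desc_o) - np.asarray(d))
                 for d, _ in stored]
        best = int(np.argmin(dists))
        if dists[best] < MATCH_THRESHOLD:
            world_xy = np.asarray(stored[best][1])
            estimates.append(world_xy - np.asarray(offset))
    if len(estimates) < MIN_MATCHES:
        return None  # fall back to frame-to-frame matching instead
    return np.mean(estimates, axis=0)
```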
The mapping module 500 processes the visual data when the autonomous vacuum 100 is at the docking station 185. The mapping module 500 runs an expansive algorithm to process the visual data to identify the objects and other features of the environment and piece them together into the map. The mapping module stores the map in the map database 515 and may store the map as a 3D satellite view of the environment. The mapping module 500 may update the map in the map database 515 to account for movement of objects in the environment upon receiving more visual data from the autonomous vacuum 100 as it moves around the environment over time. By completing this processing at the docking station, the autonomous vacuum 100 may save processing power relative to mapping and updating the map while moving around the environment. The mapping module 500 may use the map to quickly locate and/or determine the location of the autonomous vacuum 100 within the environment, which is faster than localizing while simultaneously computing the map. This allows the autonomous vacuum 100 to focus its processing power while moving on mess detection, localization, and user interactions while saving visual data for further analysis at the docking station.
The mapping module 500 constructs a layout of the environment as the basis of the map using visual data. The layout may include boundaries, such as walls, that define rooms, and the mapping module 500 layers objects into this layout to construct the map. In some embodiments, the mapping module 500 may use surface normals from 3D estimates of the environment and find dominant planes using one or more algorithms, such as RANSAC, which the mapping module 500 uses to construct the layout. In other embodiments, the mapping module 500 may predict masks corresponding to dominant planes in the environment using a neural network trained to locate the ground plane and surface planes on each side of the autonomous vacuum 100. If such surface planes are not present in the environment, the neural network may output an indication of “no planes.” The neural network may be a state-of-the-art object detection and masking network trained on a dataset of visual data labeled with walls and other dominant planes. The mapping module 500 also uses the visual data to analyze surfaces throughout the environment. The mapping module 500 may insert visual data for each surface into the map to be used by the detection module 530 as it detects messes in the environment, as described further below. For each different surface in the environment, the mapping module 500 determines a surface type of the surface and tags the surface with the surface type in the map. Surface types include various types of carpet, wood, tile, and cement, and, in some embodiments, the mapping module 500 determines a height for each surface type. For example, in a house, the floor of a dining room may be wood, the floor of a living room may be nylon carpet, and the floor of a bedroom may be polyester carpet that is thicker than the nylon carpet. The mapping module may also determine and tag surface types for objects in the room, such as carpets or rugs.
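By way of example, a minimal RANSAC sketch for finding a dominant plane (such as the ground plane) in a 3D point cloud is shown below; the iteration count and inlier tolerance are illustrative assumptions.

```python
# Minimal RANSAC sketch for a dominant plane; constants are illustrative.
import numpy as np


def fit_dominant_plane(points: np.ndarray, iters: int = 200, tol: float = 0.01):
    """points: N x 3 array. Returns (normal, d) for the plane n.x + d = 0
    with the most inliers, or None if no valid plane was found."""
    best_inliers, best_plane = 0, None
    n_points = len(points)
    rng = np.random.default_rng(0)
    for _ in range(iters):
        sample = points[rng.choice(n_points, 3, replace=False)]
        normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(normal)
        if norm < 1e-9:
            continue  # degenerate (collinear) sample
        normal = normal / norm
        d = -normal.dot(sample[0])
        inliers = np.sum(np.abs(points @ normal + d) < tol)
        if inliers > best_inliers:
            best_inliers, best_plane = inliers, (normal, d)
    return best_plane
```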
The mapping module 500 further analyzes the visual data to determine the objects in the environment. Objects may include furniture, rugs, people, pets, and everyday household objects that the autonomous vacuum 100 may encounter on the ground, such as books, toys, and bags. Some objects may be barriers that define a room or obstacles that the autonomous vacuum 100 may need to remove, move, or go around, such as a pile of books. To identify the objects in the environment, the mapping module 500 predicts the plane of the ground in the environment using the visual data and removes the plane from the visual data to segment out an object in 3D. In some embodiments, the mapping module 500 uses an object database to determine what an object is. In other embodiments, the mapping module 500 retrieves visual data from an external server and compares it to the segmented objects to determine what each object is and tag the object with a descriptor. In further embodiments, the mapping module 500 may use the pretrained object module 505, which may be neural network based, to detect and pixel-wise segment objects such as chairs, tables, books, and shoes. For example, the mapping module 500 may tag each of four chairs around a table as “chair” and the table as “table” and may include unique identifiers for each object (i.e., “chair A” and “chair B”). In some embodiments, the mapping module 500 may also associate or tag an object with a barrier or warning. For example, the mapping module 500 may construct a virtual border around the top of a staircase in the map such that the autonomous vacuum 100 does not enter the virtual border, to avoid falling down the stairs. As another example, the mapping module 500 may tag a baby with a warning that the baby is more fragile than other people in the environment.
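A simplified, hypothetical sketch of segmenting objects by removing the ground plane and coarsely grouping the remaining points follows; the height tolerance and voxel size are illustrative assumptions, and a production system would use proper clustering or the neural-network segmentation described above.

```python
# Hypothetical ground-plane removal and coarse grouping; constants are
# illustrative assumptions only.
from collections import defaultdict

import numpy as np


def segment_objects(points: np.ndarray, normal: np.ndarray, d: float,
                    height_tol: float = 0.02, voxel: float = 0.10):
    """Drop points on the ground plane (n.x + d = 0) and bucket the rest
    into 10 cm voxels as rough object segments."""
    heights = points @ normal + d
    above = points[heights > height_tol]   # keep only points off the floor
    clusters = defaultdict(list)
    for p in above:
        clusters[tuple(np.floor(p / voxel).astype(int))].append(p)
    return [np.array(c) for c in clusters.values()]
```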
The map includes three distinct levels for the objects in the environment: a long-term level, an intermediate level, and an immediate level. Each level may layer onto the layout of the environment to create the map of the entire environment. The long-term level contains a mapping of objects in the environment that are static. In some embodiments, an object may be considered static if the autonomous vacuum 100 has not detected that the object moved within the environment for a threshold amount of time (e.g., 10 days or more). In other embodiments, an object is static if the autonomous vacuum 100 never detects that the object moved. For example, in a bedroom, the bed may not move locations within the bedroom, so the bed would be part of the long-term level. The same may apply for a dresser, a nightstand, or an armoire. The long-term level also includes fixed components of the environment, such as walls, stairs, or the like.
The intermediate level contains a mapping of objects in the environment that are dynamic. These objects move regularly within the environment and may be objects that are usually moving, like a pet or child, or objects that move locations on a day-to-day basis, like chairs or bags. The mapping module 500 may assign objects to the intermediate level upon detecting that the objects move more often than a threshold frequency. For example, the mapping module 500 may map chairs in a dining room to the intermediate level because the chairs move daily on average, but map the dining room table to the long-term level because the visual data has not shown that the dining room table has moved in more than 5 days. However, in some embodiments, the mapping module 500 does not use the intermediate level and only constructs the map using the long-term level and the immediate level.
The immediate level contains a mapping of objects within a threshold radius of theautonomous vacuum100. The threshold radius may be set at a predetermined distance (i.e., 5 feet) or may be determined based on the objects theautonomous vacuum100 can discern using thecamera system420 within a certain resolution given the amount of light in the environment. For example, the immediate level may contain objects in a wider vicinity around theautonomous vacuum100 around noon, which is a bright time of day, than in the late evening, which may be darker if no indoor lights are on. In some embodiments, the immediate level includes any objects within a certain vicinity of theautonomous vacuum100.
In other embodiments, the immediate level only includes objects within a certain vicinity that are moving, such as people or animals. For each person within the environment, themapping module500 may determine a fingerprint of the person to store in thefingerprint database520. A fingerprint is a representation of a person and may include both audio and visual information, such as an image of the person's face (i.e., a face print), an outline of the person's body (i.e., a body print), a representation of the clothing the person is wearing, and a voice print describing aspects of the person's voice determined through voice print identification. Themapping module500 may update a person's fingerprint in thefingerprint database520 each time theautonomous vacuum500 encounters the person to include more information describing the person's clothing, facial structure, voice, and any other identifying features. In another embodiment, when themapping module500 detects a person in the environment, themapping module500 creates a temporary fingerprint using the representation of the clothing the person is currently wearing and uses the temporary fingerprint to track and follow a person in case this person interacts with theautonomous vacuum100, for example, by telling theautonomous vacuum100 to “follow me.” Embodiments using temporary fingerprints allow theautonomous vacuum100 to track people in the environment even without visual data of their faces or other defining characteristics of their appearance.
Themapping module500 updates the mapping of objects within these levels as it gathers more visual data about the environment over time. In some embodiments, themapping module500 only updates the long-term level and the intermediate level while theautonomous vacuum100 is at the docking station, but updates immediate level as theautonomous vacuum100 moves around the environment. For objects in the long-term level, themapping module500 may determine a probabilistic error value about the movement of the object indicating the chance that the object moved within the environment and store the probabilistic error value in themap database515 in association with the object. For objects in the long-term map with a probabilistic error value over a threshold value, themapping module500 characterizes the object in the map and an area that the object has been located in the map as ambiguous.
In some embodiments, the (optional)object module505 detects and segments various objects in the environment. Some examples of objects include tables, chairs, shoes, bags, cats, and dogs. In one embodiment, theobject module505 uses a pre-trained neural network to detect and segment objects. The neural network may be trained on a labeled set of data describing an environment and objects in the environment. Theobject module505 also detects humans and any joint points on them, such as knees, hips, ankles, wrists, elbows, shoulders, and head. In one embodiment, theobject module505 determines these joint points via a pre-trained neural network system on a labeled dataset of humans with joint points.
In some embodiments, themapping module500 uses theoptional 3D module510 to create a 3D rendering of the map. The3D module510 uses visual data captured by stereo cameras on theautonomous vacuum100 to create an estimated 3D rendering of a scene in the environment. In one embodiment, the3D module510 uses a neural network with an input of two left and right stereo images and learns to produce estimated 3D renderings of videos using the neural network. This estimated 3D rendering can then be used to find 3D renderings of joint points on humans as computed by theobject module505. In one embodiment, the estimated 3D rendering can then be used to predict the ground plane for themapping module500. To predict the ground plane, the3D module510 uses a known camera position of the stereo cameras (from the hardware and industrial design layout) to determine an expected height ground plane. The3D module510 uses all image points with estimated 3D coordinates at the expected height as the ground plane. In another embodiment, the3D module510 can use the estimated 3D rendering to estimate various other planes in the environment, such as walls. To estimate which image points are on a wall, the3D module510 estimates clusters of image points that are vertical (or any expected angle for other planes) and groups connected image points into a plane.
In some embodiments, themapping module500 passes the 3D rendering through a scene-classification neural network to determine a hierarchical classification of the home. For example, a top layer of the classification decomposes the environment into different room types (e.g., kitchen, living room, storage, bathroom, etc.). A second layer decomposes each room according to objects (e.g., television, sofa, and vase in the living room and bed, dresser, and lamps in the bedroom). Theautonomous vacuum100 may subsequently use the hierarchical model in conjunction with the 3D rendering to understand the environment when presented with tasks in the environment (e.g., “clean by the lamp”). It is noted that the map ultimately may be provided for rendering on a device (e.g., wirelessly or wired connected) with an associated screen, for example, a smartphone, tablet, laptop or desktop computer. Further, the map may be transmitted to a cloud service before being provided for rendering on a device with an associated screen.
Thedetection module530 detects messes within the environment, which are indicated by pixels in real-time visual data that do not match the surface type. As theautonomous vacuum100 moves around the environment, thecamera system420 collects a set of visual data about the environment and sends it to thedetection module530. From the visual data, thedetection module530 determines the surface type for an area of the environment, either by referencing the map or by comparing the collected visual data to stored visual data from a surface database. In some embodiments, thedetection module530 may remove or disregard objects other than the surface in order to focus on the visual data of the ground that may indicate a mess. Thedetection module530 analyzes the surface in the visual data pixel-by-pixel for pixels that do not match the pixels of the surface type of the area. For areas with pixels that do not match the surface type, thedetection module530 segments out the area from the visual data using a binary mask and compares the segmented visual data to the long-term level of the map. In some embodiments, since the lighting when the segmented visual data was taken may be different from the lighting of the visual data in the map, thedetection module530 may normalize the segmented visual data for the lighting. For areas within the segmented visual data where the pixels do not match the map, thedetection module530 flags the area as containing a mess and sends the segmented visual data, along with the location of the area on the map, to thetask module540, which is described below. In some embodiments, thedetection module530 uses a neural network for detecting dust in the segmented visual data.
For each detected mess, thedetection module530 verifies that the surface type for the area of the mess matches the tagged surface type in the map for that area. In some embodiments, if the surface types do not match to within a confidence threshold, thedetection module530 re labels the surface in the map with the newly detected surface type. In other embodiments, thedetection module530 requests that theautonomous vacuum100 collect more visual data to determine the surface type to determine the surface type of the area.
Thedetection module530 may also detect messes and requested cleaning tasks via user interactions from a user in the environment. As theautonomous vacuum100 moves around the environment, thesensor system175 captures ambient audio and visual data using themicrophone430 and thecamera system420 that is sent to thedetection module530. In one embodiment, where themicrophone430 is an array ofmicrophones430, thedetection module430 may process audio data from each of themicrophones430 in conjunction with one another to generate one or more beamformed audio channels, each associated with a direction (or, in some embodiments, range of directions). In some embodiments, thedetection module530 may perform image processing functions on the visual data by zooming, panning, de-warping.
When theautonomous vacuum100 encounters a person in the environment, thedetection module530 may use face detection and face recognition on visual data collected by thecamera system420 to identify the person and update the person's fingerprint in thefingerprint database540. Thedetection module530 may use voice print identification on a user speech input a person (or user) to match the user speech input to a fingerprint and move to that user to receive further instructions. Further, thedetection module530 may parse the user speech input for a hotword that indicates the user is requesting an action and process the user speech input to connect words to meanings and determine a cleaning task. In some embodiments, thedetection module530 also performs gesture recognition on the visual data to determine the cleaning task. For example, a user may ask theautonomous vacuum100 to “clean up that mess” and point to a mess within the environment. Thedetection module530 detects and processes this interaction to determine that a cleaning task has been requested and determines a location of the mess based on the user's gesture. To detect the location of the mess, thedetection module530 obtains visual data describing the user's hands and eyes from theobject module505 and obtains an estimated 3D rendering of the user's hands and eyes from3D module510 to create a virtual 3D ray. Thedetection module530 intersects the virtual 3D ray with an estimate of the ground plane to determine the location the user is pointing to. Thedetection module540 sends the cleaning task (and location of the mess) to thetask module540 to determine a mess type, surface type, actions to remove the mess, and cleaning settings, described below. The process of analyzing a user speech input is further described in relation toFIG. 15.
In some embodiments, thedetection module530 may apply a neural network to visual data of the environment to detect dirt in the environment. In particular, thedetection module530 may receive real-time visual data captured by the sensor system175 (e.g., camera system and/or infrared system) and input the real-time visual data to the neural network. The neural network outputs a likelihood that the real-time visual data includes dirt, and may further output likelihoods that the real-time visual data includes dust and/or another mess type (e.g., a pile or spill) in some instances. For each of the outputs from the neural network, if the likelihood for any mess type is above a threshold, thedetection module530 flags the area as containing a mess (i.e., an area to be cleaned).
Thedetection module530 may train the neural network on visual data of floors. In some embodiments, thedetection module530 may receive a first set of visual data from thesensor system175 of an area in front of theautonomous vacuum100 and a second set of visual data of the same area from behind theautonomous vacuum100 after theautonomous vacuum100 has cleaned the area. Theautonomous vacuum100 can capture the second set of visual data using cameras on the back of the autonomous vacuum or by turning around to capture the visual data using cameras on the front of the autonomous vacuum. Thedetection module530 may label the first and second sets of visual data as “dirty” and “clean,” respectively, and train the neural network on the labeled sets of visual data. Thedetection module530 may repeat this process for a variety of areas in the environment to train the neural network for the particular environment or for a variety of surface and mess types in the environment.
In another embodiment, thedetection module530 may receive visual data of the environment as theautonomous vacuum100 clean the environment. Thedetection module530 may pair the visual data to locations of theautonomous vacuum100 determined by themapping module500 as the autonomous vacuum moved to clean. Thedetection module530 estimates correspondence between the visual data to pair visual data of the same areas together based on the locations. Thedetection module530 may compare the paired images in the RGB color space (or any suitable color or high-dimensional space that may be used to compute distance) to determine where the areas were clean or dirty and label the visual data as “clean” or “dirty” based on the comparison. Alternatively, thedetection module530 may compare the visual data to the map of the environment or to stored visual data for the surface type shown in the visual data. Thedetection module530 may analyze the surface in the visual data pixel-by-pixel for pixels that do not match the pixels of the surface type of the area and label pixels that do not match as “dirty” and pixels that do match as “clean.” Thedetection module530 trains the neural network on the labeled visual data to detect dirt in the environment.
In another embodiment, thedetection module530 may receive an estimate of the ground plane for a current location in the environment from the3D module510. Thedetection module530 predicts a texture of the floor of the environment based on the ground plane as theautonomous vacuum100 moves around and labels visual data captured by theautonomous vacuum100 with the floor texture predicted while theautonomous vacuum100 moves around the environment. Thedetection module530 trains the neural network on the labeled visual data to predict if a currently predicted floor texture maps to a previously predicted floor texture. Thedetection module530 may then apply the neural network to real-time visual data and a currently predicted floor texture, and if the currently predicted floor texture does not map a previously predicted floor texture, thedetection module530 may determine that the area being traversed is dirty.
Thetask module540 determines cleaning tasks for theautonomous vacuum100 based on user interactions and detected messes in the environment. Thetask module540 receives segmented visual data from thedetection module530 the location of the mess from thedetection module530. Thetask module540 analyzes the segmented visual data to determine a mess type of the mess. Mess types describe the type and form of waste that comprises the mess and are used to determine what cleaning task theautonomous vacuum100 should do to remove the mess. Examples of mess types include a stain, dust, a liquid spill, a solid spill, and a smudge and may be a result of liquid waste, solid waste, or a combination of liquid and solid waste.
Thetask module540 retrieves the surface type for the location of the mess from the map database and matches the mess type and surface type to a cleaning task architecture that describes the actions for theautonomous vacuum100 to take to remove the mess. In some embodiments, thetask module540 uses a previous cleaning task from the task database for the given mess type and surface type. In other embodiments, thetask module540 matches the mess type and surfaces to actions theautonomous vacuum100 can take to remove the mess and creates a corresponding cleaning task architecture of an ordered list of actions. In some embodiments, thetask module540 stores a set of constraints that it uses to determine cleaning settings for the cleaning task. The set of constraints describe what cleaning settings cannot be used for each mess type and surface type and how much force to apply to clean the mess based on the surface type. Cleaning settings include height of thecleaning head105 and rotation speed of thebrush roller135 and the use of solvent160. For example, the set of constraints may indicate that the solvent160 can be used on wood and tile, but not on carpet, and the height of thecleaning head105 must be at no more than 3 centimeters off the ground for cleaning stains in the carpet but at least 5 centimeters and no more than 7 centimeters off the ground to clean solid waste spills on the carpet.
Based on the determined actions and the cleaning settings for the mess, thetask module540 adds a cleaning task for each mess totask list database550. Thetask list database550 stores the cleaning tasks in a task list. Thetask list database550 may associate each cleaning task with a mess type, a location in the environment, a surface type, a cleaning task architecture, and cleaning settings. For example, the first task on the task list in thetask list database550 may be a milk spill on tile in a kitchen, which theautonomous vacuum100 may clean using solvent160 and thebrush roller135. The cleaning tasks may be associated with a priority ranking that indicates how to order the cleaning tasks in the task list. In some embodiments, the priority ranking is set by a user via a client device410 or is automatically determined by theautonomous vacuum100 based on the size of the mess, the mess type, or the location of the mess. For example, theautonomous vacuum100 may prioritize cleaning tasks in heavily trafficked areas of the environment (i.e., the living room of a house over the laundry room) or the user may rank messes with liquid waste with a higher priority ranking than messes with solid waste.
In some embodiments, thetask module540 adds cleaning tasks to the task list based on cleaning settings entered by the user. The cleaning settings may indicate which cleaning tasks the user prefers theautonomous vacuum100 to complete on a regular basis or after a threshold amount of time has passed without a mess resulting in that cleaning task occurring. For example, thetask module540 may add a carpet deep cleaning task to the task list once a month and a hard wood polishing task to the task list if the hard wood has not been polished in more than some predetermined time period, e.g., 60 days.
Thetask module540 may add additional tasks to the task list if theautonomous vacuum100 completes all cleaning tasks on the task list. Additional tasks include charging at thedocking station185, processing visual data for the map via themapping module500 at thedocking station185, which may be done in conjunction with charging, and moving around the environment to gather more sensor data for detecting messes and mapping. Thetask module540 may decide which additional task to add to the task list based on the amount of charge thebattery180 has or user preferences entered via a client device410.
Thetask module540 also determines when theautonomous vacuum100 needs to be charged. If thetask module540 receives an indication from thebattery180 that the battery is low on power, the task module adds a charging task to the task list in thetask list database550. The charging task indicates that theautonomous vacuum100 should navigate back to thedocking station185 and dock for charging. In some embodiments, thetask module540 may associate the charging task with a high priority ranking and move the charging task to the top of the task list. In other embodiments, thetask module540 may calculate how much power is required to complete each of the other cleaning tasks on the task list and allow theautonomous vacuum100 to complete some of the cleaning tasks before returning to thedocking station185 to charge. The charging process is further described in relation toFIG. 12.
Thenavigation module560 determines the location of theautonomous vacuum100 in the environment. Using real-time sensor data from thesensor system175, thenavigation module560 matches the visual data of the sensor data to the long-term level of the map to localize theautonomous vacuum100. In some embodiments, thenavigation module560 uses a computer vision algorithm to match the visual data to the long-term level. Thenavigation module560 sends information describing the location of theautonomous vacuum100 to other modules within thestorage medium460. For example, thedetection module530 may use the location of theautonomous vacuum100 to determine the location of a detected mess.
Thenavigation module560 uses the immediate level of the map to determine how to navigate the environment to execute cleaning tasks on the task list. The immediate level describes the locations of objects within a certain vicinity of theautonomous vacuum100, such as within the field of view of each camera in thecamera system420. These objects may pose as obstacles for theautonomous vacuum100, which may move around the objects or move the objects out of its way. The navigation module interlays the immediate level of the map with the long-term level to determine viable directions of movement for theautonomous vacuum100 based on where objects are not located. Thenavigation module560 receives the first cleaning task in thetask list database550, which includes a location of the mess associated with the cleaning task. Based on the location determined from localization and the objects in the immediate level, thenavigation module100 determines a path to the location of the mess. In some embodiments, thenavigation module560 updates the path if objects in the environment move while theautonomous vacuum100 is in transit to the mess. Further, thenavigation module560 may set the path to avoid fragile objects in the immediate level (e.g., a flower vase or expensive rug).
Thelogic module570 determines instructions for theprocessor470 to control theautonomous vacuum100 based on the map in themap database515, thetask list database550, and the path and location of theautonomous vacuum100 determined by thenavigation module560. The instructions describe what each physical feature of theautonomous vacuum100 should do to navigate an environment and execute tasks on the task list. Some of the physical features of theautonomous vacuum100 include thebrush motor140, theside brush motor150, thesolvent pump175, theactuator assembly125, thevacuum pump115, and the wheels210. Thelogic module570 also controls how and when thesensor system175 collects sensor data in the environment. For example,logic module570 may receive the task list from thetask list database550 and create instructions on how to navigate to handle the first cleaning task on the task list based on the path determined by the navigation module, such as rotating the wheels210 or turning theautonomous vacuum100. The logic module may update the instructions if thenavigation module560 updates the path as objects in the environment moved. Once theautonomous vacuum100 has reached the mess associated with the cleaning task, thelogic module570 may generate instructions for executing the cleaning task. These instructions may dictate for theactuator assembly125 to adjust the cleaning head height, thevacuum pump115 to turn on, thebrush roller135 and/orside brush roller145 to rotate at certain speeds, and thesolvent pump120 to dispense an amount of solvent160, among other actions for cleaning. Thelogic module570 may remove the cleaning task from the task list once the cleaning task has been completed and generate new instructions for the next cleaning task on the task list.
Further, thelogic module570 generates instructions for theprocessor470 to execute the flowcharts and behavior tree ofFIGS. 12-15. The instructions may include internal instructions, such as when to tick a clock node or gather sensor data, or external instructions, such as controlling theautonomous vacuum100 to execute a cleaning task to remove a mess. Thelogic module570 may retrieve data describing the map of the environment stored in themap database515,fingerprint database520, andtask list database550, or from other modules in thestorage medium460, to determine these instructions. Thelogic module570 may also receive alerts/indications from other components of theautonomous vacuum100 or from an external client device410 that it uses to generate instructions for theprocessor470.
It is appreciated that althoughFIG. 5 illustrates a number of modules according to one embodiment, the precise modules and resulting processes may vary in different embodiments. For example, in some embodiments, thestorage medium460 may include a cleaning module that controls theautonomous vacuum100 to complete cleaning tasks. The cleaning module may control functions of thecleaning head105, such as controlling thebrush motor140 and theside brush motor150 to change the speed of thebrush roller135 andside brush roller145, respectively. The cleaning module may also control a speed of theautonomous vacuum100 and speed of thesolvent pump120. The cleaning module may also control how theautonomous vacuum105 moves to clean up a mess and ingestwaste155 and move theautonomous vacuum105 to retrieve anywaste155 that may have moved during execution of the cleaning task.
Camera SystemFIG. 6 illustrates a block diagram of acamera system420, according to one embodiment. To improve accuracy of the visual-inertial data gathered by thesensor system175, thecamera system420 synchronizes a plurality of cameras via a common clock and anIMU550 via a common clock. In some embodiments, thecamera system420 includes more than the three cameras610 shown inFIG. 6. In other embodiments, thecamera system420 only includes two cameras610. The cameras610 are connected to a field programmable gate array620 (or FPGA). Amicrocontroller640 coordinates the setup and timing of the cameras610,FPGA620, and inertial measurement unit650. Thecamera system420 communicates with ahost660 via aUSB interface630 connected to theFPGA620. Thecamera system420 may gather visual-inertial data at set time steps, and, in some embodiments, may handle frame drops by dropping sampled visual-inertial data if thehost660 has not downloaded the visual-inertial data before thecamera system620 gathers new visual-inertial data at a new time. Thesensor system175 may use the visual-inertial data from thecamera system620 for localizing theautonomous vacuum100 in the environment based on the map.
In some embodiments, the camera system includes a photodiode for detecting lighting and LED lights around each camera610 for illuminating the environment. Because mapping is difficult in low light, thecamera system420 may illuminate the LED lights around one or more of the cameras610 based on where theautonomous vacuum100 is moving to improving the mapping capabilities.
In further embodiments, each camera610 includes a polarizing filter to remove excess light from shiny floors or glass in the environment. Each polarizing filter may be positioned to remove light in the horizontal direction or may be attached to a motor for rotating the polarizing filter to remove different directions of light. For this, thecamera system420 may include photodiodes for detecting light and use data from the photodiodes to determine rotations for each polarizing filter.
FIG. 7 illustrates a positioning of cameras610 on theautonomous vacuum100, according to one embodiment. In this embodiment, theautonomous vacuum100 includes afisheye camera700 on the top of theautonomous vacuum100 andstereo cameras710 on the front and back of theautonomous vacuum100. The fisheye camera may be used to detect the position of theautonomous vacuum100 in the environment based on localization using visual data describing the ceiling of the environment. Thestereo cameras710 may be used to gather visual data from in front of and behind theautonomous vacuum100. In some embodiments, thestereo cameras710 may also be used to detect the position of theautonomous vacuum100 in the environment based on key points determined by themapping module500. In other embodiments,autonomous vacuum100 may have more cameras610 on the sides, or may use different types of cameras than the ones shown in the figure.
PerceptionFIG. 8 illustrates levels of a map used by theautonomous vacuum100, according to one example embodiment. The levels include a long-term level800, anintermediate level810, and animmediate level820. Each level contains mappings of objects in the environment that are tagged830 with labels describing the objects. The long-term level800 contains objects that are static or do not move often in the environment, and in some embodiments, the long-term level includes walls in the environment. The intermediate level80 contains objects that change position within the environment for often. In some embodiments, themapping module500 determines a level for an object based on how much time has passed since the object moved. For example, objects that have not moved in 10 days or more may be mapped to the long-term level800, while other objects are mapped to the intermediate level. In this embodiment, theimmediate level820 only includes objects within a certain vicinity of theautonomous vacuum100 that are consistently dynamic, like living beings such as a person or pet, but in other embodiments, the immediate level includes any object within a certain vicinity of theautonomous vacuum100. This embodiment is further described in relation toFIG. 9.
FIG. 9 illustrates animmediate level820 of theautonomous vacuum100, according to one embodiment. In this embodiment, the only objects included in theimmediate level820 are within the field ofview900 of the cameras on the front and back of theautonomous vacuum100, such as “Person A,” “Chair B,” “Dog,” and “Table B.” Theautonomous vacuum100 analyzes the pixels from visual data in the field ofview900 to find mess pixels910 that do not match the expectations for the area of the environment. Based on these mess pixels910, theautonomous vacuum100 may determine that a mess exists and add a cleaning task to the task list to address the mess.
FIGS. 10A-10C illustrate cleaninghead105 positions, according to one embodiment. Theautonomous vacuum100 may position the cleaninghead105 according to a surface type of thesurface1000. Each surface type may be associated with a different height for thecleaning head105 to properly clean a mess on thatsurface1000. For example, the cleaninghead105 may need to be positioned exactly against carpet to clean it properly, while it should be just above wood to clean with wood without scratching the wood. In addition, carpet is thicker than wood, so the height may change depending on the thickness of thesurface1000. In the embodiment shown byFIGS. 10A-10C, thesurface1000 is a carpet composed ofcarpet strands1005.FIG. 10A illustrates the cleaninghead105 positioned too high above thesurface1000 for proper cleaning. In this position, the cleaninghead105 may not be able to contact the mess and could leavewaste155 behind after cleaning.FIG. 10B illustrates the cleaninghead105 positioned at the proper height for cleaning thesurface1000, andFIG. 10C illustrates the cleaninghead105 positioned too low on thesurface1000 for proper cleaning, which could result in theautonomous vacuum100 merely pushingwaste155 further into thesurface1000 rather than removing thewaste155 or becoming stuck due to high resistance to motion from the waste.
Example Waste BagTo account for all types of waste that theautonomous vacuum100 may encounter while cleaning,FIGS. 11A-11E illustrates waste bags (also referred to as a waste collection bag) that employ an absorbent for congealing liquid waste in the waste bag. The absorbent may be distributed in the waste bag in various ways to create a semi-solid when mixed with liquid waste. The absorbent may have a particle size larger than the pore of the waste bag such that the waste bag may still filter air out while retaining waste inside of the waste bag. In some embodiments, the absorbent is sodium polyacrylate, which has the ability to absorb 300-800 times its mass in water, depending upon its purity.
The waste bag may be composed of filtering material that is porous or nonporous. The waste bag may be placed in a cavity of theautonomous vacuum100, such as in thewaste volume350B or thewaste container200, which may include a hinged side that opens to access the cavity and waste bag. The waste bag may be removed and disposed of when fill of waste or may be cleaned out and reused. Further, in some embodiments, the waste bag may be replaced by a structured waste enclosure that is within or is the cavity of theautonomous vacuum100.
The waste bag may include the absorbent in various fashions to ensure that liquid waste is congealed inside of the waste bag, preventing tearing or other issues with the waste bag. In some embodiments, the absorbent is distributed throughout the waste bag. In other embodiments, the absorbent may be incorporated into the plies of the waste bag. The absorbent may be layered between nonwoven polypropylene and polyethylene, or any other flexible filtration materials used for the waste bag.
FIG. 11A illustrates awaste bag110 with a liquid-solid filter system, according to one embodiment. Aswaste155 from a mess enters thewaste bag110, a net1105 captures solid waste moving in the direction of gravity1115 while allowing liquid waste to fall through to the bottom of thewaste bag110 where the absorbent1100 is. The absorbent1100 may congeal with the liquid waste to form a semi solid so that thevacuum pump115 only pulls filteredair165 out from thewaste bag110 that is expelled asair exhaust170.
FIG. 11B illustrates awaste bag110 with porous and nonporous portions, according to one example embodiment.Waste155 falls to the bottom of the bag from upon entering theautonomous vacuum100. As the vacuum pump works to pull filteredair165 out of thewaste bag110 through the porous portion1115, the liquid waste can move to the porous portion1115 where the absorbent1100 is located while the solid waste is captured by the nonporous portion1110. The absorbent1100 may congeal with the liquid waste to form a semi solid so that thevacuum pump115 only pulls filtered air, and not the absorbent or the liquid waste, out from the porous portion1115 of thewaste bag110 and expels the filteredair165 asair exhaust170.
FIG. 11C illustrates awaste bag110 interlaced with absorbent strings, according to one example embodiment. The waste bag is composed of a porous membrane. In some embodiments, the absorbent1100 is made intostrings1120 that traverse thewaste bag110 from top to bottom. In other embodiments, thestrings1120 are cloth, paper, or any other flexible material and are coated with the absorbent1100. This coating may be one layer of absorbent1100 distributed across thestrings1120 or groupings of the absorbent1100 at various points on the strings, as depicted inFIG. 11C. Aswaste155 enters thewaste bag110, the waste intermingles with thestrings1120 such that the absorbent may interact with liquid waste to congeal as it moves through thewaste bag110. Thevacuum pump115 may pull out filteredair165 without removing the congealed liquid waste and expel the filteredair165 asair exhaust170.
FIG. 11D illustrates awaste bag110 with an absorbent dispensing system, according to one example embodiment. In this embodiment, amotor1125 expels absorbent1100 around afeed screw1130 into thewaste bag110 aswaste155 enters thewaste bag110. In some embodiments the motor may be attached to a processor that analyzes sensor data aboutwaste155 entering thewaste bag110 to determine how much absorbent to expel. Themotor1125 may be activated when theautonomous vacuum100 is cleaning or only when theautonomous vacuum100 detects liquid waste. In some embodiments, theautonomous vacuum100 detects the amount of liquid waste such that themotor1125 activates to express a specific amount ofabsorbent100 proportional to thewaste155. The liquid waste can then congeal with the absorbent1100 so only filteredair165 is pumped out of the waste bag by thevacuum pump115 intoair exhaust170.
FIG. 11E illustrates an enclosed sachet in awaste bag110, according to one embodiment. The waste bag is composed of a porous membrane. Thesachet1131 is composed of dissolvable material and filled with the absorbent1100. The exterior of thesachet1131 dissolves to expose the absorbent material when exposed to liquid. The absorbent material “captures” the liquid waste that enters thewaste bag110 and begins to form a congealed mass of the liquid waste that the absorbent contacts.
Thesachet1131 may be tethered or otherwise attached to a portion of thewaste bag110 from which material (e.g., liquid) enters (e.g., lower portion of the bag). Alternately, thesachet1131 may may sit in thewaste bag110 without being attached to thewaste bag110, and hence, may settle along a lower portion of the bag, which is where liquid may drop to as it initially enters the bag.
Aswaste155 enters thewaste bag110, thewaste155 intermingles with thesachet1131. If the waste includes liquid waste, thesachet1131 dissolves upon coming in contact with the liquid waste, which is absorbed by the absorbent1100 and turned into congealed liquid waste. Thevacuum pump115 may pull out filteredair165 without removing the congealed liquid waste and expel the filteredair165 asair exhaust170. In some embodiments, thewaste bag110 may include more than onesachet1131 attached to different sections of an inner portion of thewaste bag110. It is noted that once the absorbent material within the sachet is exposed, it may allow for continued congealing of liquid waste until a particular density or ratio threshold is reached between the chemical priorities of the absorbent and the liquid waste is reached at which point no further congealing may occur. Hence, the bag may allow for multiple periodic uses of picking up liquid waste before having to be discarded and thereafter replaced.
Example Waste Bag Enclosure (or Cavity)FIG. 11F illustrates aconical insert1130 for use with awaste bag110, according to one example embodiment. Theconical insert1130 includes abase ring1132 and three protruding arms1134a-c. Each arm is a rigid member (e.g., a hardened plastic or metal). A first end of the arm1134a-cconnects equidistance from each other along a circumference of thebase ring1132. A second end for each arm1134a-cis opposite the first end of each arm1134a-cand converges at atip1138. Thebase ring1132 may include one or more connection points1136a-c. An opening formed by the base ring optionally may be covered with a mesh (or screen) that may prevent certain particles from entering the air outlet. The connection points1136a-cmay be used to fasten to a surface such that thebase ring1132 is positioned around an opening of an air outlet of theautonomous vacuum100. Thetip1138 protrudes outward from the air outlet and the overall rigidity of theconical insert1130 prevents collapse of a malleable vacuum bag from blocking the air outlet.
FIG. 11G illustrates aconical insert1130 in awaste bag enclosure1140, according to one example embodiment. Thewaste bag enclosure1140 is the portion of theautonomous vacuum100 the waste bag is contained within and includes awaste inlet1135 from the cleaninghead105 thatwaste155 enters thewaste bag110 through and a filteredair outlet1145 that thevacuum pump115 pulls filteredair165 through. By placing the conical insert in front of the filteredair outlet1145, as shown inFIG. 11G where theconnection points1135a-cattach to a wall of the inside surface and thebase ring1132 surrounds the air outlet, theconical insert1130 rigidity keeps thewaste bag110, which is malleable, from being pulled into the filteredair outlet1145 while thevacuum pump115 is in operation. This allows thewaste bag110 to not clog the filtered air outlet11145 and fill up thewaste bag enclosure1140, maximizing the amount ofwaste155 thewaste bag110 can hold.
Though referred to as aconical insert1130 in this description, in other embodiments, theconical insert1130 may be cylindrically shaped, spherically shaped, or a combination of a cylinder and a sphere. Theconical insert1130 may be placed inside of theautonomous vacuum100 near thewaste bag110 to prevent the bag from becoming stuck in an outlet forfiltered air165 as thevacuum pump115 operates.
Charging ProcessFIG. 12 is a flowchart illustrating a charging process for theautonomous vacuum100, according to one example embodiment. While charging at thedocking station185, theautonomous vacuum100 receives1200 an indication that thebattery180 is charged. Theautonomous vacuum100 leaves1210 the docking station and automatically begins1220 performing cleaning tasks on the task list. In some embodiments, theautonomous vacuum100 may add more cleaning tasks to the task list as it detects messes or user interactions in the environment. In some embodiments, theautonomous vacuum100 may move around the environment to gather sensor data if the task list does not have any more cleaning tasks or may dock at the docking station for processing sensor data. If theautonomous vacuum100 receives1230 an indication that thebattery180 is low when theautonomous vacuum100 is not at the docking station, theautonomous vacuum100 adds and prioritizes1240 charging on the task list. Theautonomous vacuum100moves1250 to the docking station and docks at the docking station to charge thebattery1260 until receiving1200 an indication that the battery is charged.
ThoughFIG. 12 illustrates a number of interactions according to one embodiment, the precise interactions and/or order of interactions may vary in different embodiments. For example, in some embodiments, theautonomous vacuum100 may leave1210 the docking station once thebattery180 is charged enough to complete the cleaning tasks on the task list, rather than once thebattery180 is fully charged. Further, the docking station may be configured to use a handshake system with theautonomous vacuum100. In such a configuration, thedocking station185 may keep a key corresponding to a particularautonomous vacuum100, and theautonomous vacuum100 will keep a reciprocal key. Thedocking station185 may be configured to only charge anautonomous vacuum100 if it matches the reciprocal key. Further, thedocking station185 can track multipleautonomous vacuums100 where there may be more than one using a key system as described and/or a unique identifier tracker where a unique identifier for anautonomous vacuum100 is kept in a memory of thedocking station185. The key and/or unique identifier configurations can allow for tracking of autonomous vacuum activity that can be uploaded to the cloud (e.g., activity of cleaning and area cleaned for further analysis) and/or downloading of information (e.g., firmware or other instructions) from the cloud to theautonomous vacuum100.
Cleaning ProcessesFIG. 13 is a flowchart illustrating a cleaning process for the autonomous vacuum, according to one embodiment. In this embodiment, the cleaning process involves user speech input indicating a cleaning task for theautonomous vacuum100, but other cleaning processes may not involve user speech input. Theautonomous vacuum100 begins1300 the first cleaning task at the top of the task list. To begin1300 the cleaning task, theautonomous vacuum100 may navigate to the mess associated with the cleaning task or may ingestwaste155 orspray solvent160. Theautonomous vacuum100 receives1320 a first user speech input via real-time audio data from themicrophone430. In some embodiments, since the audio data may include ambient audio signals from the environment, theautonomous vacuum100 analyzes the audio data for a hotword that indicates that a user is speaking to theautonomous vacuum100. Theautonomous vacuum100 determines where the user who delivered the first user speech input is in the environment and moves1320 to the user.
Theautonomous vacuum100 receives a second user speech input describing a second cleaning task. In some embodiments, the second user speech input may indicate multiple cleaning tasks. In other embodiments, the user speech input is coupled with a gesture. The gesture may indicate some information about the second cleaning task, such as where the task is. Theautonomous vacuum100 prioritizes1340 the second cleaning task on the task list by moving the second cleaning task to the top of the task list and moving the first cleaning task down in the task list to below the second cleaning task. In some embodiments, if theautonomous vacuum100 receives a user speech input indicating multiple cleaning tasks, theautonomous vacuum100 may determine priorities for each of the cleaning tasks based on the mess types, surface types, and locations of the mess for the cleaning tasks in the environment. Theautonomous vacuum100 begins1350 the second cleaning task and, in response to finishing the second cleaning task, removes the second cleaning task from the task list and continues1370 with the first cleaning task. This process may repeat if theautonomous vacuum100 receives more user speech inputs.
ThoughFIG. 13 illustrates a number of interactions according to one example embodiment, the precise interactions and/or order of interactions may vary in different embodiments. For example, in some embodiments, theautonomous vacuum100 rotates to face the user rather than moving1320 to the user to receive1330 the second user speech input.
FIG. 14 illustrates abehavior tree1400 used to determine the behavior of theautonomous vacuum100, according to one example embodiment. Thebehavior tree1400 consists ofbranches1405 of nodes, tasks, and conditions. Thelogic module570 uses the behavior tree to generate instructions to control theautonomous vacuum100 to execute tasks within an environment, such as cleaning tasks or charging tasks. Thebehavior tree1400 takes synchronizedsensor data1420 as input from async node1410. Thesync node1415stores sensor data1420 from thesensor system175 for a time interval dictated by aclock node1405, which ticks at regular time intervals. With each tick, the sync node storesnew sensor data1415 taken as theclock node1405 ticks to be used as input to thebehavior tree1400.
Thebehavior tree1400 is encompassed in atree node1420. Thetree node1420 sendssensor data1415 from thesync node1410 to other nodes in thebehavior tree1400 from left to right in thebehavior tree1400. Thebehavior tree1400 also includes other nodes that dictate the flow of decisions through thebehavior tree1400. A sequence node1430 executesbranches1405 connected to the sequence node1430 from left to right until a branch fails (i.e., a task is not completed or a condition is not met). A fallback node1435 executesbranch1405 connected to the fallback node1435 from left-to right until a branch succeeds (i.e., a task is completed or a condition is met). Thelogic module570 cycles through thebranches1405 of thebehavior tree1400 until it reaches a charging task, which causes thelogic module570 to instruct theautonomous vacuum100 to move1470 to thedocking station185.
For a tick of theclick node1410 withsynchronized sensor data1420 from thesync node1415, thelogic module570 cycles through thebehavior tree1400. For example, starting atsequence node1430A, thelogic module570 moves down theleft-most branch1405 connected to thesequence node1430A since sequence nodes1430 indicate for thelogic module570 to executeconnected branches1405 until a branch fails. The left-most branch connected to sequencenode1430A isfallback node1435A. Fallback nodes1435 indicate for thelogic module570 to execute thebranches1405 connected to thefallback node1435A from left to right until aconnected branch1405 succeeds. At thefallback node1435A, thelogic module570 cycles between determining if a user is not interacting1440, which is a condition, and processing1445 the user interaction until one thebranches1405 succeeds (i.e., the user is not interaction with the autonomous vacuum100). Examples of user interactions include user speech input or a user's gestures.
Thelogic module570 moves to the next branch connected to sequencenode1430B, which indicates for theautonomous vacuum100 to run1450 the task scheduler. The task scheduler is internal to thelogic module570 and retrieves the next cleaning task in thetask list database550, along with a location in the environment, a cleaning task architecture, and cleaning settings. The task scheduler converts the cleaning task architecture, which lists the actions for theautonomous vacuum100 to take to remove the mess associated with the cleaning task, into a sub tree. For each new cleaning task, the task scheduler generates a new sub tree and inserts the sub tree into thebehavior tree1400.
Thelogic module570 moves tofallback node1435B and executes thebranches1405 fromfallback node1435B from left to right until abranch1405 connected tofallback node1435B succeeds. Theleft-most branch1405 is connected to sequencenode1430B, which executes itsconnected branches1405 from left to right until aconnected branch1405 fails. Thelogic module570 determines if there is a cleaning task on thetask list1450, as determined by the task scheduler. If not, thebranch1405 has failed since the condition of a cleaning task being on thetask list1450 was not met, and theautonomous vacuum100moves1470 to thedocking station185 to charge. In some embodiments, if the first task on the task list is a charging task, the branch fails so theautonomous vacuum100 can move1470 to thedocking station185 for charging.
If the task list has a cleaning task on it, thelogic module570 generates instructions for theautonomous vacuum100 to execute1455 the first cleaning task on the task list. In some embodiments, if theautonomous vacuum100 is not already located at the mess associated with the cleaning task,logic module570 generates instructions for theautonomous vacuum100 to move to the location of the mess. Thelogic module570 runs1460 the sub tree retrieved by the task scheduler to clean the mess and removes1465 the first cleaning task from the task list. Thelogic module570 repeats cycling through these branches stemming fromsequence node1430B until there are no more cleaning tasks on the task list. Thelogic module570 then generates instructions for theautonomous vacuum100 to move1470 to thedocking station185.
Once thelogic module570 has finished executing thebehavior tree1400, thelogic module570 receives astate1475 of theautonomous vacuum100. The state includes thesynchronized sensor data1420 used for executing thebehavior tree1400, as well as new sensor data collected as theautonomous vacuum100 performed the cleaning tasks. This new sensor data may include linear and angular velocities from the autonomous vacuum's100 movement as it completed the cleaning tasks and an angle relative to the direction of theautonomous vacuum100 before thebehavior tree1400 was executed. In some embodiments, thesynchronized sensor data1420 and the new sensor data are sent to a client device410 associated with theautonomous vacuum100, which may display graphs describing the movement and cleaning tasks completed by theautonomous vacuum100.
In some embodiments, thebehavior tree1400 includes more nodes and tasks than shown inFIG. 14. For example, in one embodiment, the behavior tree includes a branch before the last branch offallback node1435B that indicates for thelogic module570 to generate instructions for theautonomous vacuum100 to roam the environment to detect messes and map the environment.
FIG. 15 is a flowchart illustrating an example process for beginning a cleaning task based on a user speech input and gesture, according to one example embodiment. Theautonomous vacuum100 receives1500 a user speech input via themicrophone430 including a hotword. The hotword may be a word or phrase set by the user or may be a name attributed to theautonomous vacuum100, such as “Jarvis.” In embodiments with more than onemicrophone430, the autonomous vacuum determines the direction the user speech input came from by using beam-forming of themultiple microphones430 to compute the approximate location of the origin of the user speech input. Theautonomous vacuum100 then detects people in visual data from the fish-eye camera700 and uses the angle provided by beam-forming (assuming ±10-15° error in beam-forming) as the estimated range for the direction of the user speech input. In embodiments with multiple people in the estimated range, theautonomous vacuum100 can prompt users to instruct which person to give control of theautonomous vacuum100. Theautonomous vacuum100 then rotates1505 to face the user. In yet another embodiment, theautonomous vacuum100 analyzes the user speech input using voice print identification to determine if the voice print of the user speech input matches that of a fingerprint in thefingerprint database520. If a match exists in thefingerprint database520, theautonomous vacuum100 receives1525 an image input of visual data including the user. Theautonomous vacuum100 extracts out a face print from the image input and identifies1530 the user from the face print using face prints stored as fingerprints in thefingerprint database520. Once the user has been identified1530, theautonomous vacuum100 moves1535 to the user.
If a match was not found in thefingerprint database520, theautonomous vacuum100 receives1540 an image input of the user and extracts information from the image input such as body print, face print, and a representation of the clothing the person is wearing. Theautonomous vacuum100 uses this information, along with the voice print from the user speech input, to attempt to match the user to potential users1545 already stored in thefingerprint database520. If a matching fingerprint is identified, theautonomous vacuum100 stores the voice print and the face print as part of the fingerprint in thefingerprint database520 and moves1535 to the user. In some embodiments, theautonomous vacuum100 also stores the body print and representation of the clothing with the fingerprint. If no potential user1545 is found, theautonomous vacuum100 sends1555 a query to the user for clarification of who the user is. In some embodiments, theautonomous vacuum100 sends1555 the query through a client device410 associated with theautonomous vacuum100 and receives the clarification from a message from the client device410. In other embodiments, theautonomous vacuum100 outputs the query through an internal speaker in thesensor system175 and receives a user speech input for the clarification. Once clarified, theautonomous vacuum100 stores the voice print and the face print as part of the fingerprint in thefingerprint database520 and moves1535 to the user.
Theautonomous vacuum100 receives more visual data of the user and analyzes a gesture from the user with the user speech input to determine a cleaning task. For example, a user speech input of “Jarvis, clean up that mess” along with a gesture pointing to a location in the environment would indicate to theautonomous vacuum100 that there is a mess at that location. In some embodiments, if not indicated by the user speech input, theautonomous vacuum100 self-determines a mess type, surface type, and location of the mess and creates a cleaning task for the mess. Theautonomous vacuum100 adds the cleaning task to the top of the task list and begins1565 the cleaning task.
ThoughFIG. 15 illustrates a number of interactions according to one embodiment, the precise interactions and/or order of interactions may vary in different embodiments. For example, in some embodiments, theautonomous vacuum100 only receives1500 a user speech input and does not analyze1560 a gesture from theuser1560 to determine and begin1565 a cleaning task.
User Interfaces
Control of theautonomous vacuum100 may be affected through interfaces that include, for example, physical interface buttons on theautonomous vacuum100, a touch sensitive display on theautonomous vacuum100, and/or a user interface on a client device410 (e.g., a computing device such as a smartphone, tablet, laptop computer or desktop computer). Some or all of the components of an example client device410 are illustrated inFIGS. 16-19. Some or all of the components of the client device410 may be used to execute instructions corresponding to the processes described herein, including generating and rendering (or enabling rendering of) user interfaces to interact with theautonomous vacuum100.
Referring now toFIGS. 16-21, the figures illustrate example user interfaces and methods of using user interfaces presented via one or more client devices410 to instruct theautonomous vacuum100. A user may interact with the user interfaces via a client device410 to perform (or execute) particular tasks. Some of the tasks may be performed in conjunction with theautonomous vacuum100. For example, the user interface of the client device410 may render a view (actual image or virtual) of a physical environment, a route of theautonomous vacuum100 in the environment, obstacles in the environment, and messes encountered in the environment. A user may also interact with the user interfaces to direct theautonomous vacuum100 with cleaning tasks. Further examples of these are described herein.
Turning first toFIG. 16A, it illustrates an example user interface1600A that may be rendered (or enabled for rendering) on the client device410. The user interface1600A depicts a virtual rendering of theautonomous vacuum1605 scouting an environment, according to one example embodiment. In the example, theautonomous vacuum100 is represented by anautonomous vacuum icon1605 in the user interface1600A. When theautonomous vacuum100 is scouting (e.g., traversing the environment looking for messes), the user interface1600A may depict theautonomous vacuum1605 scouting in real-time in the rendering of the environment. In this example, the user interface1600A shows a virtual rendering. For ease of discussion, it will herein be referred to as a “rendering.” Here, the rendering in the user interface1600A displays mappings1610 of physical objects and images within the environment, as determined by themapping module500. In some embodiments, the user interface1600A displays objects mapped to different levels within the environment in different colors. For example, objects in the long-term level may be shown in gray, while objects in the immediate level may be displayed in red. In other embodiments, the user interface1600A only depicts the long-term level of the map of the environment. Further, the user interface1600A may display the rendering with texture mapping matching one or more floorings of the environment.
The user interface1600A displays (or enables for display, e.g., on a display screen apart from the client device410), in the rendering of the environment, ahistorical route1635 of where theautonomous vacuum100 traveled in the environment and a projectedroute1630 of where theautonomous vacuum100 is going within the environment. In some embodiments, the user interface1600A displays the movement of theautonomous vacuum100 in real-time. The user interface1600A shows that theautonomous vacuum100 is “scouting” in theactivity element1655 of theresource bar1665, which also displays statistics about the amount of power and water theautonomous vacuum100 has left and the amount of trash theautonomous vacuum100 has collected. The user interface1600A also displays acoverage bar1660 that indicates a percentage of the environment that theautonomous vacuum100 has covered in the current day.
It is noted that data corresponding to the user interface may be collected by the autonomous vacuum 100 via some or all of the components of the sensor system 175. This data may be collected in advance (e.g., as an initial set) and/or collected and updated as the autonomous vacuum 100 is in operation. That data may be transmitted directly to the client device 410 or to a cloud computing system for further processing. The further processing may include generating a map and a corresponding user interface, for example, as shown in FIG. 16A. If the data is processed in the cloud computing system, it may be provided (or enabled), e.g., transmitted, to the client device 410 for rendering.
Continuing with the user interface 1600A, it comprises a plurality of interactive elements, including a pause button 1615, a direct button 1620, a return button 1625, a floorplan button 1640, a mess button 1645, and a 3D button 1650. When the user interface 1600A receives an interaction with the pause button 1615, the autonomous vacuum 100 stops its current activity (e.g., scouting). The user interface 1600A may then receive an interaction command, e.g., via the direct button 1620, which directs the autonomous vacuum 100 to navigate to a location within the environment. Further, when the user interface 1600A receives an interaction with the return button 1625, the autonomous vacuum 100 navigates the environment to return to the docking station 185 and charge.
Interactions via the user interface 1600A with the floorplan button 1640, mess button 1645, and 3D button 1650 alter the rendering of the environment and the display of mappings 1610. For instance, receiving an interaction via the user interface 1600A with the floorplan button 1640 causes the user interface 1600A to display a rendering of the environment from a bird's-eye view, as shown in FIG. 16A.
Turning now to FIG. 16B, it illustrates an example user interface 1600B for display that depicts a 3D rendering of the environment, according to one embodiment. In this example embodiment, the rendering of the environment includes texture mapping 1670 matching the flooring of the environment. The texture data may be prestored, e.g., in a cloud computing system database. The texture data may augment the data collected by the autonomous vacuum 100 corresponding to the physical environment. For example, the sensor system 175 of the autonomous vacuum 100 may collect data on a hard floor surface. This data may be further processed, such as by the cloud computing system or the client device 410, to identify the type of hard floor surface (e.g., tile, hardwood, etc.). Once processed, texture data may be retrieved from a texture database for that hard floor surface to generate the rendering showing the texture.
Continuing with the example of FIG. 16B, the activity element 1655 of the user interface 1600B indicates that the autonomous vacuum 100 is patrolling the environment by moving around the environment and looking for cleaning tasks to complete. Further, the user interface 1600B received an interaction with the 3D button 1650, so the user interface 1600B displays a 3D rendering of the environment determined by the 3D module 310. For instance, in the 3D rendering, the mappings 1610 of objects, such as furniture and built-in features, are shown in 3D. The additional data on furniture may be obtained through processing of the sensor and/or image data collected by the autonomous vacuum 100 and combined with data from a database for generating the rendering with the furniture in the user interface.
Next, FIG. 16C illustrates a user interface 1600C for display on a screen depicting an obstacle icon 1675 in the rendering of the environment, according to one example embodiment. In this embodiment, the user interface 1600B received an interaction with the mess button 1645, so the user interface 1600C displays a rendering of the environment including obstacle icons 1675 representing locations of obstacles in the environment. In some embodiments, the rendering may further include mess areas detected as the autonomous vacuum 100 scouted the environment. A mess area is an area in the environment in which the autonomous vacuum 100 detected, via the sensor system 175, messes such as dirt, dust, and debris. The autonomous vacuum 100 may only register an area with a percentage of mess above a threshold level as a mess area. For example, if the autonomous vacuum 100 determines that an area is 1% covered in dust, the autonomous vacuum 100 may not label the area as a mess area, whereas the autonomous vacuum 100 may label an area that is 10% covered in dirt as a mess area to be displayed in the user interface 1600C.
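The registration rule described above amounts to comparing the estimated mess coverage of an area against a threshold. The following minimal sketch illustrates that check; the 5% threshold and the grid-cell coverage input are assumptions chosen only to sit between the 1% and 10% examples in the text.

```python
MESS_COVERAGE_THRESHOLD = 0.05  # assumed: at least 5% coverage registers a mess area

def is_mess_area(covered_cells: int, total_cells: int) -> bool:
    """Decide whether a scanned area should be registered (and displayed) as a mess area,
    given how many of its grid cells the sensor system flagged as dirty."""
    if total_cells == 0:
        return False
    coverage = covered_cells / total_cells
    return coverage >= MESS_COVERAGE_THRESHOLD

print(is_mess_area(covered_cells=1, total_cells=100))   # 1% dust  -> False, not registered
print(is_mess_area(covered_cells=10, total_cells=100))  # 10% dirt -> True, shown in the UI
```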
FIG. 17A illustrates a user interface 1700A for display on a screen that depicts locations of detected messes and obstacles in the environment, according to one example embodiment. In this embodiment, the user interface 1700A depicts both obstacle icons 1675 and mess areas 1705. When the user interface 1700A receives an interaction with an obstacle icon 1675, the user interface 1700B displays an obstacle image 1710 captured by the camera system 420 while the autonomous vacuum 100 was scouting, as shown in FIG. 17B. The user interface 1700B may depict multiple obstacle images 1710 of obstacles in the environment, ordered either chronologically as the autonomous vacuum 100 encountered them or by the size of the area each obstacle obstructs. Each obstacle image 1710 is associated with an environment map 1720 that depicts a location of the obstacle in the environment and an obstacle description 1730 describing what the obstacle is (e.g., "Charging cables") and the obstacle location (e.g., "Near sofa in living room"). In some embodiments, the user interface 1700B may further include an interactive element that, upon interaction, indicates to the autonomous vacuum 100 that the obstacle has been removed.
The user interface1700B also includes awaste toggle1735 and anobstacle toggle1740. When theobstacle toggle1740 is activated, like inFIG. 17B, the user interface1700B displays obstacle images1710 whereas when thewaste toggle1735 is activated, the user interface1700B displays images of waste inmess areas1705, such as trash, spills, dirt, dust, or debris.
FIG. 18A illustrates a user interface1800A for display on a screen depicting a route of theautonomous vacuum100 in the environment, according to one example embodiment. The route is divided into a scouting route1805 and acleaning route1810. The scouting route1805 depicts where theautonomous vacuum100 moved in the environment while scouting for messes, and thecleaning route1810 depicts where theautonomous vacuum100 moved as it cleaned (e.g., activated the vacuum pump115). Theautonomous vacuum100 may alternate between scouting and cleaning as it moves about the environment, as shown inFIG. 18A. The user interface1800A also includes atime scroll bar1815 that represents a time range of a current day. Upon receiving an interaction with thetime scroll bar1815 that sets aviewing time1820, the user interface1800A displays theautonomous vacuum icon1605 at a location in the rendering corresponding to the location of theautonomous vacuum100 in the environment at theviewing time1820. Further, thetime scroll bar1815 is interspersed with cleaning instances1825 that indicate time periods that theautonomous vacuum100 was cleaning in the environment.
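Placing the vacuum icon at the selected viewing time amounts to looking up the recorded pose at or just before the chosen time. The short sketch below illustrates one way to do that lookup under an assumed route log of (timestamp, x, y) samples; the log format and values are hypothetical.

```python
import bisect

# Hypothetical route log: (unix_timestamp, x, y) samples recorded while the vacuum moved.
route_log = [
    (1000, 0.0, 0.0),
    (1060, 1.2, 0.4),
    (1120, 2.5, 1.1),
    (1180, 2.5, 3.0),
]

def icon_position_at(viewing_time: float):
    """Return the most recent recorded (x, y) position at or before the viewing time,
    which is where the vacuum icon would be drawn for the time scroll bar selection."""
    timestamps = [t for t, _, _ in route_log]
    index = bisect.bisect_right(timestamps, viewing_time) - 1
    if index < 0:
        return route_log[0][1:]   # before the first sample: show the starting position
    return route_log[index][1:]

print(icon_position_at(1100))  # -> (1.2, 0.4)
```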
FIG. 18B illustrates a user interface 1800B for display on a screen that depicts detected clean areas 1835 in the environment, according to one example embodiment. In this embodiment, the user interface 1800B illustrates the detected clean areas 1835 in gray shading. Detected clean areas 1835 are areas in the environment that the autonomous vacuum 100 has traversed and determined, using the sensor system 175, to be clean (e.g., free of dirt, dust, debris, and stains). The user interface 1800B also illustrates uncharted areas 1840, which are areas in the environment that the autonomous vacuum 100 has not yet traversed or has not yet determined, using the sensor system 175, to be clean.
FIG. 19A illustrates aninteraction1910A with a user interface1900A with thedirect button1620 for display on a screen, according to one example embodiment. Theinteraction1910A is represented by a gray-shaded circle on thedirect button1620. A user interacting with the user interface1900A may interact with thedirect button1620 and select alocation1915 in the rendering corresponding to a location in the environment for theautonomous vacuum100 to travel to and clean, as shown inFIG. 19B. In some embodiments, instead of interacting with thelocation1915, a user may select, via the user interface1900B, a mess area for theautonomous vacuum100 to travel to and clean. Once alocation1915 in the environment has been selected via the user interface1900B, the user interface1900B may depict a projectedroute1630 corresponding to a path in the environment that theautonomous vacuum100 will take to reach the location. Upon receiving an interaction with thesend button1920 via the user interface1900B, theautonomous vacuum100 travels to the location.
Further interactions with the user interface 1900 may cause the autonomous vacuum 100 to travel through the environment to specific locations. For example, as shown in FIG. 19C, upon receiving an interaction with a waste bin icon 1925 via the user interface 1900C, which represents the location of the waste bin in the environment, the autonomous vacuum 100 may travel to the waste bin for emptying. In another example, shown in FIG. 19D, an interaction may indicate a selected area 1930D for the autonomous vacuum 100 to clean. The selected area may be "free drawn" by a user via the user interface 1900D (e.g., the user may select an area by circling or otherwise outlining an area within the rendering). After the user interface 1900D has received the interaction 1910D, an interaction with the send button 1920 sends the autonomous vacuum 100 to the area in the environment corresponding to the selected area 1930D, and an interaction with the clean button 1935 sends the autonomous vacuum 100 to the area in the environment corresponding to the selected area 1930D to clean the area. An interaction with the cancel button 1940 cancels the interaction 1910D.
In some embodiments, the user interface1900E may display on a screen the rendering of the environment withroom overlays1945, as shown inFIG. 19E. In this embodiment, themapping module500 may determine locations of typical rooms (e.g., kitchen, living room, etc.) based on barriers within the environment and label the rendering in the user interface1900E with room overlays indicating which areas correspond to typical rooms. Alternatively, a user may input the room overlays for the rendering via the user interface1900E. A user may interact with the user interface1900E to pick a selectedarea1930B for theautonomous vacuum100 to clean.
FIG. 20A illustrates a user interface 2000A for display on a screen depicting instructions for giving the autonomous vacuum 100 voice commands, according to one example embodiment. As indicated in the user interface 2000A, a user may speak voice commands to the autonomous vacuum 100. For example, a user may direct a voice command in the direction of the autonomous vacuum 100 stating "Go to the waste bin," and the autonomous vacuum 100 will, in response, traverse the environment to travel to the waste bin. In another example, a user may direct the autonomous vacuum 100 with a command, e.g., "Come to me," and if the autonomous vacuum 100 does not detect the user in visual data or directional audio data, the autonomous vacuum 100 may navigate to a location of a client device displaying the user interface (e.g., the approximate location of the user) or may use beam-forming with one or more microphones 430 to determine a location of the user to navigate to. In some embodiments, the user may also give visual commands to the autonomous vacuum 100, such as pointing to a mess, or may enter commands via the user interface.
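The fallback behavior described for a "Come to me" command can be expressed as a priority chain over whatever localization sources are available. The sketch below assumes a particular preference order (visual detection, then directional audio, then the client device location) purely for illustration; the disclosure does not fix this ordering, and the helper shapes are hypothetical.

```python
from typing import Optional, Tuple

Location = Tuple[float, float]

def locate_user(visual_fix: Optional[Location],
                audio_fix: Optional[Location],
                client_device_fix: Optional[Location]) -> Optional[Location]:
    """Pick a navigation target for a "Come to me" command.

    Assumed preference order: a person detected in visual data, then a beam-formed
    estimate from directional audio, then the location of the client device
    showing the user interface.
    """
    for fix in (visual_fix, audio_fix, client_device_fix):
        if fix is not None:
            return fix
    return None  # no estimate available; the vacuum could ask for clarification

# Example: no camera or audio fix, so the vacuum falls back to the phone's location.
target = locate_user(visual_fix=None, audio_fix=None, client_device_fix=(3.5, 1.0))
print(target)  # -> (3.5, 1.0)
```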
FIG. 20B illustrates a user interface 2000B for display on a screen depicting instructions for setting the waste bin icon 1925 in the rendering, according to one example embodiment. A user may interact with the user interface 2000B to move the waste bin icon 1925 in the rendering to a location corresponding to the location of the waste bin in the environment. The autonomous vacuum 100 may move to the location corresponding to the waste bin when the waste bag 110 is full, such that a user may efficiently empty the waste bag 110.
FIG. 20C illustrates a user interface 2000C for display on a screen depicting instructions for adjusting a cleaning schedule of an autonomous vacuum, according to one example embodiment. In this embodiment, the user interface 2000C displays instructions describing how a user may set a cleaning schedule for the autonomous vacuum 100 via the user interface 2000C, such that the autonomous vacuum 100 may continuously scout the environment, clean after cooking has occurred in the environment, or clean only when directly instructed. In other embodiments, a user may select specific cleaning times via the user interface 2000C.
FIG. 21 is a flowchart illustrating an example process for rendering a user interface for display on a screen according to one example embodiment. The process for rendering corresponds to anautonomous vacuum100 traversing a physical environment. In some embodiments, theautonomous vacuum100 may transmit sensor data (which may include some or all of the data from thesensor system175 components) to the client device410 and/or a cloud computing system, which further processes the received data to enable the user interface for display on the client device410. Enabling may include generating data and/or instructions that are provided to the client device410 such that the client device410 may process the received data and/or instructions to render the user interface on a screen using the information within. The user interface comprises a virtual rendering of the physical environment, and the virtual rendering includes a current location of theautonomous vacuum100 in the physical environment. The user interface is described in detail inFIGS. 16-20.
Theprocessor470 receives2110 real-time data describing the physical environment from thesensor system175. The data may be used to enable 2120, for display on the client device410, an updated rendering of the user interface depicting entities indicative of activities and messes in the environment. The entities may include a mess in the environment at a first location, as specified by the real-time data, a portion of ahistorical route1635 of theautonomous vacuum100, an area of the physical environment detected as clean by thesensor system175, and/or an obstacle in the environment at a second location.
Theprocessor470 receives2130, from the client device410, an interaction with the user interface rendered for display on the client device410. The interaction may correspond to an action for theautonomous vacuum100 to take relative to the physical environment, such as cleaning, scouting, or moving to a location. Examples of interactions include selecting thedirect button1620, scrolling thetime scroll bar1815, or toggling theobstacle toggle1740. Theprocessor470 generates 2140 instructions for theautonomous vacuum100 to traverse the physical environment based on the interaction.
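The flow of FIG. 21 can be summarized as a loop that pushes sensor-derived state out for rendering and turns incoming interactions into instructions for the vacuum. The outline below is a schematic sketch under assumed message shapes (plain dictionaries); it is not the processor 470 implementation, and the field names are illustrative.

```python
from typing import Any, Dict

def enable_rendering(realtime_data: Dict[str, Any]) -> Dict[str, Any]:
    """Step 2120: package real-time sensor data into a payload the client device
    can use to update the rendering (vacuum pose, messes, obstacles, clean areas)."""
    return {
        "vacuum_pose": realtime_data.get("pose"),
        "messes": realtime_data.get("messes", []),
        "obstacles": realtime_data.get("obstacles", []),
        "clean_areas": realtime_data.get("clean_areas", []),
        "historical_route": realtime_data.get("route", []),
    }

def handle_interaction(interaction: Dict[str, Any]) -> Dict[str, Any]:
    """Steps 2130-2140: translate a user-interface interaction into an instruction
    for the vacuum to act on relative to the physical environment."""
    kind = interaction.get("type")
    if kind == "direct":         # direct button plus a selected location
        return {"action": "navigate", "target": interaction["location"]}
    if kind == "clean_area":     # free-drawn area plus the clean button
        return {"action": "clean", "area": interaction["area"]}
    if kind == "return":         # return button
        return {"action": "dock"}
    return {"action": "noop"}

# Example round trip: render an update, then act on a "direct" interaction.
payload = enable_rendering({"pose": (2.0, 1.0), "messes": [{"loc": (3.0, 4.0)}]})
instruction = handle_interaction({"type": "direct", "location": (3.0, 4.0)})
print(payload["vacuum_pose"], instruction)
```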
Example Mop Roller
FIG. 22 illustrates a mop roller 385, according to one example embodiment. The mop roller 385 may be located in the cleaning head 105 of the autonomous vacuum 100. The mop roller 385 is a cylindrical structure and may have diagonal strips of alternating microfiber cloth 2220 (or other absorbent material) and abrasive (or scrubbing) material 2210 attached around the outer surface of the cylindrical structure (e.g., in a diagonal configuration). Collectively, the microfiber cloth 2220 and the abrasive material 2210 may be referred to as the mop pad 2200. In other embodiments, the mop pad may comprise only absorbent material. The mop pad may be a unitary constructed piece that attaches to a cylindrical roller (not shown) that is the cleaning head 105. It is noted that the cleaning head 105 may be a cylindrical structure that is rotatable by the autonomous vacuum 100. The mop pad also may be a unitary constructed piece that is removably attached to the cleaning head 105.
Themicrofiber cloth2220 absorbs liquid and may be used to scrub surfaces to remove messes such as dirt and debris. The abrasive material2210 is unable to retain water but may be used to effectively scrub tough stains, due to its resistance to deformation. The abrasive material2210 may be scouring pads or nylon bristles. Together, themicrofiber cloth2220 and abrasive material2210 allow the mop roller to both absorb liquid mess and effectively scrub stains.
Themop roller385 uses the mop pad to scrub surfaces to remove messes and stains. Themop roller385 may be able to remove “light” messes (e.g. particulate matter such as loose dirt) by having theautonomous vacuum100 pass over the light stain once, whereas the mop roller may need to pass over “tough” messes (e.g., stains that are difficult to clean such as coffee or ketchup spills) multiple times. In some embodiments, theautonomous vacuum100 may leverage thesensor system175 to determine how many times to pass themop roller385 over a mess to remove it.
Theautonomous vacuum100 uses contact between themop roller385 and the floor of the environment to effectively clean the floor. In particular, theautonomous vacuum100 may create high friction contact between themop roller385 and surface to fully remove a mess, which may require a threshold pressure exerted by themop roller385 to achieve. To ensure that themop roller385 exerts at least the threshold pressure when cleaning, themop roller385 may be housed in a heavy mopping system mounted to theautonomous vacuum100 via a suspension system that allows a vertical degree of freedom. This mounting results in rotational variance of the mopping system, which may affect the cleaning efficacy and water uptake of theautonomous vacuum100 when mopping. For instance, water uptake of theautonomous vacuum100 is low when there is high compression in the mop pad, causing water to squeeze out of the mop pad. Furthermore, high friction between the mop pad and the floor improves cleaning efficacy.
The rotational variance of the mopping system described herein results in a plurality of effects. For example, when theautonomous vacuum100 tilts such that the mopping system is lifted, mopping results in low cleaning efficacy but high water uptake. In another example, when theautonomous vacuum100 tilts such that the mopping system is pushed into the ground, mopping results in high cleaning efficacy but low water uptake. In some embodiments, to leverage these effects, theautonomous vacuum100 may lift the mopping system when moving forward and push the mopping system into the floor when moving backwards. Thus, theautonomous vacuum100 may move forward to clean light messes and move backwards to clean tough messes, followed by moving forward to remove excess liquid from the floor.
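The forward/backward behavior just described reduces to a small decision rule: drive forward (mopping system lifted, higher water uptake) for light messes and for recovering residual liquid, and drive backward (mopping system pressed into the floor, higher scrubbing efficacy) for tough messes. The sketch below encodes that rule; the mess labels and the specific pass sequence are illustrative assumptions rather than the disclosed control logic.

```python
from typing import List

def mopping_passes(mess_type: str) -> List[str]:
    """Return an assumed sequence of drive directions for the mopping system.

    "forward"  -> mopping system lifted: lower scrubbing efficacy, higher water uptake.
    "backward" -> mopping system pressed into the floor: higher scrubbing efficacy,
                  lower water uptake.
    """
    if mess_type == "light":
        return ["forward"]                          # one pass picks up loose dirt and liquid
    if mess_type == "tough":
        return ["backward", "backward", "forward"]  # scrub the stain, then recover the water
    raise ValueError(f"unknown mess type: {mess_type}")

print(mopping_passes("light"))   # ['forward']
print(mopping_passes("tough"))   # ['backward', 'backward', 'forward']
```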
FIG. 23A illustrates operation of amop roller385 according to one example embodiment. In this figure, themop roller385 is being wrung as the cleaninghead105 rotates, for example, along a surface. As theautonomous vacuum100 moves around an environment, theautonomous vacuum100 presses themop pad2200 of themop roller385 to the ground2350 (or surface or floor) to pick up dust and dirt such that themop pad2200 is in contact with theground2350 along the length of the cylindrical structure. Themop pad2200 rotates on an axis parallel to theground2350 and perpendicular to a direction of motion of theautonomous vacuum100, and the autonomous vacuum may release water through awater inlet2340 onto themop pad2200 for cleaning. Thewater2320 acts as a solvent for dirt and stains on theground2350.
When cleaning with water 2320 or another liquid, the mop pad 2200 will eventually reach a saturation point at which it only spreads dirt and dust around rather than cleaning, which may require user intervention. To combat this effect, the autonomous vacuum 100 may self-wring the mop pad 2200 of the mop roller 385. The mop roller 385 is enclosed in a mop housing 2300 with a flat wringer 2310. The flat wringer 2310 is a substantially planar plate that sits perpendicular to the radius of the mop roller 385. The planar plate may be smooth or textured. The flat wringer 2310 interferes with the mop pad 2200 in that it creates a friction surface relative to the mop pad 2200. While abutted against the mop roller 385, as shown in FIG. 23A, the flat wringer 2310 extends slightly out from its contact point with the mop roller 385 to prevent the mop roller 385 from catching on the flat wringer 2310. Further, the flat wringer 2310 requires less torque to wring the mop pad 2200 than a wringer that is triangular or rectangular, and the mop roller 385 can rotate in either direction to wring the mop pad 2200, giving the autonomous vacuum 100 more flexibility for wringing. This structural configuration wrings water 2320 or other liquids from the mop pad 2200 as the mop roller 385 rotates.
The flat wringer 2310 includes a water inlet 2340 that allows water 2320 to flow through the center of the flat wringer 2310 and exit onto a compressed portion of the mop pad 2200. In some embodiments, the water inlet 2340 may expel other liquids, such as cleaning solutions, onto the mop pad 2200. The flat wringer 2310 is positioned to exert pressure on the mop pad 2200 of the mop roller 385 such that when the mop roller 385 spins against the flat wringer 2310, water 2320 and dissolved dirt captured by the mop pad 2200 are wrung from the mop pad 2200 and sucked into air outlets 2330 on either side of the flat wringer 2310. The air outlets 2330 may be connected to the vacuum pump 115, which draws air and liquids through the air outlets 2330. The positioning of the air outlets 2330 on either side of the flat wringer 2310 allows the air outlets 2330 to capture water 2320 expelled from the mop pad 2200 regardless of the direction the mop roller 385 is spinning. Wringing the mop pad 2200 with this combination of flat wringer 2310, air outlets 2330, and water inlet 2340 keeps the mop pad 2200 clean of dirt and dust and extends the amount of time between necessary user cleanings of the mop pad 2200.
FIG. 23B shows thecleaning head105 of theautonomous vacuum100 including themop roller385, according to one embodiment. The cleaning head comprises anenclosure2355 that houses thebrush roller135 and themop roller385. Theenclosure2355 comprises a first interior opposite a second interior, a front interior opposite a back interior, and a top interior opposite theground2350 connecting the front interior, back interior, first interior, and second interior to form acavity2375. Theenclosure2355 further comprises one or more openings. In some embodiments, the one or more openings may include abrush opening2360 partially opposite the top interior and adjacent to the front interior, amop opening2365 opposite the top interior and at a back portion of the enclosure, and one ormore outlets2370 on the back interior. Theoutlets2370 may connect thecavity2375 to the solvent pump120 (or solvent volume340), an inlet to the waste bag110 (orwaste container200 or waste volume350), and/or thevacuum pump115.
Thebrush roller135 sits at the front side of the enclosure2355 (e.g., adjacent to the front interior) such that a first portion of the brush roller is exposed to thecavity2375 while a second portion of thebrush roller135 is externally exposed at thebrush opening2360, allowing thebrush roller135 to makesweeping contact2380 with theground2350. Themop roller385 sits behind thebrush roller135 in theenclosure2355 adjacent to the back interior and below thecavity2375. A lower portion of themop roller385 is externally exposed at themop opening2365 such that themop roller385 may makemopping contact2385 with theground2350. A first end and second end of thebrush roller135 connect to the first interior and the second interior, respectively, of theenclosure2355. A first end and second end of themop roller385 connect to the first interior and second interior, respectively, of theenclosure2355. The connections between thebrush roller135 and moproller385 allow thebrush roller135 and moproller385 to move in parallel with theenclosure2355 when the actuator moves theenclosure2355 vertically and/or tilts theenclosure2355 forwards/backwards.
The actuator of theactuator assembly125 connects at the back of thecleaning head105 to one or more four-bar linkages such that the actuator can control vertical and rotational movement of the cleaning head105 (e.g., theenclosure2355 and its contents, including the cleaning rollers) by moving the one or more four-bar linkages. In particular, a motor of the actuator may be mounted on the autonomous vacuum100 (e.g., the base or a component within the base) and a shaft of the actuator may be connected to thecleaning head105 or a translating end of the one or more four-bar linkages. The cleaning head may be screwed to the one or more four-bar linkages that connects the cleaninghead105 to the base360 of theautonomous vacuum100, allowing the cleaning head to be removed and replaced if thebrush roller135,mop roller385, or any other component of thecleaning head105 needs to be replaced over time. Further, the controller of theactuator assembly125 connects to each of the first end and the second end of each of thebrush roller135 and moproller385 to control rotation when addressing cleaning tasks in the environment. For example, when theautonomous vacuum100 moves to a mess that requires cleaning by themop roller385, the controller may activate the motor that causes themop roller385 to rotate. In another example, when theautonomous vacuum100 moves to a mess that requires cleaning by thebrush roller135, the controller may deactivate the motor that causes rotation of themop roller385 and activate the motor that causes thebrush roller135 to rotate. The controller may also attach the ends of the cleaning rollers to theenclosure2355.
FIGS. 23C-D illustrate an example selection flap 2390 of the cleaning head 105. The selection flap 2390 is an elongated piece of material that is hinged at a top portion of the cavity 2375. The selection flap 2390 may move to alter the size of the cavity 2375. In particular, to clean different mess types efficiently, the brush roller 135 and the mop roller 385 need the cavity 2375 to be sized differently, which may be accomplished with the selection flap 2390. To clean liquid messes with the mop roller 385, the cavity 2375 needs to be smaller to allow liquid waste to move through quickly, which is difficult when the cavity 2375 is large. Thus, the selection flap 2390 may be placed in a downward position to clean such messes, shown in FIG. 23D, which decreases the size of the cavity 2375. In the downward position, the selection flap 2390 extends over a portion of the brush roller 135 to reduce the size of the cavity 2375. Alternatively, to clean messes with the brush roller 135, the cavity 2375 needs to have a high clearance to capture waste that is large in size (e.g., popcorn, almonds, pebbles, etc.). To accomplish this, the selection flap 2390 may be placed in an upward position, where the selection flap 2390 extends over a top portion of the cavity 2375. When the selection flap 2390 is in the upward position, the cavity 2375 is large enough for such waste to pass through on its way to the waste bag 110.
The autonomous vacuum 100 may use rotation of the brush roller 135 to move the selection flap 2390 between the upward position and the downward position. The selection flap 2390 may be placed in the downward position by rotating the brush roller 135 backward (e.g., clockwise in FIG. 23C), which uses nominal interference between bristles on the brush roller 135 and the selection flap 2390. The autonomous vacuum 100 may use the same nominal interference to place the selection flap 2390 in the upward position, shown in FIG. 23D, by rotating the brush roller 135 forward (e.g., counterclockwise in FIG. 23D). Thus, when the autonomous vacuum 100 detects messes in the environment, the autonomous vacuum 100 may use rotation of the brush roller 135 to control placement of the selection flap 2390 to optimize the size of the cavity 2375.
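The flap placement described above reduces to choosing a brush-roller rotation direction from the detected mess type. A minimal sketch of that mapping follows; the mess-type labels and direction names are assumptions chosen to mirror the convention described in the text.

```python
def selection_flap_command(mess_type: str) -> dict:
    """Map a detected mess type to a brush roller rotation that sets the selection flap.

    Assumed convention: rotating the brush roller backward nudges the flap downward
    (small cavity for liquid messes handled by the mop roller); rotating it forward
    nudges the flap upward (high-clearance cavity for large solid debris).
    """
    if mess_type in ("liquid", "stain"):
        return {"brush_rotation": "backward", "flap_position": "down"}
    if mess_type in ("debris", "large_solid"):
        return {"brush_rotation": "forward", "flap_position": "up"}
    return {"brush_rotation": "none", "flap_position": "unchanged"}

print(selection_flap_command("liquid"))       # flap down: small cavity for quick liquid flow
print(selection_flap_command("large_solid"))  # flap up: clearance for popcorn, pebbles, etc.
```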
FIGS. 23E-F show a mop cover2395 of thecleaning head105. Themop roller385 may be covered or uncovered by a mop cover2395. The mop cover2395 is a partial cylindrical shell rotatably positioned around an outer surface of themop roller385. Themop roller385 may move the mop cover2395 by rotating to cover or uncover themop roller385 with the mop cover2395. For instance, if themop roller385 rotates forward (i.e., counterclockwise inFIG. 23E), the mop cover2395 will uncover themop roller385 and end up in the position shown inFIG. 23E. When uncovered, themop roller385 may still receive water via thewater inlet2340 and may be in contact with theground2350. Alternatively, if themop roller385 rotates backward (i.e., clockwise inFIG. 23F), the mop cover2395 will cover a portion of the mop roller that was externally exposed and end up in the position shown inFIG. 23F. This shields themop roller385 from theground2350, and the mop cover2395 is configured to stay engaged as theautonomous vacuum100 moves over obstacles and one or more surface types.
Theactuator assembly125 may use the controller to control rotation of themop roller385 to cover/uncover themop roller385 with the mop cover2395 based on the environment around theautonomous vacuum100. For example, if theautonomous vacuum100 is about to move over carpet (or another surface type that themop roller385 should not be used on), theactuator assembly125 may rotate themop roller385 to cover themop roller385 with the mop cover2395. Theactuator assembly125 may also cover themop roller385 when theautonomous vacuum100 requires more mobility to move through the environment, such as when moving over an obstacle.
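Covering or uncovering the mop roller is likewise a function of the upcoming surface type and of mobility needs. The sketch below captures that rule; the surface names, the "needs_mobility" flag, and the rotation-to-cover convention are assumptions for illustration only.

```python
NO_MOP_SURFACES = {"carpet", "rug"}  # assumed surface types the mop roller should not touch

def mop_cover_command(upcoming_surface: str, needs_mobility: bool = False) -> str:
    """Decide the mop roller rotation that engages or retracts the mop cover 2395.

    Assumed convention: rotating the mop roller backward engages the cover
    (e.g., before carpet or when climbing an obstacle); rotating it forward
    uncovers the mop roller for mopping hard floors.
    """
    if upcoming_surface in NO_MOP_SURFACES or needs_mobility:
        return "rotate_backward_to_cover"
    return "rotate_forward_to_uncover"

print(mop_cover_command("carpet"))                     # cover before carpet
print(mop_cover_command("tile"))                       # uncover to mop tile
print(mop_cover_command("tile", needs_mobility=True))  # cover to climb an obstacle
```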
FIG. 24A illustrates the mop roller 385 rotating counterclockwise as the autonomous vacuum 100 moves forward, according to one example embodiment. As shown in FIG. 24A, when the mop roller 385 has a rotational velocity 2410A with a direction of rotation 2400A opposite the autonomous vacuum velocity 2420A, the cleaning effectiveness of the mop roller 385 is decreased. In particular, in this embodiment, the relative contact velocity 2430A of the mop roller 385 with the ground 2350 is reduced due to the opposing directions of the rotational velocity 2410A of the mop roller 385 and the autonomous vacuum velocity 2420A. This decreases the cleaning effectiveness of the mop roller 385 by reducing its scrubbing ability on the ground 2350 but increases the ability of the mop roller 385 to pick up water 2320, as seen in the water beading 2440 that forms at the front of the mop roller 385 such that the mop roller 385 is always moving toward the bead 2440.
FIG. 24B illustrates the mop roller 385 rotating counterclockwise as the autonomous vacuum 100 moves backward, according to one example embodiment. In this embodiment, the direction of the rotational velocity 2410B of the mop roller 385 and the direction of the autonomous vacuum velocity 2420B are the same, resulting in a greater relative contact velocity 2430B than that shown in FIG. 24A. The greater relative contact velocity 2430B increases the cleaning effectiveness of the mop roller 385 by increasing its scrubbing ability on the ground 2350 but decreases its ability to pick up water 2320, as shown by the water pool 2450 that forms at the front of the mop roller 385 such that the mop roller 385 is always moving away from the pool.
The embodiments shown in FIGS. 24A and 24B may be used sequentially to effectively clean an environment. To remove dirt and dust from the ground 2350, the autonomous vacuum 100 may employ the embodiment illustrated in FIG. 24A, where the mop roller 385 rotates in the direction opposite to the movement of the autonomous vacuum 100. This embodiment optimizes water uptake over cleaning effectiveness, which is sufficient for cleaning loose dirt and dust. To clean a stain, the autonomous vacuum 100 may employ the embodiment illustrated in FIG. 24B to increase the scrubbing ability of the mop roller 385 (i.e., by increasing the relative contact velocity of the mop roller 385). The autonomous vacuum 100 may then switch back to the embodiment of FIG. 24A to pick up water 2320 from the water pool 2450 formed while scrubbing the stain.
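The relative contact velocity that governs scrubbing can be approximated as the difference between the surface speed of the rotating mop pad and the velocity of the vacuum over the floor. The sketch below applies that kinematic relation with a sign convention chosen for illustration (positive is forward, and positive roller rotation is the sense that would roll the roller forward); the radius and speed values are assumptions, not figures from the disclosure.

```python
def relative_contact_speed(robot_velocity_mps: float,
                           roller_omega_rad_s: float,
                           roller_radius_m: float) -> float:
    """Speed of the mop pad surface relative to the ground at the contact point.

    With the assumed sign convention, pure rolling (like a wheel) gives zero
    relative speed, so matching directions reduce scrubbing and opposing
    directions increase it.
    """
    pad_surface_speed = roller_omega_rad_s * roller_radius_m
    return abs(robot_velocity_mps - pad_surface_speed)

RADIUS = 0.03   # assumed 3 cm mop roller radius
OMEGA = 10.0    # assumed roller angular speed in rad/s -> 0.3 m/s pad surface speed

# FIG. 24A-like case: rolling sense matches forward motion -> lower scrubbing speed.
print(relative_contact_speed(+0.2, OMEGA, RADIUS))  # |0.2 - 0.3| = 0.1 m/s

# FIG. 24B-like case: vacuum reverses while the roller keeps spinning -> higher scrubbing speed.
print(relative_contact_speed(-0.2, OMEGA, RADIUS))  # |-0.2 - 0.3| = 0.5 m/s
```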
Further, the abilities of themop roller385 illustrated with respect toFIGS. 24A-24B may be applied by theautonomous vacuum100 to clean themop pad2200 of themop roller385. In particular, theautonomous vacuum100 may rotate themop roller385 forward constantly for an interval of time to clean themop pad2200 by removing dirty water via theair outlets2330. In addition, by keeping themop roller385 constantly rotating, themop pad2200 of themop roller385 may be uniformly exposed to dirt and other messes.
FIG. 25 illustrates a mop roller 385 over a docking station, according to one example embodiment. After the mop roller 385 has been in use for cleaning an environment, the mop pad 2200 may remain damp for several hours due to a lack of airflow within the mop housing 2300. To accelerate drying of the mop pad 2200, the autonomous vacuum 100 may return to the docking station 185, which includes a heating element 2500 that generates hot air to dry the mop pad 2200. The heating element 2500 sits next to an air vent 2520 positioned in the side of the docking station 185 to allow air to flow through an opening 2510. When docked at the docking station 185, the autonomous vacuum 100 rests the mop roller 385 over the opening 2510 in the docking station 185 and pulls air at a low speed through the opening 2510 using the vacuum pump 115. The air of this airflow 2530 heats up by moving over the heating element 2500 before rising through the opening 2510 toward the mop pad 2200 for drying. By combining continuous airflow 2530 and heat, the mop pad 2200 can be dried quickly, decreasing the potential for bacterial growth.
FIG. 26 illustrates a flat wringer 2310 for the mop roller 385, according to one example embodiment. In this embodiment, the flat wringer 2310 is positioned between two rows of air outlets 2330 and includes multiple water inlets 2340 positioned along the middle of the flat wringer 2310. In other embodiments, the flat wringer 2310 may include fewer water inlets 2340 and may be shaped differently, such as to conform to the curve of the mop roller 385.
Computer Architecture
FIG. 27 is a high-level block diagram illustrating physical components of a computer 2700 used as part or all of the client device 410 from FIG. 4, according to one embodiment. Illustrated are at least one processor 2702 coupled to a chipset 2704. Also coupled to the chipset 2704 are a memory 2706, a storage device 2708, a graphics adapter 2712, and a network adapter 2716. A display 2718 is coupled to the graphics adapter 2712. In one embodiment, the functionality of the chipset 2704 is provided by a memory controller hub 2720 and an I/O controller hub 2722. In another embodiment, the memory 2706 is coupled directly to the processor 2702 instead of the chipset 2704.
Thestorage device2708 is any non-transitory computer-readable storage medium, such as a hard drive, compact disk read-only memory (CD-ROM), DVD, or a solid-state memory device. Thememory2706 holds instructions and data used by theprocessor2702. Thegraphics adapter2712 displays images and other information on thedisplay2718. Thenetwork adapter2716 couples thecomputer2700 to a local or wide area network.
As is known in the art, acomputer2700 can have different and/or other components than those shown inFIG. 27. In addition, thecomputer2700 can lack certain illustrated components. In one embodiment, acomputer2700 acting as a server may lack agraphics adapter2712, and/ordisplay2718, as well as a keyboard or pointing device. Moreover, thestorage device2708 can be local and/or remote from the computer2700 (such as embodied within a storage area network (SAN)).
As is known in the art, thecomputer2700 is adapted to execute computer program modules for providing functionality described herein. As used herein, the term “module” refers to computer program logic utilized to provide the specified functionality. Thus, a module can be implemented in hardware, firmware, and/or software. In one embodiment, program modules are stored on thestorage device2708, loaded into thememory2706, and executed by theprocessor2702.
Embodiments of the entities described herein can include other and/or different modules than the ones described here. In addition, the functionality attributed to the modules can be performed by other or different modules in other embodiments. Moreover, this description occasionally omits the term “module” for purposes of clarity and convenience.
Other Considerations
The disclosed configurations have been described in particular detail with respect to one possible embodiment. Those of skill in the art will appreciate that the invention may be practiced in other embodiments. First, the particular naming of the components and variables, capitalization of terms, the attributes, data structures, or any other programming or structural aspect is not mandatory or significant, and the mechanisms that implement the invention or its features may have different names, formats, or protocols. Also, the particular division of functionality between the various system components described herein is merely for purposes of example and is not mandatory; functions performed by a single system component may instead be performed by multiple components, and functions performed by multiple components may instead be performed by a single component.
Some portions of the above description present the features of the present invention in terms of algorithms and symbolic representations of operations on information. These algorithmic descriptions and representations are the means used by those skilled in the data processing arts to most effectively convey the substance of their work to others skilled in the art. These operations, while described functionally or logically, are understood to be implemented by computer programs. Furthermore, it has also proven convenient at times, to refer to these arrangements of operations as modules or by functional names, without loss of generality.
Unless specifically stated otherwise as apparent from the above discussion, it is appreciated that throughout the description, discussions utilizing terms such as “determining” or “displaying” or the like, refer to the action and processes of a computer system, or similar electronic computing device, that manipulates and transforms data represented as physical (electronic) quantities within the computer system memories or registers or other such information storage, transmission or display devices.
Certain aspects of the present invention include process steps and instructions described herein in the form of an algorithm. It should be noted that the process steps and instructions of the present invention could be embodied in software, firmware or hardware, and when embodied in software, could be downloaded to reside on and be operated from different platforms used by real time network operating systems.
The present invention also relates to an apparatus for performing the operations herein. This apparatus may be specially constructed for the required purposes, or it may comprise a general-purpose computer selectively activated or reconfigured by a computer program stored on a computer readable medium that can be accessed by the computer. Such a computer program may be stored in a non-transitory computer readable storage medium, such as, but is not limited to, any type of disk including floppy disks, optical disks, CD-ROMs, magnetic-optical disks, read-only memories (ROMs), random access memories (RAMs), EPROMs, EEPROMs, magnetic or optical cards, application specific integrated circuits (ASICs), or any type of computer-readable storage medium suitable for storing electronic instructions, and each coupled to a computer system bus. Furthermore, the computers referred to in the specification may include a single processor or may be architectures employing multiple processor designs for increased computing capability.
The algorithms and operations presented herein are not inherently related to any particular computer or other apparatus. Various general-purpose systems may also be used with programs in accordance with the teachings herein, or it may prove convenient to construct more specialized apparatus to perform the required method steps. The required structure for a variety of these systems will be apparent to those of skill in the art, along with equivalent variations. In addition, the present invention is not described with reference to any particular programming language. It is appreciated that a variety of programming languages may be used to implement the teachings of the present invention as described herein, and any references to specific languages are provided for purposes of enablement and best mode of the present invention.
The present invention is well suited to a wide variety of computer network systems over numerous topologies. Within this field, the configuration and management of large networks comprise storage devices and computers that are communicatively coupled to dissimilar computers and storage devices over a network, such as the Internet.
Finally, it should be noted that the language used in the specification has been principally selected for readability and instructional purposes and may not have been selected to delineate or circumscribe the inventive subject matter. Accordingly, the disclosure of the present embodiments is intended to be illustrative, but not limiting, of the scope of the protection available, which is set forth in the following claims.