FIELD

The described embodiments relate generally to guidance devices. More particularly, the present embodiments relate to guidance devices for the sensory impaired.
BACKGROUND

People use a variety of senses to navigate and interact with the various environments they encounter on a daily basis. For example, people use their senses of sight and sound to navigate in their homes, on the street, through workplaces and shopping centers, and so on. Such environments may be designed and configured under the assumption that people will be able to use senses such as sight and sound for navigation.
However, many people are sensory impaired in one way or another. People may be deaf or at least partially auditorily impaired, blind or at least partially visually impaired, and so on. By way of example, the World Health Organization estimated in April of 2012 that 285 million people were visually impaired. Of these 285 million people, 246 million were estimated as having low vision and 39 million were estimated to be blind. Navigation through environments designed and configured for those lacking sensory impairment may be challenging or difficult for the sensory impaired.
Some sensory impaired people use guidance devices or relationships to assist them in navigating and interacting with their environments. For example, some blind people may use a cane in order to navigate and interact with an environment. Others may use a guide animal.
SUMMARY

The present disclosure relates to guidance devices for sensory impaired users. Sensor data may be obtained regarding an environment. A model of the environment may be generated and the model may be mapped at least to an input/output touch surface. Tactile output and/or other output may be provided to a user based at least on the mapping. In this way, a sensory impaired user may be able to navigate and/or interact with an environment utilizing the guidance device.
In various embodiments, a guidance device for a sensory impaired user may include an input/output touch surface, a sensor data component that obtains data regarding an environment around the guidance device, and a processing unit coupled to the input/output touch surface and the sensor data component. The processing unit may generate a model of the environment based at least on the data, map the model to the input/output touch surface and provide tactile output to a user based at least on the mapping via the input/output touch surface.
In some examples, the tactile output may be an arrangement of raised portions of the input/output touch surface or other tactile feedback configured to produce a tactile sensation of bumps.
In various examples, the tactile output may include a representation of an object in the environment and a region of the input/output touch surface where the representation is provided may correspond to positional information regarding the object. The positional information regarding the object corresponding to the region may be first positional information when the tactile output includes a first positional information context indicator and second positional information when the tactile output includes a second positional information context indicator. The shape of the representation may be associated with a detected shape of the object.
In some examples, the sensor data component may receive at least a portion of the data from another electronic device.
In various examples, the processing unit may provide at least one audio notification based at least on the model via an audio component of the guidance device or another electronic device.
In some embodiments, an assistance device for a sensory impaired user may include a surface operable to detect touch and provide tactile output, a sensor that detects information about an environment, and a processing unit coupled to the surface and the sensor. The processing unit may determine a portion of the surface being touched, select a subset of the information for output, and provide the tactile output to a user corresponding to the subset of the information via the portion of the surface.
In some examples, the sensor may detect orientation information regarding the assistance device and the processing unit may provide the tactile output via the portion of the surface according to the orientation information. In various examples, the sensor may detect location information regarding the assistance device and the tactile output may include a direction indication associated with navigation to a destination.
In some examples, the tactile output may include an indication of a height of an object in the environment. In various examples, the tactile output may include an indication that the object is traveling in a course that will connect with a user (which may be determined using real time calculations). In some examples, the tactile output may include an indication that the user is approaching the object and the object is below a head height of the user.
In various examples, the tactile output may include a first representation of a first object located in a direction of travel of the assistance device and a second representation of a second object located in an opposite direction of the direction of travel. In some examples, the tactile output may include a representation of an object in the environment and a texture of the representation may be associated with a detected texture of the object. In various examples, regions of the portion of the surface where the first representation and the second representation are provided may indicate that the first object is located in the direction of travel and the second object is located in the opposite direction of the direction of travel.
In some examples, the processing unit may provide an audio notification via an audio component upon determining that the assistance device experiences a fall event during use.
In various embodiments, an environmental exploration device may include a cylindrical housing, a processing unit located within the cylindrical housing, a touch sensing device coupled to the processing unit and positioned over the cylindrical housing, a haptic device (such as one or more piezoelectric cells) coupled to the processing unit and positioned adjacent to the touch sensing device, and an image sensor coupled to the processing unit that detects image data about an area around the cylindrical housing. The processing unit may analyze the image data using image recognition to identify an object (and/or analyze data from one or more depth sensors to determine distance to and/or speed of moving objects), create an output image representing the object and positional information regarding the object in the area, map the output image to the haptic device, and provide the output image as tactile output to a user via the haptic device.
In some examples, the processing unit may provide an audio description of the object and the positional information via an audio component. In various examples, the processing unit may determine details of a hand of the user that is touching the touch sensing device and map the output image to the haptic device in accordance with whether the hand is a left hand of the user, a right hand of the user, has a large palm size, has a small palm size, has less than four fingers, or does not have a thumb.
In various examples, the environmental exploration device may also include a weight component coupled to the cylindrical housing operable to alter an orientation of the environmental exploration device.
BRIEF DESCRIPTION OF THE DRAWINGS

The disclosure will be readily understood by the following detailed description in conjunction with the accompanying drawings, wherein like reference numerals designate like structural elements.
FIG. 1 shows a user navigating an example environment using a guidance device.
FIG. 2 shows the user navigating another example environment using the guidance device.
FIG. 3 shows a flow chart illustrating a method for providing guidance using a guidance device.
FIG. 4A shows an isometric view of an example guidance device.
FIG. 4B shows a block diagram illustrating functional relationships of example components of the guidance device of FIG. 4A.
FIG. 4C shows a diagram illustrating an example configuration of the input/output touch surface of the guidance device of FIG. 4A.
FIG. 5A shows a cross-sectional view of the example guidance device of FIG. 4A, taken along line A-A of FIG. 4A.
FIG. 5B shows a cross-sectional view of another example of the guidance device of FIG. 4A in accordance with further embodiments of the present disclosure.
FIG. 6A shows a diagram illustrating an example of how a model of an environment generated based on environmental data may be mapped to the input/output touch surface of the guidance device of FIG. 5A.
FIG. 6B shows a diagram illustrating another example of how a model of an environment generated based on environmental data may be mapped to the input/output touch surface of the guidance device of FIG. 5A.
FIGS. 7-10 show additional examples of guidance devices.
DETAILED DESCRIPTION

Reference will now be made in detail to representative embodiments illustrated in the accompanying drawings. It should be understood that the following descriptions are not intended to limit the embodiments to one preferred embodiment. To the contrary, it is intended to cover alternatives, modifications, and equivalents as can be included within the spirit and scope of the described embodiments as defined by the appended claims.
Embodiments described herein may permit a sensory-impaired user to quickly and efficiently interact with his or her environment. Sensory-impaired users may use a device that provides guidance by communicating information about the environment to aid the user's interaction therewith. The device may detect information about the environment, model the environment based on the information, and present guidance output based on the model in a fashion detectable by the user. Such guidance output may be tactile so the user can quickly and efficiently "feel" the guidance output while interacting with the environment. This device may enable the sensory-impaired user to interact with his or her environment more quickly and efficiently than is possible with existing sensory-impaired guidance devices such as canes.
The present disclosure relates to guidance devices for sensory impaired users. Sensor data may be obtained regarding an environment around a guidance device, assistance device, environmental exploration device, and/or other such device. A model of the environment may be generated based on the data. The model may be mapped at least to an input/output touch surface of the guidance device. Tactile output may be provided to a user of the guidance device via the input/output touch surface based at least on the mapping. Other output based on the model may also be provided. In this way, a sensory impaired user may be able to navigate and/or interact with an environment utilizing the guidance device. Such a guidance device may provide better assistance than and/or take the place of a cane, a guidance animal, and/or other guidance devices and/or relationships.
The guidance device may include a variety of different components such as sensors that obtain data regarding the environment, input/output mechanisms for receiving input from and/or providing output to the user, processing units and/or other components for generating the model and/or mapping the model to various input/output mechanisms, and so on. Additionally, the guidance device may cooperate and/or communicate with a variety of different electronic devices that have one or more such components in order to perform one or more of these functions.
These and other embodiments are discussed below with reference to FIGS. 1-10. However, those skilled in the art will readily appreciate that the detailed description given herein with respect to these Figures is for explanatory purposes only and should not be construed as limiting.
FIG. 1 shows a user 102 navigating an example environment 100 using a guidance device 101, assistance device, environmental exploration device, and/or other such device. A guidance device 101 may be a device that detects information about the user's environment 100 and presents that information to the user to aid the user's 102 interaction with the environment 100.
As illustrated, the user 102 is holding the guidance device 101 in a hand 103 while walking down a street. The user 102 also is wearing a wearable device 109 and has a smart phone 108 in the user's pocket. A traffic signal 106 and a moving truck 107 are in front of the user 102 and another person 104 looking at a cellular telephone 105 is walking behind the user 102. The user 102 may be receiving tactile, audio, and/or other guidance output related to the environment 100 from the guidance device 101 (which may detect information regarding the environment 100 upon which the guidance output may be based) and/or the smart phone 108 and/or wearable device 109. For example, the user 102 may be receiving output that the user 102 is approaching the traffic signal 106, the truck 107 is approaching the user 102, the other person 104 is approaching the user 102 from behind, and so on. As illustrated, one or more of the shown devices may be transmitting and/or receiving via wired and/or wireless connections in order to communicate with one or more of each other. Such devices may communicate with each other in order to obtain environmental or other sensor data regarding the environment, generate a model based on the sensor data, and provide the guidance output to the user 102 based on the model.
Although FIG. 1 is illustrated as providing tactile guidance output to the hand 103 of the user 102, it is understood that this is an example. In various implementations, tactile guidance output may be provided to various different parts of a user's body, such as via a shirt made of a fabric configured to provide tactile output.
Similarly, FIG. 2 shows the user 102 navigating another example environment 200 using the guidance device 101. As illustrated, the user 102 is holding the guidance device 101 (which may detect information regarding the environment 200 upon which the guidance output may be based) in the user's hand 103 while walking through a room in a house. The room has a display screen 208 connected to a communication adapter 209 on a wall and a navigation beacon 206 on a table. Another person 204 looking at a smart phone 205 is also in the room. As illustrated, one or more of the devices may be transmitting and/or receiving in order to communicate with one or more of each other. The user 102 may be receiving tactile, audio, and/or other guidance output related to the environment 200 from the guidance device 101. For example, the user 102 may be receiving tactile output of the layout of the room based on a map provided by the navigation beacon 206. By way of another example, the user 102 may be receiving tactile and/or audio direction indications associated with navigation from the user's 102 current location in the environment 200 to a destination input by the other person 204 on the smart phone 205, the guidance device 101, and/or the display screen 208.
FIG. 3 shows a flow chart illustrating a method 300 for providing guidance using a guidance device. By way of example, such a guidance device may be one or more of the guidance devices 101, 701, 801, 901, or 1001 illustrated and described herein with respect to FIGS. 1, 2, and 4-10.
At 310, environmental or other sensor data may be obtained regarding an environment around a guidance device. The environmental data may be obtained from a variety of different kinds of sensors or other sensor data components such as image sensors (such as cameras, three-dimensional cameras, infra-red image sensors, lasers, ambient light detectors, and so on), positional sensors (such as accelerometers, gyroscopes, magnetometers, and so on), navigation systems (such as a global positioning system or other such system), depth sensors, microphones, temperature sensors, Hall effect sensors, and so on. In some implementations, one or more such sensors may be incorporated into the guidance device. In other implementations, one or more such sensors may be components of other electronic devices, and the sensor of the guidance device may be a communication component that receives environmental data from such other sensors, transmitted from another electronic device (which may communicate with the guidance device in one or more client/server configurations, peer-to-peer configurations, mesh configurations, and/or other communication network configurations). Such sensors may obtain environmental data regarding any aspect of an environment such as the presence and/or position of objects, the movement of objects, weather conditions, textures, temperatures, and/or any other information about the environment.
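As a concrete illustration of this operation, the following is a minimal sketch in Python, using hypothetical class and method names that are not drawn from the disclosure, of how environmental data from on-device sensors and from communicably connected devices might be gathered into a single collection for later modeling.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class EnvironmentReading:
    """One piece of environmental data: what was sensed and which source produced it."""
    kind: str      # e.g. "object", "temperature", "location"
    value: object  # sensor-specific payload (image frame, distance, coordinates, ...)
    source: str    # "local" or the identifier of the remote device that sent it

def gather_environmental_data(local_sensors, remote_links) -> List[EnvironmentReading]:
    """Poll on-device sensors and any communicably connected devices for environmental data."""
    readings: List[EnvironmentReading] = []
    for sensor in local_sensors:
        # Each local sensor (camera, accelerometer, microphone, ...) contributes its latest sample.
        readings.append(EnvironmentReading(kind=sensor.kind, value=sensor.read(), source="local"))
    for link in remote_links:
        # A communication component may also receive data detected by other electronic devices.
        for kind, value in link.receive_readings():
            readings.append(EnvironmentReading(kind=kind, value=value, source=link.device_id))
    return readings
```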
At 320, a model of the environment may be generated for guidance output based at least on the environmental data. As even the smallest environment may contain more information than can be output, or than can be output in a way that a user can make sense of, the model may include a subset of the obtained environmental data. The environmental data may be processed in order to determine which of the environmental data is relevant enough to the user to be included in the model. Such processing may include image or object recognition and/or other analysis of the environmental data. For example, an environment may contain a number of objects, but only the objects directly in front of, to the sides of, and/or behind a user and/or particularly dangerous objects such as moving automobiles may be included in a model.
In some cases, the determination of whether to include environmental data in the model may be dependent on a current state of one or more output devices that will be used to present guidance output data for the model. For example, an input/output touch surface may be used to provide tactile output via a currently touched area. When a larger area of the input/output touch surface is being touched, more environmental data may be included in the generated model. Conversely, when a smaller area of the input/output touch surface is being touched, less environmental data may be included in the generated model.
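One way such relevance filtering and touch-dependent detail selection might be implemented is sketched below in Python; the object attributes, the ranking rule, and the object budget are illustrative assumptions rather than requirements of the described embodiments.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class DetectedObject:
    label: str
    distance_m: float    # distance from the user
    bearing_deg: float   # 0 = direction of travel, 180 = behind the user
    moving: bool
    dangerous: bool      # e.g. a moving automobile

def build_model(objects: List[DetectedObject], touched_area_fraction: float,
                max_objects_full_area: int = 12) -> List[DetectedObject]:
    """Select the subset of environmental data relevant enough to present to the user.

    Dangerous objects are always ranked first; otherwise nearer objects are preferred.
    The number of objects kept scales with how much of the touch surface is being touched.
    """
    budget = max(1, int(max_objects_full_area * touched_area_fraction))
    ranked = sorted(objects, key=lambda o: (not o.dangerous, o.distance_m))
    return ranked[:budget]

# Example: with only a quarter of the surface touched, only the most urgent objects remain.
scene = [
    DetectedObject("traffic signal", 2.0, 10.0, False, False),
    DetectedObject("truck", 3.0, 0.0, True, True),
    DetectedObject("person", 1.5, 180.0, True, False),
    DetectedObject("mailbox", 8.0, 40.0, False, False),
]
print([o.label for o in build_model(scene, touched_area_fraction=0.25)])
# ['truck', 'person', 'traffic signal']
```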
The guidance device may generate the model. However, in various cases the guidance device may cooperate with one or more other electronic devices communicably connected (such as in various client/server configurations, peer-to-peer configurations, mesh configurations, and/or other communication network configurations) to the guidance device in order to generate the model. For example, the guidance device may generate the model by receiving a model generated by another device. By way of another example, another device may process the environmental data and provide the guidance device an intermediate subset of the environmental data which the guidance device then uses to generate the model.
At 330, guidance output based at least on the model may be provided. Such guidance output may be tactile output (such as shapes of objects, indications of positions or motions of objects, and so on), audio output (such as audio notifications related to and/or descriptions of objects or conditions in the environment and so on), and/or any other kind of output. Providing guidance output based on the model may include mapping the model to one or more output devices and providing the guidance output based at least on the mapping via the output device(s).
For example, an input/output touch surface may be used to provide tactile guidance output via an area currently being touched by a user's hand. The area currently being touched by the user's hand may be determined along with the orientation of the user's hand touching the area and the model may be mapped to the determined area and orientation. Tactile output may then be provided via the input/output touch surface according to the mapping. In such a case, the tactile output may be mapped in such a way as to convey shapes and/or textures of one or more objects, information about objects (such as position in the environment, distance, movement, speed of movement, and/or any other such information) via the position on the input/output touch surface (such as where the output is provided in relation to various portions of the user's hand) and/or other contextual indicators presented via the input/output touch surface, and/or other such information about the environment.
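A minimal sketch of such a mapping is shown below, assuming objects that carry a distance and bearing (as in the earlier sketch) and a grid of raisable bumps beneath the touched region; the coordinate conventions and scaling are illustrative assumptions only.

```python
import math
from typing import List, Tuple

def map_objects_to_bumps(objects, region: Tuple[int, int, int, int],
                         hand_orientation_deg: float,
                         max_distance_m: float = 30.0) -> List[Tuple[int, int]]:
    """Map modeled objects onto the grid of bumps beneath the area being touched.

    region is (row0, col0, rows, cols) describing the raisable bumps under the hand.
    Each object is assumed to carry distance_m and bearing_deg attributes, with a
    bearing of 0 meaning the direction of travel. Bearings are taken relative to the
    detected hand orientation so that "up" on the hand tracks the direction of travel.
    """
    row0, col0, rows, cols = region
    raised: List[Tuple[int, int]] = []
    for obj in objects:
        relative_bearing = math.radians(obj.bearing_deg - hand_orientation_deg)
        depth = min(obj.distance_m / max_distance_m, 1.0)
        # Nearer objects land toward the near edge of the region; bearing spreads
        # the representation left or right across the touched area.
        row = row0 + int((1.0 - depth) * (rows - 1))
        col = col0 + int((math.sin(relative_bearing) * 0.5 + 0.5) * (cols - 1))
        raised.append((row, col))
    return raised
```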
By mapping guidance output to the area currently being touched by a user's hand, power savings may be achieved, as output may not be provided via areas of the input/output touch surface that are not capable of being felt by the user. Further, by mapping guidance output to the area currently being touched by a user's hand, assistance may be provided to the user to ensure that the output is provided to portions of the user's hand as expected by the user, as opposed to making the user discover how to touch the guidance device in a particular manner.
In some implementations, determination of the area currently being touched by a user's hand may include determining details about the user's hand and the guidance output may be mapped according to such details. For example, the details may include that the hand is a left hand of the user, a right hand of the user, has a large palm size, has a small palm size, has less than four fingers, or does not have a thumb.
The guidance device may provide the guidance output via one or more output components of the guidance device such as one or more input/output touch surfaces, speakers, and/or other output devices. However, in some cases the guidance device may cooperate with one or more other electronic devices communicably connected to the guidance device (such as in various client/server configurations, peer-to-peer configurations, mesh configurations, and/or other communication network configurations) to provide one or more kinds of output.
For example, a guidance device may provide a pattern of raised bumps or other such protrusions to indicate objects in the environment, the identity of the objects, the position of those objects, and the distance of those objects to the user. A wearable device on the user's wrist may also provide vibration output to indicate that one or more of the objects are moving towards the user. Further, the user's cellular telephone may output audio directions associated with navigation of the user from the user's current location (such current location information may be detected by one or more sensors such as a global positioning system or other navigation component) to a destination.
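The following sketch illustrates, under assumed and purely illustrative device interfaces (surface.raise_pattern, wearable.vibrate, phone.speak, phone.route), how such output might be fanned out across a guidance device and cooperating devices.

```python
def dispatch_guidance(model_objects, surface, wearable=None, phone=None):
    """Fan guidance output out across the guidance device and any cooperating devices.

    surface, wearable, and phone are assumed to expose simple output methods;
    these interfaces are illustrative placeholders, not a defined API.
    """
    # Tactile representation of each modeled object on the guidance device itself.
    for obj in model_objects:
        surface.raise_pattern(obj.label, obj.distance_m, obj.bearing_deg)

    # A wearable may vibrate when any object is moving toward the user.
    if wearable is not None and any(getattr(o, "approaching", False) for o in model_objects):
        wearable.vibrate()

    # A cellular telephone may speak turn-by-turn directions to the destination.
    if phone is not None and getattr(phone, "route", None):
        phone.speak(phone.route.next_instruction())
```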
Although the example method 300 is illustrated and described as including particular operations performed in a particular order, it is understood that this is an example. In various implementations, various orders of the same, similar, and/or different operations may be performed without departing from the scope of the present disclosure.
For example, although the method 300 is illustrated and described as generating a model based on the environmental data and providing guidance output based on the model, it is understood that this is an example. In some implementations, guidance output may be provided based on environmental data without generating a model. In one example of such a scenario, the guidance device may detect that the guidance device was held and then dropped using one or more positional sensors, input/output touch surface, and/or other such components. In such a case, the guidance device may emit an audio alarm upon such detection without generation of a model to aid a user in locating the dropped guidance device.
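One conventional way to detect such a drop event, sketched here with illustrative threshold values rather than calibrated ones, is to look for a run of near-zero accelerometer magnitude (free fall) followed by an impact spike.

```python
import math

FREE_FALL_G = 0.3     # acceleration magnitude (in g) below which free fall is assumed
IMPACT_G = 2.5        # magnitude above which an impact is assumed
MIN_FALL_SAMPLES = 5  # consecutive free-fall samples required before an impact counts

def detect_fall(accel_samples) -> bool:
    """Return True if a run of near-zero acceleration is followed by an impact spike.

    accel_samples is a sequence of (x, y, z) accelerometer readings in g.
    """
    falling = 0
    for x, y, z in accel_samples:
        magnitude = math.sqrt(x * x + y * y + z * z)
        if magnitude < FREE_FALL_G:
            falling += 1
        elif magnitude > IMPACT_G and falling >= MIN_FALL_SAMPLES:
            return True
        else:
            falling = 0
    return False

# If a fall is detected while the device was recently held, an audio alarm could be
# emitted directly, without generating an environmental model, for example:
#     if was_held and detect_fall(samples):
#         speaker.play_alarm()
```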
FIG. 4A shows an isometric view of an example guidance device 101. The guidance device 101 may include a cylindrical housing 410 that may be formed of aluminum, plastic, and/or any other suitable material. An input/output touch surface 411 may be disposed on a surface of the cylindrical housing 410. The input/output touch surface 411 may be operable to detect touch input (such as touch, force, pressure, and so on via one or more capacitive sensors, touch sensors, and/or any other kind of touch and/or force sensors). The input/output touch surface 411 may also be operable to provide tactile output, such as one or more patterns of vibrations, raised bumps or other protrusions, and so on. For example, the input/output touch surface 411 may include one or more piezoelectric cells that can be electrically manipulated to create one or more patterns of raised bumps on the input/output touch surface 411.
The guidance device 101 may also include image sensors 413 and 414. The image sensors 413 and 414 may be any kind of image sensors such as one or more cameras, three-dimensional cameras, infra-red image sensors, lasers, ambient light detectors, and so on.
The guidance device 101 may also include one or more protectors 412 that may prevent the input/output touch surface 411 from contacting surfaces with which the guidance device 101 comes into contact. The protectors 412 may be formed of rubber, silicone, plastic, and/or any other suitable material. As illustrated, the protectors 412 may be configured as rings. In such a case, the guidance device 101 may include one or more weights and/or other orientation elements (discussed further below) to prevent the guidance device 101 from rolling when placed on a surface. However, in other cases the protectors 412 may be shaped in other configurations (such as with a flat bottom) to prevent rolling of the guidance device 101 on a surface without use of a weight or other orientation element.
As shown, the cylindrical housing 410 is shaped such that it may be held in any number of different orientations. To accommodate such different holding orientations, the guidance device 101 may utilize the input/output touch surface 411, one or more position sensors, and/or other components to detect orientation information regarding the orientation in which the guidance device 101 is being held in order to map output to output devices such as the input/output touch surface 411 accordingly. However, in other implementations the guidance device 101 may have a housing shaped to be held in a particular manner, such as where a housing is configured with a grip that conforms to a user's hand in a particular orientation. In such an implementation, detection of how the guidance device 101 is being held may be omitted while still allowing the guidance device 101 to correctly map output to output devices such as the input/output touch surface 411.
In some cases, the guidance device 101 may be configured to operate in a particular orientation. For example, the image sensor 413 may be configured as a front image sensor and the image sensor 414 may be configured as a rear image sensor. This may simplify analysis of environmental and/or sensor data from the image sensors 413 and 414. This may also allow for particular configurations of the image sensors 413 and 414, such as where the image sensor 413 is a wider angle image sensor than the image sensor 414, as a user may be more concerned with objects in front of the user than behind.
However, in other cases the guidance device 101 may be configurable to operate in a variety of orientations. For example, the image sensors 413 and 414 may be identical and the guidance device 101 may use the image sensors 413 and 414 based on a currently detected orientation (which may be based on detection by the input/output touch surface 411, one or more positional sensors, and/or other such components).
FIG. 4B shows a block diagram illustrating functional relationships of example components of the guidance device 101 of FIG. 4A. As shown, in various example implementations the guidance device 101 may include one or more processing units 424, batteries 423, communication units 425, positional sensors 426, speakers 427, microphones 428, navigation systems 429, image sensors 413 and 414, tactile input/output surfaces 411, and so on. The guidance device 101 may also include one or more additional components not shown, such as one or more non-transitory storage media (which may take the form of, but is not limited to, a magnetic storage medium; optical storage medium; magneto-optical storage medium; read only memory; random access memory; erasable programmable memory; flash memory; and so on).
The processing unit 424 may be configured such that the guidance device 101 is able to perform a variety of different functions. One such example may be the method 300 illustrated and described above with respect to FIG. 3. The processing unit 424 may also be configured such that the guidance device 101 is able to receive a variety of different input and/or provide a variety of different output. For example, the guidance device 101 may be operable to receive input via the communication unit 425, the positional sensors 426 (such as by shaking or other motion of the guidance device 101), the microphone 428 (such as voice or other audio commands), the tactile input/output touch surface 411, and so on. By way of another example, the guidance device 101 may be operable to provide output via the communication unit 425, speaker 427 (such as speech or other audio output), the tactile input/output touch surface 411, and so on.
The tactile input/output touch surface 411 may be configured in a variety of different ways in a variety of different implementations such that it is operable to detect touch (or force, pressure, and so on) and/or provide tactile output. In some implementations, the tactile input/output touch surface 411 may include a touch sensing device layer and a haptic device layer. Such touch sensing and haptic device layers may be positioned adjacent to each other.
For example, FIG. 4C shows a diagram illustrating an example configuration of the input/output touch surface 411. As shown, the input/output touch surface 411 may be positioned on the housing 410. The input/output touch surface 411 may include a number of layers such as a tactile feedback layer 411B (such as piezoelectric cells, vibration actuators, and so on) and a touch layer 411C (such as a capacitive touch sensing layer, a resistive touch sensing layer, and so on). The input/output touch surface 411 may also include a coating 411A (which may be formed of plastic or other material that may be more flexible than materials such as glass), which may function to protect the input/output touch surface 411.
As shown, in some implementations the input/output touch surface 411 may include a display layer 411D. A vision impaired user may not be completely blind, and as such visual output may be presented to the user via the display layer 411D. Further, in some cases visual output may be presented via the display layer 411D to another person who is assisting the user of the guidance device 101, such as where the other person is being presented visual output so that the other person can input a destination for the user, to which the guidance device 101 may then guide the user.
In some implementations, such as implementations where the display layer 411D is a display that utilizes a backlight, the input/output touch surface 411 may include a backlight layer 411E.
In various implementations, the input/output touch surface 411 (and/or other components of the guidance device 101) may be operable to detect one or more biometrics of the user, such as a fingerprint, palm print, and so on. For example, a user's fingerprint may be detected using a capacitive or other touch sensing device of the input/output touch surface 411.
Such a biometric may be used to authenticate the user. In some situations, entering a password or other authentication mechanism may be more difficult for a sensory impaired user than for other users. In such a situation, using a detected biometric for authentication purposes may make authentication processes easier for the user.
FIG. 5A shows a cross-sectional view of the example guidance device 101 of FIG. 4A, taken along line A-A of FIG. 4A. As illustrated, the guidance device 101 may include a printed circuit board 521 (and/or other electronic module) with one or more connected electronic components 522 disposed on one or more surfaces thereof. Such electronic components 522 may include one or more processing units, wired and/or wireless communication units, positional sensors, input/output units (such as one or more cameras, speakers or other audio components, microphones, and so on), navigation systems, and/or any other electronic component. The printed circuit board 521 may be electrically connected to the input/output touch surface 411, the image sensors 413 and 414, one or more batteries 423 and/or other power sources, and so on.
As illustrated, the battery 423 may be configured as a weight at a "bottom" of the guidance device 101. This may operate to orient the guidance device 101 as shown when the guidance device 101 is resting on a surface instead of being held by a user. As such, the battery 423 may prevent the guidance device 101 from rolling.
Although FIG. 5A illustrates a particular configuration of components, it is understood that this is an example. In other implementations of the guidance device 101, other configurations of the same, similar, and/or different components are possible without departing from the scope of the present disclosure.
For example, in one implementation of the guidance device 101 of FIG. 5A, the image sensor 413 may be a wide angle image sensor configured as a front image sensor. However, in another implementation shown in FIG. 5B, the image sensor 413 may be a narrow angle image sensor configured as a front image sensor. To compensate for the narrow angle of the image sensor 413, one or more additional image sensors 530 may be used.
As shown in this example, the additional image sensor 530 may be located at a bottom corner of the cylindrical housing 410. The additional image sensor 530 may be maneuverable via a motor 531 and/or other movement mechanism. In this example, the additional image sensor 530 may be rotatable via the motor 531 such that it can be operated to obtain image sensor data of an area around the user's feet in order to compensate for a narrow angle image sensor used for the image sensor 413. In other examples, a similar image sensor/motor mechanism may be located at a top corner of the cylindrical housing 410 in order to obtain image sensor data of an area above the user's head.
By way of another example, in various implementations, one or more ends of the cylindrical housing 410 may be configured with flanges and/or other structures that project from the ends of the cylindrical housing 410. Such flanges and/or other structures may protect the image sensors 413 and/or 414 from damage.
FIG. 6A shows a diagram illustrating an example 600A of how a model of an environment generated based on environmental data may be mapped to the input/output touch surface 411 of the guidance device of FIG. 5A. For purposes of clarity, the input/output touch surface 411 is shown as unrolled and is marked to indicate what the guidance device 101 may have detected as the top and front of the input/output touch surface 411 based on a current orientation in which a user is holding the guidance device 101. The hand 641 indicates an area of the input/output touch surface 411 where a user's hand has been detected as currently touching the input/output touch surface 411. In this example, the input/output touch surface 411 is providing tactile output indicating information about a model generated based on environmental data regarding the environment 100 shown in FIG. 1.
The input/output touch surface 411 may include a number of bumps that can be raised or not raised, such as via piezoelectric cells, to convey information. As illustrated, filled bumps indicate raised bumps and unfilled bumps indicate bumps that are not raised.
The input/output touch surface 411 may provide tactile output via the raised bumps to indicate shapes or textures of objects in the environment. For example, the raised bumps 644 indicate the shape of the truck 107 in the environment 100 of FIG. 1 and the raised bumps 645 indicate the shape of the traffic signal 106 in the environment 100 of FIG. 1. The user may be able to feel the shapes of the raised bumps 644 and 645 and understand that the truck 107 and traffic signal 106 are present.
The region in which tactile output is provided on the input/output touch surface 411 via the raised bumps may correspond to positional information regarding objects in the environment. Further, the relationships between raised bumps in various regions may correspond to relationships in the positions of the corresponding objects in the environment. For example, the raised bumps 645 are illustrated as further to the left on the input/output touch surface 411 than the raised bumps 644. This may correspond to the fact that the traffic signal 106 is closer to the user 102 in the environment 100 of FIG. 1 than the truck 107.
The tactile output may also include a variety of different context indicators. As described above, the regions in which tactile output is provided on the input/output touch surface 411 via the raised bumps may correspond to positional information regarding objects in the environment. However, in some implementations the positional information indicated by the regions may be dependent on a context indicator presented via tactile output via the input/output touch surface 411 and/or otherwise presented, such as where the positional information is first positional information when a first positional context indicator is provided and is second positional information when a second positional context indicator is provided.
For example, FIG. 6A illustrates a series of ranges "Range 1," "Range 2," and "Range 3." Each range maps an area of the input/output touch surface 411 touched by the user's hand, as indicated by the hand 641, to a distance range. As illustrated, Range 1 maps the pinky finger of the hand 641 to a distance of 0 meters, the ring finger to a distance of 1 meter, and the middle finger to a distance of 3 meters. Range 2 maps the pinky finger of the hand 641 to a distance of 0 meters, the ring finger to a distance of 10 meters, and the middle finger to a distance of 30 meters. Range 3 maps the pinky finger of the hand 641 to a distance of 0 meters, the ring finger to a distance of 100 meters, and the middle finger to a distance of 300 meters. Regions 642A-C may be range context indicators that indicate via tactile output which of the ranges is currently being presented. In this example, 642A may indicate Range 1, 642B may indicate Range 2, and 642C may indicate Range 3. By making the interpretation of information corresponding to a region dependent on such context indicators, a wider variety of information may be presented via the input/output touch surface 411 while still being comprehensible to a user.
As shown, a bump is raised for 642A. This indicates in this example that Range 1 is currently being used. Thus, the traffic signal 106 is indicated by the raised bumps 645 as being 2 meters from the user and the truck 107 is indicated by the raised bumps 644 as being 3 meters from the user.
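One illustrative way to select the finger region under which an object's bumps are raised, given the active range, is to pick the finger whose mapped distance is nearest the object's distance; the nearest-mapping rule below is an assumption made for the sake of the sketch rather than a requirement of the described embodiments.

```python
# Distances (in meters) mapped to the pinky, ring, and middle finger regions for each range.
RANGES = {
    1: (0, 1, 3),
    2: (0, 10, 30),
    3: (0, 100, 300),
}
FINGER_REGIONS = ("pinky", "ring", "middle")

def finger_region_for_distance(distance_m: float, active_range: int) -> str:
    """Pick the finger region whose mapped distance is closest to the object's distance."""
    mapped = RANGES[active_range]
    best_index = min(range(len(mapped)), key=lambda i: abs(mapped[i] - distance_m))
    return FINGER_REGIONS[best_index]

print(finger_region_for_distance(2.0, 1))  # 'ring' (ties between 1 m and 3 m break toward the nearer mapping)
print(finger_region_for_distance(3.0, 1))  # 'middle'
print(finger_region_for_distance(2.0, 2))  # 'pinky' (switching ranges compresses nearby objects together)
```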
In various implementations, a variety of other kinds of information may be presented. For example, the raised bump(s) of the regions 642A-C indicating the range currently being used may be alternatingly raised and lowered to create the sensation that the context indicator is being "flashed." This may indicate that one or more of the objects are moving. In some implementations, the raised bumps 644 and/or 645 may be similarly raised and lowered to create the sensation of flashing to indicate that the respective object is moving. In some cases, the speed of the flashing may correspond to the speed of the movement.
By way of another example, a zone 643 may present tactile output related to one or more alerts to which the user's attention is specially directed. In some cases, the zone 643 may present indications of objects above the user's head, objects at the user's feet, the height of an object in the environment, the fact that an object is traveling on a course that will connect with the user (which may be determined using real time calculations), the fact that the user is approaching an object and the object is below a head height of the user, and so on. In other cases, other alerts may be provided via the zone 643, such as raising and lowering bumps in the zone 643 to indicate that an object is moving toward the user at high speed. Various configurations are possible and contemplated without departing from the scope of the present disclosure.
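The determination that an object is traveling on a course that will connect with the user can be made with a standard closest-point-of-approach calculation from the object's position and velocity relative to the user, as sketched below; the alert radius and time horizon are illustrative placeholders.

```python
def will_intersect_course(rel_pos, rel_vel, radius_m: float = 1.0, horizon_s: float = 10.0) -> bool:
    """Estimate whether an object's course will connect with the user.

    rel_pos: (x, y) object position relative to the user, in meters.
    rel_vel: (vx, vy) object velocity relative to the user, in meters/second.
    Returns True if the closest point of approach within the time horizon falls
    inside radius_m of the user.
    """
    px, py = rel_pos
    vx, vy = rel_vel
    speed_sq = vx * vx + vy * vy
    if speed_sq == 0.0:
        return (px * px + py * py) ** 0.5 <= radius_m
    # Time at which the object is closest to the user, clamped to the look-ahead window.
    t_cpa = max(0.0, min(horizon_s, -(px * vx + py * vy) / speed_sq))
    cx, cy = px + vx * t_cpa, py + vy * t_cpa
    return (cx * cx + cy * cy) ** 0.5 <= radius_m

# A truck 20 m ahead closing at 5 m/s directly toward the user triggers an alert:
print(will_intersect_course((20.0, 0.0), (-5.0, 0.0)))  # True
print(will_intersect_course((20.0, 5.0), (-5.0, 0.0)))  # False (passes 5 m to the side)
```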
Although this example illustrates objects that are in front of the user without illustrating objects that are behind the user, it is understood that this is an example and that various depictions of an environment may be presented. For example, in some implementations one portion of the input/output touch surface411 may correspond to objects located in the user's direction of travel while another portion corresponds to objects located in the opposite direction.
FIG. 6B shows a diagram illustrating another example 600B of how a model of an environment generated based on environmental data may be mapped to the input/output touch surface 411 of the guidance device 101 of FIG. 5A. In this example, an area 646 of the input/output touch surface 411 may provide tactile output related to detected speech in an environment. For example, a microphone or other sound component of the guidance device 101 may be used to detect one or more words spoken in the environment. The guidance device 101 may perform speech-to-text recognition on the detected spoken words and provide tactile output presenting the text in the area 646. For example, as illustrated, the detected speech may be presented in braille via raised bumps in the area 646.
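A minimal sketch of converting recognized text into 6-dot braille cells for the raised bumps is shown below; only a few letters are included here, and a full table would cover the alphabet (and contractions) in practice.

```python
# Dot numbers for a 6-dot braille cell (1-2-3 down the left column, 4-5-6 down the right).
# Only the letters needed for the example are shown.
BRAILLE_DOTS = {
    "s": (2, 3, 4),
    "t": (2, 3, 4, 5),
    "o": (1, 3, 5),
    "p": (1, 2, 3, 4),
    " ": (),
}

def text_to_bump_cells(text: str):
    """Convert recognized speech text into per-character tuples of raised dot numbers."""
    return [BRAILLE_DOTS.get(ch, ()) for ch in text.lower()]

# "stop" becomes four cells of raised bumps that the area reserved for detected
# speech could present side by side (or one after another):
print(text_to_bump_cells("stop"))
# [(2, 3, 4), (2, 3, 4, 5), (1, 3, 5), (1, 2, 3, 4)]
```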
FIG. 7 shows an additional example of a guidance device 701. The guidance device 701 may be a smart phone with a housing 710, one or more cameras 713 or other sensors, an input/output touch display screen surface 711 (that may include a touch sensing device to detect touch and a haptic device to provide tactile output), and/or various other components. The smart phone may provide guidance to a user by performing a method such as the method 300 illustrated and described above. A user may place a hand on the input/output touch display screen surface 711 in order to feel tactile output related to guidance.
FIG. 8 shows another example of a guidance device 801. The guidance device 801 may be a tablet computing device. Similar to the smart phone of FIG. 7, the tablet computing device may include a housing 810, one or more cameras 813 or other sensors, an input/output touch display screen surface 811, and/or various other components. The tablet computing device may provide guidance to a user by performing a method such as the method 300 illustrated and described above. As the input/output touch display screen surface 811 is larger than the input/output touch display screen surface 711, placement of a hand on the input/output touch display screen surface 811 in order to receive tactile output related to guidance may be more comfortable, and the larger surface may be capable of providing more tactile information than the guidance device 701.
FIG. 9 shows yet another example of a guidance device 901. The guidance device 901 may be an item of apparel. Similar to the smart phone of FIG. 7 and the tablet computing device of FIG. 8, the item of apparel may include a housing 910, one or more cameras 913 or other sensors, an input/output touch surface 911, and/or various other components. The item of apparel may provide guidance to a user by performing a method such as the method 300 illustrated and described above. As shown, the input/output touch surface 911 may be in contact with a user's back when the item of apparel is worn. Thus, the user may feel tactile output related to guidance provided by the item of apparel without other people being able to visibly detect that the user is receiving guidance.
FIG. 10 shows still another example of a guidance device 1001. The guidance device 1001 may be a smart watch and/or other wearable device. Similar to the smart phone of FIG. 7, the tablet computing device of FIG. 8, and the item of apparel of FIG. 9, the smart watch may include a housing 1010, one or more cameras 1013 or other sensors, an input/output touch surface 1011, and/or various other components. The smart watch may provide guidance to a user by performing a method such as the method 300 illustrated and described above. As shown, the input/output touch surface 1011 may be in contact with a user's wrist when the smart watch is attached. Thus, the user may feel tactile output related to guidance provided by the smart watch in a hands-free manner.
As described above and illustrated in the accompanying figures, the present disclosure relates to guidance devices for sensory impaired users. Sensor data may be obtained regarding an environment around a guidance device, assistance device, environmental exploration device, and/or other such device. A model of the environment may be generated based on the data. The model may be mapped at least to an input/output touch surface of the guidance device. Tactile output may be provided to a user of the guidance device via the input/output touch surface based at least on the mapping. Other output based on the model may also be provided. In this way, a sensory impaired user may be able to navigate and/or interact with an environment utilizing the guidance device. Such a guidance device may provide better assistance than and/or take the place of a cane, a guidance animal, and/or other guidance devices and/or relationships.
The described disclosure may be provided as a computer program product, or software, that may include a non-transitory machine-readable medium having stored thereon instructions, which may be used to program a computer system (or other electronic devices) to perform a process according to the present disclosure. A non-transitory machine-readable medium includes any mechanism for storing information in a form (e.g., software, processing application) readable by a machine (e.g., a computer). The non-transitory machine-readable medium may take the form of, but is not limited to, a magnetic storage medium (e.g., floppy diskette, video cassette, and so on); optical storage medium (e.g., CD-ROM); magneto-optical storage medium; read only memory (ROM); random access memory (RAM); erasable programmable memory (e.g., EPROM and EEPROM); flash memory; and so on.
The present disclosure recognizes that personal information data, including biometric data, can be used in the present technology to the benefit of users. For example, biometric authentication data can be used for convenient access to device features without the use of passwords. In other examples, user biometric data is collected for providing users with feedback about their health or fitness levels. Further, other uses for personal information data, including biometric data, that benefit the user are also contemplated by the present disclosure.
The present disclosure further contemplates that the entities responsible for the collection, analysis, disclosure, transfer, storage, or other use of such personal information data will comply with well-established privacy policies and/or privacy practices. In particular, such entities should implement and consistently use privacy policies and practices that are generally recognized as meeting or exceeding industry or governmental requirements for maintaining personal information data private and secure, including the use of data encryption and security methods that meet or exceed industry or government standards. For example, personal information from users should be collected for legitimate and reasonable uses of the entity and not shared or sold outside of those legitimate uses. Further, such collection should occur only after receiving the informed consent of the users. Additionally, such entities should take any needed steps for safeguarding and securing access to such personal information data and ensuring that others with access to the personal information data adhere to their privacy policies and procedures. Further, such entities can subject themselves to evaluation by third parties to certify their adherence to widely accepted privacy policies and practices.
Despite the foregoing, the present disclosure also contemplates embodiments in which users selectively block the use of, or access to, personal information data, including biometric data. That is, the present disclosure contemplates that hardware and/or software elements can be provided to prevent or block access to such personal information data. For example, in the case of biometric authentication methods, the present technology can be configured to allow users to optionally bypass biometric authentication steps by providing secure information such as passwords, personal identification numbers (PINs), touch gestures, or other authentication methods, alone or in combination, known to those of skill in the art. In another example, users can select to remove, disable, or restrict access to certain health-related applications collecting users' personal health or fitness data.
The foregoing description, for purposes of explanation, used specific nomenclature to provide a thorough understanding of the described embodiments. However, it will be apparent to one skilled in the art that the specific details are not required in order to practice the described embodiments. Thus, the foregoing descriptions of the specific embodiments described herein are presented for purposes of illustration and description. They are not intended to be exhaustive or to limit the embodiments to the precise forms disclosed. It will be apparent to one of ordinary skill in the art that many modifications and variations are possible in view of the above teachings.