CROSS-REFERENCE TO RELATED APPLICATIONS
This application claims priority to U.S. Provisional Application Ser. No. 61/074,415, filed on Jun. 20, 2008, entitled “MOBILE COMPUTING SERVICES BASED ON DEVICES WITH DYNAMIC DIRECTION INFORMATION,” U.S. Provisional Application Ser. No. 61/074,590, filed on Jun. 20, 2008, entitled “MOBILE COMPUTING SERVICES BASED ON DEVICES WITH DYNAMIC DIRECTION INFORMATION,” and to U.S. Provisional Application Ser. No. 61/073,849, filed on Jun. 19, 2008, entitled “MOBILE COMPUTING DEVICES, ARCHITECTURE AND USER INTERFACES BASED ON DYNAMIC DIRECTION INFORMATION,” the entirety of each of which is incorporated herein by reference.
TECHNICAL FIELD
The subject disclosure relates to the provision of direction-based services for a device based on direction information and/or other information, such as location information, and to overlaying information in an image based view of a set of points of interest associated with one or more direction-based services.
BACKGROUND
By way of background concerning some conventional systems, mobile devices, such as portable laptops, PDAs, mobile phones, navigation devices, and the like have been equipped with location based services, such as global positioning system (GPS) systems, WiFi, cell tower triangulation, etc. that can determine and record a position of mobile devices. For instance, GPS systems use triangulation of signals received from various satellites placed in orbit around Earth to determine device position. A variety of map-based services have emerged from the inclusion of such location based systems that help users of these devices to be found on a map, to facilitate point to point navigation in real-time, and to search for locations near a point on a map.
However, such navigation and search scenarios are currently limited to displaying relatively static information about endpoints and navigation routes. While some of these devices with location based navigation or search capabilities allow update of the bulk data representing endpoint information via a network, e.g., when connected to a networked portable computer (PC) or laptop, such data again becomes fixed in time. Accordingly, it would be desirable to provide a set of richer experiences for users than conventional experiences predicated on location and conventional processing of static bulk data representing potential endpoints of interest.
Moreover, with conventional navigation systems, a user may wish to request information about a particular point of interest (POI), but it is not clear what additional information might be available about the various POIs represented on a display, other than that it is possible to navigate to a particular POI. The user experience suffers as a result, since opportunities to interact with POIs are lost with conventional navigation systems.
The above-described deficiencies of today's location based systems and devices are merely intended to provide an overview of some of the problems of conventional systems, and are not intended to be exhaustive. Other problems with the state of the art and corresponding benefits of some of the various non-limiting embodiments may become further apparent upon review of the following detailed description.
SUMMARY
A simplified summary is provided herein to help enable a basic or general understanding of various aspects of exemplary, non-limiting embodiments that follow in the more detailed description and the accompanying drawings. This summary is not intended, however, as an extensive or exhaustive overview. Instead, the sole purpose of this summary is to present some concepts related to some exemplary non-limiting embodiments in a simplified form as a prelude to the more detailed description of the various embodiments that follow.
Direction based pointing services are provided for portable devices or mobile endpoints. Mobile endpoints can include a positional component for receiving positional information as a function of a location of the portable electronic device, a directional component that outputs direction information as a function of an orientation of the portable electronic device and a processing engine that processes the positional information and the direction information to determine a subset of points of interest relative to the portable electronic device as a function of the positional information and/or the direction information.
Devices or endpoints can include compass(es), e.g., magnetic or gyroscopic, to determine a direction and location based systems for determining location, e.g., GPS. To supplement the positional information and/or the direction information, devices or endpoints can also include component(s) for determining speed and/or acceleration information for processing by the engine, e.g., to aid in the determination of gestures made with the device.
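The disclosure does not prescribe a particular implementation of the processing engine; purely as a non-limiting sketch, the combination of positional information (e.g., GPS) and direction information (e.g., compass heading) to determine a subset of points of interest could be expressed as follows. The type name, function names, and the ±30° half-angle are assumptions of this illustration, not part of the disclosure:

```python
import math
from dataclasses import dataclass

@dataclass
class PointOfInterest:
    name: str
    lat: float  # degrees
    lon: float  # degrees

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial great-circle bearing from point 1 to point 2, in degrees from North."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    y = math.sin(dlon) * math.cos(phi2)
    x = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
    return math.degrees(math.atan2(y, x)) % 360.0

def pois_in_view(device_lat, device_lon, heading_deg, pois, half_angle=30.0):
    """Return the subset of POIs whose bearing from the device lies within
    +/- half_angle degrees of the device's compass heading."""
    subset = []
    for poi in pois:
        b = bearing_deg(device_lat, device_lon, poi.lat, poi.lon)
        diff = (b - heading_deg + 180.0) % 360.0 - 180.0  # signed angular difference
        if abs(diff) <= half_angle:
            subset.append(poi)
    return subset
```

In this sketch, a POI due north of a device heading north falls inside the subset, while a POI due south does not; speed and acceleration inputs could further refine the selection, as described above.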
With the addition of directional information in the environment, a variety of service(s) can be provided on top of identification of specific object(s) of interest. For instance, content for POIs can be overlaid on top of an image based representation of real space to provide entry points to viewing information about the POIs or interacting with the POIs.
Various embodiments include displaying image data representing a subset of real space near a portable computing device; determining a set of points of interest (POIs) for direction based service(s) supported by the portable computing device within scope of the real space represented by the image data and automatically overlaying POI content on the image data. In one embodiment, the display is included in an electronic device worn such that the display is substantially in front of a user's eyes, e.g., as part of a heads up display, helmet, headgear, shoulder supported device, neck supported device, etc.
These and other embodiments are described in more detail below.
BRIEF DESCRIPTION OF THE DRAWINGS
Various non-limiting embodiments are further described with reference to the accompanying drawings in which:
FIG. 1 illustrates a block diagram of POIs displayed and corresponding overlay information in accordance with an embodiment;
FIG. 2 illustrates a non-limiting sample image overlay in accordance with an embodiment;
FIG. 3 illustrates another non-limiting sample image overlay in accordance with an embodiment;
FIG. 4 is a flow diagram illustrating an exemplary non-limiting process for when a portable electronic device is held in a vertical plane;
FIG. 5 is a flow diagram illustrating an exemplary non-limiting process for determining a planar orientation of a device;
FIG. 6 is a block diagram illustrating alternate embodiments for image based representation of real space based on whether the device is horizontal or vertical;
FIG. 7 is a block diagram illustrating an embodiment for image based representation of real space when a planar orientation of the device is vertical;
FIG. 8 is a block diagram illustrating an embodiment for image based representation of real space when executing a collision based algorithm(s);
FIG. 9 is a block diagram illustrating an embodiment for image based representation of real space when marking points of interest for audio/visual notification;
FIG. 10 is an embodiment of an image rendering device as implemented in a heads up display device, such as headgear, glasses, or the like;
FIG. 11 is a block diagram illustrating alternate embodiments for image based representation of real space based on the device being in a substantially horizontal plane;
FIG. 12 is a block diagram illustrating alternate embodiments for image based representation of real space based on the device being in a substantially horizontal plane;
FIG. 13 is a block diagram illustrating alternate 2-D or 3-D embodiments for image based representation of real space in front of a user of the device based on the device being in a substantially vertical plane;
FIG. 14 is a block diagram illustrating alternate 2-D or 3-D embodiments for image based representation of real space behind a user of the device based on the device being in a substantially vertical plane;
FIG. 15 is a block diagram illustrating alternate embodiments for modifying the image based representation of real space prior to overlaying POI content;
FIG. 16 is a non-limiting process for overlaying POI content on a display in a direction based services environment;
FIG. 17 is another non-limiting process for overlaying POI content on a display in a direction based services environment;
FIG. 18 is a sample mobile computing device for performing POI overlay of content in a direction based services environment applicable to one or more embodiments herein;
FIG. 19 is an exemplary non-limiting architecture for providing direction based services based on direction based requests as satisfied by network services and corresponding data layers;
FIG. 20 is a sample computing device in which one or more embodiments described herein may be implemented;
FIG. 21 illustrates a sample embodiment in the context of advertisement content and opportunity to deliver the advertisement content as overlay content to clients consuming direction based services for a set of POIs within scope;
FIG. 22 is a block diagram illustrating the formation of motion vectors for use in connection with location based services;
FIG. 23, FIG. 24 and FIG. 25 illustrate aspects of algorithms for determining intersection endpoints with a pointing direction of a device;
FIG. 26 represents a generic user interface for a mobile device for representing points of interest based on pointing information;
FIG. 27 represents some exemplary, non-limiting alternatives for user interfaces for representing point of interest information;
FIG. 28 represents some exemplary, non-limiting fields or user interface windows for displaying static and dynamic information about a given point of interest;
FIG. 29 illustrates a process for predicting points of interest and aging out old points of interest in a region-based algorithm;
FIG. 30 illustrates a first process for a device upon receiving a location and direction event;
FIG. 31 illustrates a second process for a device upon receiving a location and direction event;
FIG. 32 is a block diagram representing an exemplary non-limiting networked environment in which embodiment(s) may be implemented; and
FIG. 33 is a block diagram representing an exemplary non-limiting computing system or operating environment in which aspects of embodiment(s) may be implemented.
DETAILED DESCRIPTION
Overview
Among other things, current location based systems and services, e.g., GPS, cell triangulation, P2P location services, such as Bluetooth, WiFi, etc., tend to be based on the location of the device only, and tend to provide static experiences that are not tailored to a user because the data about endpoints of interest is relatively static, or fixed in time. Another problem is that a user may wish to do other things than navigate to a particular point of interest (POI).
At least partly in consideration of these and other deficiencies of conventional location based services, in various non-limiting embodiments, in addition to displaying image based representations of real space including representations of direction based services objects determined for the real space, e.g., points of interest, the image based representations are overlaid with additional POI information pertaining to the POIs. In this regard, the user experience is substantially improved since users can view or interact with POI information in conceptual proximity to the objects as represented in the image based representation of real space, e.g., in real time.
For instance, various embodiments of a portable device are provided that use direction information, position information and/or motion information to determine a set of POIs within scope. Then, when displaying an image based view (e.g., video data or satellite images) of the set of POIs and corresponding real space, POI information is overlaid, next to, nearby or over the POIs. A way to interact with POIs is thus provided via a device having access to direction information about a direction of the device, position information about a position of the device and optional motion information, wherein based on the information, the device intelligently fetches content regarding POIs and overlays the content in association with the POIs as represented in the image data.
A non-limiting device provisioned for direction based services can include an engine for analyzing location information (e.g., GPS, cell phone triangulation, etc.), direction information such as compass information (e.g., North, West, South, East, up, down, etc.), and optionally movement information (e.g., accelerometer information) to allow a platform for pointing to and thereby finding objects of interest in a user's environment. A variety of scenarios are contemplated based on a user finding information of interest about objects of interest, such as restaurants, or other items around an individual, or persons, places or events of interest nearby a user, tailoring information to that user (e.g., coupons, advertisements), and then overlaying that content on a display representing real space in proximity to the device. Any of the embodiments described herein can be provided in the context of a heads up display of POIs, or a portable electronic device, i.e., any computing device wherein the act of pointing directionally with the device can be used in connection with one or more direction based services.
In various non-limiting embodiments, a process includes displaying image data representing a subset of real space in a pre-defined vicinity of a portable computing device, determining a set of POIs of direction based service(s) supported by the portable computing device within scope and automatically overlaying content relating to the POIs of the set on the image data. The overlaying can include indicating an interactive capability with respect to the POI(s) via the direction based service(s). The overlaid content can overlap, or be presented near, the underlying POI as represented in the image data. The content relating to the POIs can be automatically received from the direction based service(s).
The image data can be any one or more of: video data input from an image capture device of the portable computing device, image data received from a network service based on a location of the portable computing device, satellite image data received from a network service based on a location of the portable computing device, or image data received from a network service based on a direction and the location of the portable computing device.
The process can also include determining a planar orientation of a display of the portable computing device. In such embodiments, if the planar orientation is substantially vertical, two dimensional image data representing a subset of three dimensional real space in front of, or alternatively behind, the user is displayed. The determining can also ascertain whether the display is facing substantially up or substantially down. If the display is facing substantially up, the image data is a topographical map of the area in the vicinity of the device. If the display is facing substantially down, the image data is a celestial body map of the sky in the vicinity of the device.
In other embodiments, a portable electronic device includes a positional component that outputs position information as a function of a location of the portable electronic device and a directional component, e.g., a digital compass, that outputs direction information as a function of an orientation of the portable electronic device. The position information and the direction information are processed to determine POIs relating to the position information. Then, the POIs are displayed within a user interface representing geographical space nearby the portable electronic device along with overlaid interactive user interface elements overlapping or over the at least one point of interest in the user interface. Automatic action can thus be taken by inputting one or more of the interactive user interface elements.
The device can also include a motion component that outputs motion information as a function of at least one movement of the portable device. Using the motion information, gestures can be determined with respect to the POIs, and the gestures can initiate automatic action with respect to at least one POI.
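The disclosure leaves the gesture-determination mechanism open; one non-limiting way a motion component's output could be reduced to a simple gesture is a threshold test over a window of accelerometer samples. The function name, the 2.5 g threshold, and the peak count are assumptions of this sketch only:

```python
import math

GRAVITY = 9.81  # m/s^2

def detect_shake(samples, threshold=2.5 * GRAVITY, min_peaks=3):
    """Classify a window of (ax, ay, az) accelerometer samples, in m/s^2, as a
    'shake' gesture when the acceleration magnitude exceeds the threshold on
    at least min_peaks samples; a recognized gesture could then trigger an
    automatic action with respect to a POI."""
    peaks = sum(1 for ax, ay, az in samples
                if math.sqrt(ax * ax + ay * ay + az * az) > threshold)
    return peaks >= min_peaks
```

A window of quiet samples (magnitude near 1 g) yields no gesture, while a window of vigorous samples does; richer gestures (flicks, circles) would require more elaborate classifiers than this illustration.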
In other embodiments, a method for displaying POI information on a mobile device is provided including determining a set of POIs within interactive scope of the device based on direction information and the position information of the device, displaying an image based representation of some POIs, receiving POI advertisement information for the POIs and automatically overlaying the POI advertisement information at pertinent locations of the image based representation.
This can include modifying the image based representation prior to the displaying of the POIs. This might, for example, include switching between nighttime and daytime views, modifying the image based representation based on a planar orientation of the device, or modifying the image based representation as a function of the direction information of the device.
Accordingly, in various non-limiting embodiments, a way to interact with POIs is provided by a pointing device having imaging means, such as a camera for still or video imaging by the device. In a variety of embodiments, visual indications of POIs are overlaid on an image or map or graphic of a location, so that a user can easily distinguish between their actual surroundings and POIs in those surroundings. In addition to being implemented on a pointing device, a heads up display embodiment is provided that is worn on the head. A variety of scenarios are explored showing the benefits of POI overlay content.
While each of the various embodiments herein is presented independently, e.g., as part of the sequence of respective Figures, one can appreciate that a portable device and/or associated network services, as described, can incorporate or combine two or more of any of the embodiments. Given that each of the various embodiments improves the overall services ecosystem in which users wish to operate, a synergy results from combining their different benefits. Accordingly, the combination of different embodiments described below shall be considered herein to represent a host of further alternate embodiments.
Details of various other exemplary, non-limiting embodiments are provided below.
Overlay of Information Associated with Points of Interest of Direction Based Data Services
As mentioned, with the addition of directional information in the environment, a variety of service(s) can be provided on top of identification of specific object(s) or point(s) of interest. For instance, content for POIs can be overlaid on top of an image based representation of real space to provide entry points to viewing information about the POIs or interacting with the POIs. The techniques can be embodied in any device provisioned for direction based services, such as a portable electronic device, or an electronic device worn such that the display is substantially in front of a user's eyes, e.g., as part of a heads up display, helmet, headgear, shoulder supported device, neck supported device, etc.
FIG. 1 is a high level block diagram of POIs displayed and corresponding overlay information in accordance with an embodiment of a user interface. Direction based services enabled device 100 (examples provided below) includes a display 110 for displaying image based data corresponding to real space in proximity to device 100 and/or as a function of direction of the display 110 of device 100. In a typical scenario, based on location and/or direction, a set of POIs is displayed in the image data on display 110, such as POIs 122, 124 and 126. Correspondingly, in various embodiments, POI content is retrieved from one or more direction based data services and overlaid near the POIs 122, 124 and 126, for example, at locations indicated by POI overlays 112, 114 and 116, respectively.
FIG. 2 illustrates a non-limiting sample image overlay in accordance with an embodiment. Where a device includes a camera, a representative non-limiting overlay UI 200 might, for example, include an image based representation of four POIs POI1, POI2, POI3 and POI4. The POIs are overlaid over actual image data being viewed in real time on the device via an LCD screen or like display. The actual image data can be of products on a shelf or other display or exhibit in a store. Thus, as the user aims the camera around his or her environment, the lens becomes the pointer, and the POI information can be overlaid intelligently for discovery of endpoints of interest. Moreover, a similar embodiment can be imagined even without a camera, such as a UI in which 3-D objects are virtually represented based on real geometries known for the objects relative to the user.
Thus, in the present non-limiting embodiment, the device UI can be implemented consistent with a camera, or a virtual camera, view for intuitive use of such devices. The pointer mechanism of the device could also switch based on whether the user is currently in live view mode for the camera or not. Moreover, assuming sufficient processing power and storage, real time image processing could discern an object of interest and, based on image signatures, overlay POI information over such image in a similar manner to the above embodiments. In this regard, with the device provided herein with a camera, a user can perform such actions as zoom in/zoom out, perform tilt detection for looking down or up, or pan across a field of view to obtain a range of POIs associated with a panning scope, etc.
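The disclosure does not specify how overlay positions are computed; as one non-limiting sketch, a POI's bearing could be mapped to a horizontal pixel column of the live camera image using the camera's heading and horizontal field of view. The linear mapping and the function name are assumptions of this illustration:

```python
def overlay_x(poi_bearing, heading, fov_deg, image_width):
    """Map a POI's bearing (degrees from North) to a horizontal pixel column
    in the camera image, given the camera heading and horizontal field of
    view. Returns None when the POI falls outside the current view, e.g.,
    while the user pans across the scene."""
    diff = (poi_bearing - heading + 180.0) % 360.0 - 180.0  # signed offset
    if abs(diff) > fov_deg / 2.0:
        return None
    # Linear mapping: -fov/2 -> column 0, +fov/2 -> last column.
    return int((diff + fov_deg / 2.0) / fov_deg * (image_width - 1))
```

Under this sketch, as the user pans the device, each POI overlay slides across the display and drops out once its bearing leaves the field of view; a real implementation would also account for lens distortion and vertical placement.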
With respect to a representative set of user settings, a number or maximum number of desired endpoints delivered as results can be configured. How to filter can also be configured, e.g., 5 most likely, 5 closest, 5 closest to 100 feet away, 5 within category or sub-category, alphabetical order, etc. In each case, based on a pointing direction, implicitly a cone or other cross section across physical space is defined as a scope of possible points of interest. In all cases, some set of POIs is defined according to a proximity to the device. In this regard, the width or deepness of this cone or cross section can be configurable by the user to control the accuracy of the pointing, e.g., narrow or wide radius of points and how far out to search. The images of FIG. 2 do not need to come from a camera but could come from a network or satellite service based on location and/or direction.
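Purely as a non-limiting illustration of such user settings (the disclosure names the settings but no implementation), the configurable result cap, range cutoff, and ordering rule could be applied to candidate POIs already scoped by the pointing cone as follows; the function and parameter names are assumptions of this sketch:

```python
def filter_scope(pois_with_distance, max_results=5, max_range_m=None, order="closest"):
    """Apply user-configurable scope settings to a list of (poi_name, distance_m)
    pairs: an optional range cutoff, an ordering rule ('closest' or
    'alphabetical'), and a cap on the number of results delivered."""
    if max_range_m is not None:
        pois_with_distance = [(p, d) for p, d in pois_with_distance if d <= max_range_m]
    if order == "closest":
        pois_with_distance = sorted(pois_with_distance, key=lambda pd: pd[1])
    elif order == "alphabetical":
        pois_with_distance = sorted(pois_with_distance, key=lambda pd: pd[0])
    return pois_with_distance[:max_results]
```

For example, with candidates Cafe (120 m), Bank (40 m) and Zoo (900 m), a "5 closest within 500 m" configuration would return Bank then Cafe and drop Zoo.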
FIG. 3 illustrates another non-limiting sample image overlay in accordance with an embodiment. In contrast to the “in front of the user's device or face” view of FIG. 2, FIG. 3 illustrates a topographical map view via non-limiting overlay UI 300. For example, UI 300 includes an image based topographical representation of five POIs POI1, POI2, POI3, POI4 and POI5. The view of POIs can be compared between FIG. 2 and FIG. 3 (except that POI5 is not visible in FIG. 2).
FIG. 4 is a flow diagram of a non-limiting process whereby it is anticipated that a user will hold a device substantially in a vertical plane, as if scanning an area in a camera viewfinder, with overlay information and actions introduced to give the viewfinder context for POI action, though the image data representing the real space can be received from any source. For instance, a user's arm may be extended forward in front of the user's eyes, with the user observing the display by looking forward towards the landscape. In such a case where the device is held upright, substantially in the vertical plane, which can be detected by motion information of the device, at 400, camera imagery is displayed with an overlay of point of interest indication or information. At 410, a distance is indicated to scope the points of interest on display, e.g., close, near or far items. For instance, nearness or farness can be based on tiers of concentric rings and user indication of which tier.
At 420, information about a selected point of interest is displayed as an overlay over the image. At 430, an action is requested with respect to the selected place or item, e.g., show information, directions, etc. For example, a user may wish to review the item or add to Wikipedia knowledge about the point of interest, e.g., upload information, images, etc. In this regard, because it is intuitive to give a 3-D perspective view when the viewing plane is orthogonal to the ground plane, in the present embodiment, a 3-D perspective view with POI information overlay is implemented when the device is held substantially in the vertical plane. In effect, the camera shows the real space behind the device, and indications of points of interest in that space, as if the user were performing a scan of his or her surroundings with the device. Direction information of the device enables data and network services to know what the scope of objects for interaction with the device is.
FIG. 5 is another non-limiting flow diagram relating to a process for determining whether a portable device is aligned substantially vertically or horizontally with respect to a viewing plane of the device. At500, motion information of the device is analyzed, e.g., accelerometer input. At510, it is determined whether a viewing plane of a portable device is aligned with a substantially horizontal plane substantially parallel to a ground plane or aligned with a substantially vertical plane substantially orthogonal to the ground plane. At520, if the answer is horizontal, a topographical map view of a geographical area map is displayed determined based on location and direction information measured by the portable device. Indication(s) of the point(s) of interest on the map can also be displayed, e.g., highlighting or other designation, or enhancement. At530, if the answer is vertical, then an image based view of three-dimensional (3-D) space extending from the portable device (e.g., from the camera) is displayed. Similarly to the topographical map view, indication(s) of point(s) of interest pertaining to the 3-D space can be displayed.
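As a non-limiting sketch of the determination at 500-530 (the disclosure specifies the behavior, not the code), the accelerometer's gravity vector could be classified into the three display modes. The axis convention, the tolerance value, and the function name are assumptions of this illustration, including the assumption that the reported z component is near +1 when the display faces up:

```python
def classify_orientation(gx, gy, gz, tol=0.35):
    """Classify the viewing plane from a unit gravity vector in device
    coordinates, assuming +z points out of the display face and gz is near
    +1 when the display faces up.
      'face_up'   -> display a topographical map view
      'face_down' -> display a celestial/sky map view
      'vertical'  -> display the camera's 3-D perspective view with POI overlay
    """
    if gz > 1.0 - tol:
        return "face_up"
    if gz < -(1.0 - tol):
        return "face_down"
    if abs(gz) < tol:
        return "vertical"
    return "indeterminate"  # tilted between planes; keep the current view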
FIG. 6 is a block diagram illustrating alternate embodiments for image based representation of real space based on whether the device is horizontal or vertical, illustrating a general difference between embodiments having a display of the device in a horizontal planar orientation or a vertical planar orientation. With device 600 in the horizontal plane, a 2-D topographical map display of a geographical area and indications of points of interest 620 is displayed. In this regard, device 600 detects it is substantially in the horizontal plane and displays UI 610. When device 650 detects it is substantially in the vertical plane, upright, a vertical plane UI 660 is invoked which, instead of a 2-D plan view of the world, includes a 3-D perspective view 670 as reflected by the 2-D imagery of the camera input.
FIG. 7 is a block diagram illustrating an embodiment for image based representation of real space when a planar orientation of the device 700 is vertical, thereby invoking the image acquisition device 710 to acquire input 720 and display the input on display 730 with POI information 740. In this regard, as the user rotates the camera according to the arrow 750, the POI information changes along with the scope of the camera input 710 as it changes with the device 700 spinning around.
FIG. 8 is a block diagram illustrating an embodiment for image based representation of real space when executing a collision based algorithm. Direction based services enabled device 800 includes a display 810 for displaying image based data corresponding to real space in proximity to device 800 and/or as a function of direction of the display 810 of device 800. In a typical scenario, based on location and/or direction, a set of POIs is displayed in the image data on display 810, such as POIs 822, 824 and 826. Correspondingly, in various embodiments, POI content is retrieved from one or more direction based data services and overlaid near the POIs 822, 824 and 826, for example, at locations indicated by POI overlays 812, 814 and 816, respectively.
In addition, since POIs 822, 824 and 826 may be moving along a path recorded or tracked by one or more direction based services, direction indicators 832, 834 and 836, respectively, can be provided to give a user a real-time view of the movement of the POIs 822, 824 and 826 and their current direction. In this way, based on algorithms that either help the user to collide (or otherwise come into contact) with other POIs, or help the user to avoid other POIs, a variety of applications and scenarios are contemplated, from social networking scenarios to restaurant finding to games, such as hide and seek.
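The collision based algorithm itself is not specified in the disclosure; one standard non-limiting building block it might use is the time of closest approach between the user and a tracked, moving POI, computed here in a flat 2-D approximation (positions in metres, velocities in m/s). The function name and the simplifications are assumptions of this sketch:

```python
def time_of_closest_approach(p_user, v_user, p_poi, v_poi):
    """Return the time, in seconds from now, at which the user and a moving
    POI are closest, given 2-D positions and velocities. Clamps to 0.0 when
    the two are already separating. A collision-seeking (or avoiding) service
    could act when this time and the corresponding distance are small."""
    rx, ry = p_poi[0] - p_user[0], p_poi[1] - p_user[1]  # relative position
    vx, vy = v_poi[0] - v_user[0], v_poi[1] - v_user[1]  # relative velocity
    vv = vx * vx + vy * vy
    if vv == 0.0:
        return 0.0  # no relative motion; distance never changes
    t = -(rx * vx + ry * vy) / vv
    return max(t, 0.0)
```

For instance, a user walking at 1 m/s straight toward a stationary POI 10 m away has a closest approach 10 s out, whereas a POI moving directly away yields 0.0 (closest now).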
FIG. 9 is a block diagram illustrating an embodiment for image based representation of real space when marking points of interest for audio/visual notification. Direction based services enabled device 900 includes a display 910 for displaying image based data corresponding to real space in proximity to device 900 and/or as a function of direction of the display 910 of device 900. In a typical scenario, based on location and/or direction, a set of POIs is displayed in the image data on display 910, such as POIs 922, 924 and 926. Correspondingly, in various embodiments, POI content is retrieved from one or more direction based data services and overlaid near the POIs 922, 924 and 926, for example, at locations indicated by POI overlays 912, 914 and 916, respectively. In one embodiment, any of the POIs 922, 924 or 926 can be marked by a user implicitly or explicitly, and as a result, an audio or visual notification 932 can be applied to the marked POI 926 or POI overlay 916 now, or at a future interaction time as well (e.g., a reminder).
FIG. 10 is an embodiment of an image rendering device as implemented in a heads up display device, such as headgear, glasses, or the like. As mentioned, any of the embodiments herein can be equally applied in a set of glasses, or another embodiment in which a display can be presented in front of a user's eyes without being a handheld device per se. For instance, this could be glasses 1014 or head gear 1012. In either case, the device includes a heads up display 1010 that supports the display of POI data received from direction based services. A camera C can be included to observe what the user's eye or eyes 1020 are looking at. Devices 1014 or 1012 can further include voice input 1040 for voice input commands to the display to take action with respect to overlay content. The content can also be projected content or a virtual image plane with 2-D or 3-D POI overlays in alternative embodiments of the device 1012, or 1014, or HUD 1010.
FIG. 11 is a block diagram illustrating alternate embodiments for image based representation of real space based on the device being in a substantially horizontal plane. In this embodiment, a device 1100 is held with the display 1105 facing substantially up towards the sky, or sky plane 1120, which is defined generally parallel with respect to a ground plane 1110. In such an embodiment, it can be inferred that the user wants a topographical map view 1125 of his or her surroundings or proximity in connection with display 1105.
Similar to FIG. 11, FIG. 12 is a block diagram illustrating an alternate embodiment for image based representation of real space based on the device being in a substantially horizontal plane. However, in this case, instead of up, a device 1200 is held with the display 1205 facing substantially down towards the ground, i.e., towards a ground plane 1210 running parallel to a sky plane 1220. In such an embodiment, it can be inferred that the user wants a sky map view 1225 of the sky above the user in connection with display 1205, particularly if it can be determined that the user's head or eyes are underneath the display (i.e., looking up, e.g., stargazing). In one application, at nighttime, a user can scan the sky and learn of planets, constellations, etc., marking them and interacting with them via the universe of users also observing, or having observed, such heavenly bodies.
FIG. 13 is a block diagram illustrating alternate 2-D or 3-D embodiments for image based representation of real space in front of a user of the device based on the device being in a substantially vertical plane. As mentioned, where the user holds a device 1300 having a display 1305 substantially facing the device user, the display 1305 can display a 2-D or 3-D view of the POIs in front of the user 1325. For instance, an imaging element 1330 can be used to provide the image based view in front of the user, and POI content can be overlaid on the image based view.
FIG. 14 is a block diagram illustrating alternate 2-D or 3-D embodiments for image based representation of real space behind a user of the device based on the device being in a substantially vertical plane, e.g., a sleuth mode to see what is happening with moving POIs behind the user. Where a user thus holds a device 1400 having a display 1405 substantially facing the device user, the display 1405 can display a 2-D or 3-D view of the POIs behind the user 1425. For instance, an imaging element 1430 can be used to provide an image based view of what is behind the user, and POI content can be overlaid on the image based view.
FIG. 15 is a block diagram illustrating alternate embodiments for modifying the image based representation of real space prior to overlaying POI content. For instance, a device 1500 supporting direction based services may include an overall scene on display 1505 of some area being pointed at by the device. The area might include POIs 1510 and 1512 on the display, and overlay elements 1520 and 1522 are respectively positioned near the POIs. According to the present embodiment, a variety of views of the image data can be achieved other than camera based views of the surroundings. For instance, algorithms can be applied to the image based view on display 1505 including a night view 1530, an edge detected view 1532, a cartoonized view 1534, a virtual earth image view 1536, a POI heat map view (popularity, relevance, etc.) 1537 or other image based representations of a scene or POIs 1538, which may be suited to a given application.
FIG. 16 is a non-limiting process for overlaying POI content on a display in a direction based services environment. At 1600, image data is displayed representing a subset of real space in a pre-defined vicinity of a portable computing device. At 1610, a set of POIs of direction based service(s) is determined within scope. At 1620, a planar orientation of a display of the portable computing device is determined. At 1630, the content relating to the POIs can be received from direction based service(s). At 1640, the content relating to the POIs is automatically overlaid on the image data ready for user viewing or interaction.
FIG. 17 is another non-limiting process for overlaying POI content on a display in a direction based services environment. At 1700, direction information is determined as a function of a direction of the device and at 1710, position information is determined as a function of a position of the device. At 1720, a set of points of interest within interactive scope of the device is determined based on the direction information and the position information. At 1730, an image based representation of the point(s) of interest is displayed by the device. At 1740, point of interest advertisement information for the point(s) of interest of the set is received and at 1750, the point of interest advertisement information is automatically overlaid at pertinent locations of the image based representation relating to the point(s) of interest.
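As a non-limiting illustration only, the above flow can be sketched in Python; the callable names and their division of labor are illustrative assumptions for exposition, not any claimed implementation:

```python
def overlay_pipeline(get_position, get_heading, find_pois, fetch_content, draw):
    """Sketch of the flow at 1700-1750: resolve position/direction,
    scope the POIs, then fetch and overlay content for each POI."""
    position = get_position()            # e.g., from a GPS subsystem (1710)
    heading = get_heading()              # e.g., from a digital compass (1700)
    pois = find_pois(position, heading)  # POIs within interactive scope (1720)
    for poi in pois:
        content = fetch_content(poi)     # POI/advertisement info (1740)
        draw(poi, content)               # overlay at pertinent location (1750)
    return pois
```

In use, each callable would be backed by the corresponding subsystem (location, direction, service interface, display), keeping the pipeline itself independent of any particular sensor stack.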
FIG. 18 is a sample mobile computing device for performing POI overlay of content in a direction based services environment applicable to one or more embodiments herein. In this regard, a set of services 1860 can be built based on location information 1822 and direction information 1832 collected by the phone with a corresponding interface or display 1825 including POI overlay content as described in one or more embodiments herein. For instance, location information 1822 can be recorded by a location subsystem 1820 such as a GPS subsystem communicating with GPS satellites 1840. Direction or pointing information 1832 can be collected by a direction subsystem 1830, such as a compass, e.g., gyroscopic, magnetic, digital compass, etc. In addition, optionally, movement information 1812 can be gathered by the device 1800, e.g., via tower triangulation algorithms, and/or acceleration of the device 1800 can be measured as well, e.g., with an accelerometer. The collective information 1850 can be used to gain a sense of not only where the device 1800 is located in relation to other potential points of interest tracked or known by the overall set of services 1860, but also what direction the user is pointing the device 1800, so that the services 1860 can appreciate at whom or what the user is pointing the device 1800.
In addition, a gesture subsystem 1870 can optionally be included, which can be predicated on any one or more of the motion information 1812, location information 1822 or direction information 1832. In this regard, not only can direction information 1832 and location information 1822 be used to define a set of unique gestures, but also motion information 1812 can be used to define an even more complicated set of gestures. The gesture monitor 1870 produces gesture information 1872, which can be input as appropriate in connection with delivering services 1860.
As mentioned, in another aspect, a device 1800 can include a client side memory 1880, such as a cache, of potentially relevant points of interest, which, based on the user's movement history can be dynamically updated. The context, such as geography, speed, etc. of the user can be factored in when updating. For instance, if a user's velocity is 2 miles an hour, the user may be walking and interested in updates at a city block by city block level, or at a lower level granularity if they are walking in the countryside. Similarly, if a user is moving on a highway at 60 miles per hour, the block-by-block updates of information are no longer desirable, but rather a granularity can be provided and predictively cached on the device 1800 that makes sense for the speed of the vehicle.
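The speed-to-granularity heuristic above can be sketched as follows, as a non-limiting example; the speed thresholds and the metric granularity values are illustrative assumptions only, not values taken from the disclosure:

```python
def cache_granularity_m(speed_mph):
    """Pick a predictive-cache update granularity (in meters) from device
    speed, so a walker gets block-level updates and a highway driver gets
    coarser, longer-range updates.  All cut-offs are illustrative."""
    if speed_mph < 5:        # walking pace: block-by-block updates
        return 100
    elif speed_mph < 30:     # cycling / city driving: wider blocks
        return 500
    else:                    # highway speeds: coarse, predictive reach
        return 2000
```

A service could use the returned value to decide how far ahead along the movement history to prefetch POIs into the client side cache 1880.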
FIG. 19 is an exemplary non-limiting architecture for providing direction based services 1910 based on direction based requests as satisfied by network services and corresponding data layers according to one or more embodiments. Location information 1900 (e.g., WiFi, GPS, tower triangulation, etc.), direction information 1902 (e.g., digital compass) and user intent information 1904, which can be implicit or explicit, are input to services 1910, which may be any one or more of web services 1912, cloud services 1914 or other data services 1916. As a result, content 1940 is returned for efficient real-time interactions with POIs of current relevance. Data can come from more than one storage layer or abstraction 1920, 1922, 1924, . . . , or abstraction 1930, 1932, 1934, . . . , e.g., from local server databases or remote third party storage locations.
FIG. 20 illustrates an exemplary non-limiting device 2000 including processor(s) 2010 having a position engine or subsystem 2020 for determining a location of the device 2000 and a direction engine or subsystem 2030 for determining a direction or orientation of the device 2000. Then, by interacting with local application(s) 2040 and/or service(s) 2070, content, such as advertisements, can be delivered to the device, which can be tailored to device intent and a place in which the device is present, or other factors. When the content is displayed according to an interaction, the content can be rendered by graphics subsystem or display/UI 2050 or audio subsystem 2060, and POI content can be supplemented with overlay content placed at, near, overlapping with or over corresponding POIs in an underlying image based representation.
In one non-limiting embodiment, a point structure 2090 is included, e.g., a triangular or other polygonal piece that points along an orientation line 2095 upon which directional calculations are based. Similarly, the orientation line 2095 can be indicated by graphics subsystem display/UI 2050 with or without point structure 2090. In this regard, various embodiments herein enable POI ID information 2080 to be received from services 2070 so that the content can be viewed or interactions can occur with services 2070 with respect to the POIs.
FIG. 21 illustrates a sample embodiment in the context of advertisement content and opportunity to deliver the advertisement content as overlay content to clients consuming direction based services for a set of POIs within scope. A potential benefit of the POI overlay content 2120 for devices supporting direction based services 2120 based on location information 2140 and direction information 2150 is advertising opportunity 2130. Based on aggregate data, business intelligence can calculate and price the cost of an advertising opportunity 2130 based on statistics and other factors. In short, if Coca Cola believes that it is likely that a user will be nearby Coca Cola merchandise soon, there is value to Coca Cola in accelerating the process of getting information to the user's device about a Coke coupon via POI overlay, such that the Coke coupon pops up immediately when the user is within range of a Coke retailer POI.
Due to the enhanced interactive skills of a device provisioned for direction based location services, FIG. 21 also illustrates a variety of device interactions that help to form aggregate and individual user data for purposes of input to a business intelligence and advertising engine 2130, and/or invited by way of POI overlay content. By measuring interactions with points of interest via text 2100, search 2102, barcode scan 2104, image scan 2106, designation/selection of an item of interest 2108, price compare operations 2110, gesture input 2112, other interaction with an item of interest 2114, voice input, etc., substantial user knowledge is gained that can help determine probabilities sufficient to trigger advertising opportunities for interested entities 2130. In addition, those advertising opportunities 2130 can be sent to the user in the form of overlay UI content 2120 that invites any of the foregoing types of device interactions as well.
In this regard, users can interact with the endpoints in a host of context sensitive ways to provide or update information associated with endpoints of interest, or to receive beneficial information or instruments (e.g., coupons, offers, etc.) from entities associated with the endpoints of interest, and any of such actions can be facilitated by information, content, advertising, etc. that can relate to POIs and overlaid with the POIs in connection with an image based representation of the POIs.
Supplemental Context Regarding Pointing Devices, Architectures and Services

The following description contains supplemental context regarding potential non-limiting pointing devices, architectures and associated services to further aid in understanding one or more of the above embodiments. Any one or more of any additional features described in this section can be accommodated in any one or more of the embodiments described above with respect to predictive direction based services at a particular location for given POI(s). While such combinations of embodiments or features are possible, for the avoidance of doubt, no embodiments set forth in the subject disclosure should be considered limiting on any other embodiments described herein.
As mentioned, a broad range of scenarios can be enabled by a device that can take location and direction information about the device and build a service on top of that information. For example, by using an accelerometer in coordination with an on board digital compass, an application running on a mobile device updates what each endpoint is “looking at” or pointed towards, attempting hit detection on potential points of interest either to produce real-time information for the device, or to allow the user to select a range, or, using the GPS, a location on a map, and set information such as “Starbucks—10% off cappuccinos today” or “The Alamo—site of . . . ” for others to discover. One or more accelerometers can also be used to perform the function of determining direction information for each endpoint as well. As described herein, these techniques can become more granular to particular items within a Starbucks, such as “blueberry cheesecake” on display in the counter, enabling a new type of sale opportunity.
Accordingly, a general device for accomplishing this includes a processing engine to resolve a line of sight vector sent from a mobile endpoint and a system to aggregate that data as a platform, enabling a host of new scenarios predicated on the pointing information known for the device. The act of pointing with a device, such as the user's mobile phone, thus becomes a powerful vehicle for users to discover and interact with points of interest around the individual in a way that is tailored for the individual. Synchronization of data can also be performed to facilitate roaming and sharing of POV data and contacts among different users of the same service.
In a variety of embodiments described herein, 2-dimensional (2D), 3-dimensional (3D) or N-dimensional directional-based search, discovery, and interactivity services are enabled for endpoints in the system of potential interest to the user.
The pointing information and corresponding algorithms depend upon the assets available in a device for producing the pointing or directional information. The pointing information, however produced according to an underlying set of measurement components, and interpreted by a processing engine, can be one or more vectors. A vector or set of vectors can have a “width” or “arc” associated with the vector for any margin of error associated with the pointing of the device. A panning angle can be defined by a user with at least two pointing actions to encompass a set of points of interest, e.g., those that span a certain angle defined by a panning gesture by the user.
In one non-limiting embodiment, a portable electronic device includes a positional component for receiving positional information as a function of a location of the portable electronic device, a directional component that outputs direction information as a function of an orientation of the portable electronic device and a location based engine that processes the positional information and the direction information to determine a subset of points of interest relative to the portable electronic device as a function of at least the positional information and the direction information.
The positional component can be a positional GPS component for receiving GPS data as the positional information. The directional component can be a magnetic compass and/or a gyroscopic compass that outputs the direction information. The device can include acceleration component(s), such as accelerometer(s), that output acceleration information associated with movement of the portable electronic device. A separate sensor can also be used to further compensate for tilt and altitude adjustment calculations.
In one embodiment, the device includes a cache memory for dynamically storing a subset of endpoints of interest that are relevant to the portable electronic device and at least one interface to a network service for transmitting the positional information and the direction information to the network service. In return, based on real-time changes to the positional information and direction/pointing information, the device dynamically receives in the cache memory an updated subset of endpoints that are potentially relevant to the portable electronic device.
For instance, the subset of endpoints can be updated as a function of endpoints of interest within a pre-defined distance substantially along a vector defined by the orientation of the portable electronic device. Alternatively or in addition, the subset of endpoints can be updated as a function of endpoints of interest relevant to a current context of the portable electronic device. In this regard, the device can include a set of Representational State Transfer (REST)-based application programming interfaces (APIs), or other stateless set of APIs, so that the device can communicate with the service over different networks, e.g., Wi-Fi, a GPRS network, etc. or communicate with other users of the service, e.g., Bluetooth. For the avoidance of doubt, the embodiments are in no way limited to a REST based implementation, but rather any other state or stateful protocol could be used to obtain information from the service to the devices.
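As a non-limiting sketch of such a stateless exchange, each request can carry the full positional and directional state so that it can be issued over any available network (Wi-Fi, GPRS, a Bluetooth peer, etc.) without session affinity; the field names and default radius here are illustrative assumptions, not any defined wire format:

```python
import json

def build_poi_request(lat, lon, heading_deg, tilt_deg=0.0, radius_m=250):
    """Build a self-describing (REST-style, stateless) request body carrying
    position and direction, so any single request can be retried or rerouted
    over a different network without server-side session state."""
    body = {
        "position": {"lat": lat, "lon": lon},
        "direction": {"heading": heading_deg, "tilt": tilt_deg},
        "radius_m": radius_m,
    }
    return json.dumps(body)
```

Because every request is complete on its own, a long keep-alive is unnecessary, which is the property that makes switching between GPRS, Wi-Fi, and Bluetooth transports seamless.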
The directional component outputs direction information including compass information based on calibrated and compensated heading/directionality information. The directional component can also include direction information indicating upward or downward tilt information associated with a current upward or downward tilt of the portable electronic device, so that the services can detect when a user is pointing upwards or downwards with the device in addition to a certain direction. The height of the vector itself can also be taken into account to distinguish between an event of pointing with a device from the top of a building (likely pointing to other buildings, bridges, landmarks, etc.) and the same event from the bottom of the building (likely pointing to a shop at ground level), or towards a ceiling or floor to differentiate among shelves in a supermarket. A 3-axis magnetic field sensor can also be used to implement a compass to obtain tilt readings.
Secondary sensors, such as altimeters or pressure readers, can also be included in a mobile device and used to detect a height of the device, e.g., what floor a device is on in a parking lot or floor of a department store (changing the associated map/floorplan data). Where a device includes a compass with a planar view of the world (e.g., 2-axis compass), the inclusion of one or more accelerometers in the device can be used to supplement the motion vector measured for a device as a virtual third component of the motion vector, e.g., to provide measurements regarding a third degree of freedom. This option may be deployed where the provision of a 3-axis compass is too expensive, or otherwise unavailable.
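A non-limiting sketch of deriving device height, and hence a floor number, from a pressure reader follows, using the standard barometric altitude formula; the assumed per-floor height of 3 meters and the reference ground reading are illustrative assumptions:

```python
def pressure_altitude_m(p_hpa, p0_hpa=1013.25):
    """Barometric altitude (meters) from station pressure via the
    international barometric formula for the standard atmosphere."""
    return 44330.0 * (1.0 - (p_hpa / p0_hpa) ** (1.0 / 5.255))

def floor_number(p_hpa, ground_p_hpa, floor_height_m=3.0):
    """Estimate which floor the device is on relative to a reference
    pressure reading taken at ground level of the same building."""
    rise = pressure_altitude_m(p_hpa) - pressure_altitude_m(ground_p_hpa)
    return round(rise / floor_height_m)
```

Using a fresh ground-level reference reading, rather than the sea-level constant, cancels out weather-driven pressure drift, which dominates at the few-hPa scale that separates floors.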
In this respect, a gesturing component can also be included in the device to determine a current gesture of a user of the portable electronic device from a set of pre-defined gestures. For example, gestures can include zoom in, zoom out, panning to define an arc, all to help filter over potential subsets of points of interest for the user.
For instance, web services can effectively resolve vector coordinates sent from mobile endpoints into <x,y,z> or other coordinates using location data, such as GPS data, as well as configurable, synchronized POV information similar to that found in a GPS system in an automobile. In this regard, any of the embodiments can be applied similarly in any motor vehicle device. One non-limiting use is also facilitation of endpoint discovery for synchronization of data of interest to or from the user from or to the endpoint.
Among other algorithms for interpreting position/motion/direction information, as shown in FIG. 22, a device 2200 employing the direction based location based services 2202 as described herein in a variety of embodiments includes a way to discern between near objects, such as POI 2214, and far objects, such as POI 2216. Depending on the context of usage, the time, the user's past, the device state, the speed of the device, the nature of the POIs, etc., the service can determine a general distance associated with a motion vector. Thus, a motion vector 2206 will implicate POI 2214, but not POI 2216, and the opposite would be true for motion vector 2208.
In addition, a device 2200 includes an algorithm for discerning items substantially along a direction at which the device is pointing, and those not substantially along a direction at which the device is pointing. In this respect, while motion vector 2204 might implicate POI 2212, without a specific panning gesture that encompassed more directions/vectors, POIs 2214 and 2216 would likely not be within the scope of points of interest defined by motion vector 2204. The distance or reach of a vector can also be tuned by a user, e.g., via a slider control or other control, to quickly expand or contract the scope of endpoints encompassed by a given "pointing" interaction with the device.
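One non-limiting way to sketch such a direction-and-reach test is below, with POIs expressed in polar form (bearing and distance from the device); the arc width is an illustrative assumption, and the reach parameter corresponds to the user-tunable distance mentioned above:

```python
def implicated(poi_bearing, poi_dist, vec_bearing, reach_m, arc_deg=20.0):
    """True when a POI lies substantially along the pointing direction
    (within arc_deg of the vector's bearing) and within the vector's reach.
    Bearing arithmetic wraps correctly across the 0/360 degree boundary."""
    delta = abs((poi_bearing - vec_bearing + 180.0) % 360.0 - 180.0)
    return poi_dist <= reach_m and delta <= arc_deg / 2.0
```

Widening `reach_m` via a slider expands the scope to far objects such as POI 2216, while narrowing it restricts a pointing interaction to near objects such as POI 2214.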
In one non-limiting embodiment, the determination of at what or whom the user is pointing is performed by calculating an absolute "Look" vector, within a suitable margin of error, by a reading from an accelerometer's tilt and a reading from the magnetic compass. Then, an intersection of endpoints determines an initial scope, which can be further refined depending on the particular service employed, i.e., any additional filter. For instance, for an apartment search service, endpoints falling within the look vector that are not apartments ready for lease can be pre-filtered.
In addition to the look vector determination, the engine can also compensate for, or begin the look vector at, where the user is by establishing positioning (within ~15 feet) through an A-GPS stack (or other location based or GPS subsystem, including those with assistance strategies) and also compensate for any significant movement/acceleration of the device, where such information is available.
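As a non-limiting sketch, the compass heading and accelerometer-derived tilt can be combined into such a look vector as follows; the axis convention (x east, y north, z up) is an assumption chosen for illustration:

```python
import math

def look_vector(heading_deg, pitch_deg):
    """Unit "look" vector from a compass heading (degrees clockwise from
    north) and a pitch angle (positive = device tilted upward), as read
    from the magnetic compass and accelerometer respectively."""
    h, p = math.radians(heading_deg), math.radians(pitch_deg)
    return (math.cos(p) * math.sin(h),   # x: east component
            math.cos(p) * math.cos(h),   # y: north component
            math.sin(p))                 # z: upward component
```

Intersecting a ray along this vector, originating at the A-GPS position, with the endpoint database then yields the initial scope of candidate POIs.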
As mentioned, in another aspect, a device can include a client side cache of potentially relevant points of interest, which, based on the user's movement history can be dynamically updated. The context, such as geography, speed, etc. of the user can be factored in when updating. For instance, if a user's velocity is 2 miles an hour, the user may be walking and interested in updates at a city block by city block level, or at a lower level granularity if they are walking in the countryside. Similarly, if a user is moving on a highway at 60 miles per hour, the block-by-block updates of information are no longer desirable, but rather a granularity can be provided and predictively cached on the device that makes sense for the speed of the vehicle.
In an automobile context, the location becomes the road on which the automobile is travelling, and the particular items are the places and things that are passed on the roadside much like products in a particular retail store on a shelf or in a display. The pointing based services thus create a virtual "billboard" opportunity for items of interest generally along a user's automobile path. Proximity to location can lead to an impulse buy, e.g., a user might stop by a museum they are passing and pointing at with their device, if offered a discount on admission.
In various alternative embodiments, gyroscopic or magnetic compasses can provide directional information. A REST based architecture enables data communications to occur over different networks, such as Wi-Fi and GPRS architectures. REST based APIs can be used, though any stateless messaging can be used that does not require a long keep alive for communicated data/messages. This way, if a network goes down, e.g., where GPRS antennae become unavailable, seamless switching can occur to Wi-Fi or Bluetooth networks so that the pointing based services enabled by the embodiments described herein can continue.
A device as provided herein according to one or more embodiments can include a file system to interact with a local cache, store updates for synchronization to the service, exchange information by Bluetooth with other users of the service, etc. Accordingly, operating from a local cache, at least the data in the local cache is still relevant at a time of disconnection, and thus, the user can still interact with the data. Finally, the device can synchronize according to any updates made at a time of re-connection to a network, or to another device that has more up to date GPS data, POI data, etc. In this regard, a switching architecture can be adopted for the device to perform a quick transition from connectivity from one networked system (e.g., cell phone towers) to another computer network (e.g., Wi-Fi) to a local network (e.g., mesh network of Bluetooth connected devices).
With respect to user input, a set of soft keys, touch keys, etc. can be provided to facilitate in the directional-based pointing services provided herein. A device can include a windowing stack in order to overlay different windows, or provide different windows of information regarding a point of interest (e.g., hours and phone number window versus interactive customer feedback window). Audio can be rendered or handled as input by the device. For instance, voice input can be handled by the service to explicitly point without the need for a physical movement of the device. For instance, a user could say into a device “what is this product right in front of me? No, not that one, the one above it” and have the device transmit current direction/movement information to a service, which in turn intelligently, or iteratively, determines what particular item of interest the user is pointing at, and returns a host of relevant information about the item.
One non-limiting way for determining a set of points of interest is illustrated in FIG. 23. In FIG. 23, a device 2300 is pointed (e.g., point and click) in a direction D1, which according to the device or service parameters, implicitly defines an area within arc 2310 and distance 2320 that encompasses POI 2330, but does not encompass POI 2332. Such an algorithm will also need to determine any edge case POIs, i.e., whether POIs such as POI 2334 are within the scope of pointing in direction D1, where the POI only partially falls within the area defined by arc 2310 and distance 2320.
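One non-limiting way to handle such edge case POIs is to give each POI an angular half-width of its own, so that a POI partially inside the arc can be included or excluded by policy; all parameter names here, and the `include_partial` policy flag, are illustrative assumptions:

```python
def in_pointing_scope(poi_bearing, poi_half_width, poi_dist,
                      pointing_bearing, arc_deg, max_dist,
                      include_partial=True):
    """Decide whether a POI falls in the area swept by arc_deg around
    pointing_bearing out to max_dist.  An edge-case POI whose angular
    extent only partially overlaps the arc is included when
    include_partial is True, and excluded otherwise."""
    delta = abs((poi_bearing - pointing_bearing + 180.0) % 360.0 - 180.0)
    margin = poi_half_width if include_partial else -poi_half_width
    return poi_dist <= max_dist and delta <= arc_deg / 2.0 + margin
```

With `include_partial=True` a POI like POI 2334 is swept in as soon as any part of it overlaps the arc; with `include_partial=False` only POIs wholly inside the arc, like POI 2330, qualify.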
Other gestures that can be of interest for a gesturing subsystem include recognizing a user's gesture for zoom in or zoom out. Zoom in/zoom out can be done in terms of distance, as in FIG. 24. In FIG. 24, a device 2400 pointed in direction D1 may include a zoomed in view which includes points of interest within distance 2420 and arc 2410, or a medium zoomed view representing points of interest between distances 2420 and 2422, or a zoomed out view representing points of interest beyond distance 2422. These zoom zones correspond to POIs 2430, 2432 and 2434, respectively. More or fewer zones can be considered depending upon a variety of factors, the service, user preference, etc.
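The distance bands above can be sketched, as a non-limiting example, by classifying each POI's distance against the two zone boundaries; the default boundary values are illustrative assumptions standing in for distances 2420 and 2422:

```python
def zoom_zone(poi_dist, near_m=100.0, far_m=500.0):
    """Classify a POI into the zoomed-in, medium, or zoomed-out band
    by its distance from the device along the pointing direction."""
    if poi_dist <= near_m:
        return "zoomed_in"     # within distance 2420
    elif poi_dist <= far_m:
        return "medium"        # between distances 2420 and 2422
    return "zoomed_out"        # beyond distance 2422
```

A zoom gesture would then simply shift `near_m` and `far_m`, or add and remove bands, rather than recompute the POI set from scratch.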
For another non-limiting example, with location information and direction information, a user can input a first direction via a click, and then a second direction after moving the device via a second click, which in effect defines an arc 2510 for objects of interest in the system as illustrated in FIG. 25. For instance, via a first pointing act by the user at time t1 in direction D1 and a second pointing act at time t2 by the user in direction D2, an arc 2510 is implicitly defined. The area of interest implicitly includes a search for points of interest within a distance 2520, which can be zoomed in and out, or selected by the service based on a known granularity of interest, selected by the user, etc. This can be accomplished with a variety of forms of input to define the two directions. For instance, the first direction can be defined upon a click-and-hold button event, or other engage-and-hold user interface element, and the second direction can be defined upon release of the button. Similarly, two consecutive clicks corresponding to the two different directions D1 and D2 can also be implemented.
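The test for whether a POI's bearing falls inside the arc swept from D1 to D2 can be sketched as follows, as a non-limiting example; sweeping clockwise from the first to the second click is an assumed convention, and modular arithmetic handles arcs that cross north:

```python
def within_panned_arc(poi_bearing, d1_deg, d2_deg):
    """True when poi_bearing (degrees) lies inside the arc swept clockwise
    from the first click direction d1_deg to the second click direction
    d2_deg, including arcs that wrap across the 0/360 boundary."""
    span = (d2_deg - d1_deg) % 360.0      # angular width of the panned arc
    offset = (poi_bearing - d1_deg) % 360.0
    return offset <= span
```

Combined with a distance bound such as distance 2520, this yields the set of POIs enclosed by a click-and-hold-then-release panning gesture.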
Also, instead of focusing on real distance, zooming in or out could also represent a change in terms of granularity, or size, or hierarchy of objects. For example, a first pointing gesture with the device may result in a shopping mall appearing, but with another gesture, a user could carry out a recognizable gesture to gain or lose a level of hierarchical granularity with the points of interest on display. For instance, after such gesture, the points of interest could be zoomed in to the level of the stores at the shopping mall and what they are currently offering.
In addition, a variety of even richer behaviors and gestures can be recognized when acceleration of the device in various axes can be discerned. Panning, arm extension/retraction, swirling of the device, backhand tennis swings, breaststroke arm action, and golf swing motions could all signify something unique in terms of the behavior of the pointing device, to name just a few motions that could be implemented in practice. Thus, any of the embodiments herein can define a set of gestures that serve to help the user interact with a set of services built on the pointing platform, to help users easily gain information about points of interest in their environment.
Furthermore, with relatively accurate upward and downward tilt of the device, in addition to directional information such as calibrated and compensated heading/directional information, other services can be enabled. Typically, if a device is at ground level, the user is outside, and the device is "pointed" up towards the top of buildings, the granularity of information about points of interest sought by the user (building level) is different than if the user were pointing at the first floor shops of the building (shops level), even where the same compass direction is implicated. Similarly, where a user is at the top of a landmark such as the Empire State building, a downward tilt at the street level (street level granularity) would implicate information about different points of interest than if the user of the device pointed with relatively no tilt at the Statue of Liberty (landmark/building level of granularity).
Also, when a device is moving in a car, it may appear that direction is changing as the user maintains a pointing action on a single location, yet the user is still pointing at the same thing despite the displacement. Thus, time varying location can be factored into the mathematics of the engine that resolves at what the user is pointing with the device, compensating the user experience relative to which all items are positioned.
Accordingly, armed with the device's position, one or more web or cloud services can analyze the vector information to determine at what or whom the user is looking/pointing. The service can then provide additional information such as ads, specials, updates, menus, happy hour choices, etc., depending on the endpoint selected, the context of the service, the location (urban or rural), the time (night or day), etc. As a result, instead of a blank contextless Internet search, a form of real-time visual search for users in real 3-D environments is provided.
In one non-limiting embodiment, the direction based pointing services are implemented in connection with a pair of glasses, headband, etc. having a corresponding display means that acts in concert with the user's looking to highlight or overlay features of interest around the user.
As shown in FIG. 26, once a set of objects is determined from the pointing information according to a variety of contexts of a variety of services, a mobile device 2600 can display the objects via representation 2602 according to a variety of user experiences tailored to the service at issue. For instance, a virtual camera experience can be provided, where POI graphics or information can be positioned relative to one another to simulate an imaging experience. A variety of other user interface experiences can be provided based on the pointing direction as well.
For instance, a set of different choices are shown in FIG. 27. UIs 2700 and 2702 illustrate navigation of hierarchical POI information. For instance, level 1 categories may include category 1, category 2, category 3, category 4 and category 5; a user can select around the categories with a thumb-wheel, up-down control, or the like, and choose one, such as category 2. Then, subcategory 1, subcategory 2, subcategory 3 and subcategory 4 are displayed as subcategories of category 2. Then, if the user selects, for instance, subcategory 4, perhaps few enough POIs, such as buildings 2700 and 2710, are found in the subcategory in order to display on a 2-D map UI 2704 along the pointing direction, or alternatively as a 3-D virtual map view 2706 along the pointing direction.
Once a single POI is implicated or selected, a full screen view for the single POI can be displayed, such as the exemplary UI 2800. UI 2800 can have one or more of any of the following representative areas. UI 2800 can include a static POI image 2802, such as a trademark of a store or a picture of a person. UI 2800 can also include other media, and a static POI information portion 2804 for information that tends not to change, such as restaurant hours, menu, contact information, etc. In addition, UI 2800 can include an information section for dynamic information to be pushed to the user for the POI, e.g., coupons, advertisements, offers, sales, etc. In addition, a dynamic interactive information area 2808 can be included where the user can fill out a survey, provide feedback to the POI owner, request the POI to contact the user, make a reservation, buy tickets, etc. UI 2800 also can include a representation of the direction information output by the compass for reference purposes. Further, UI 2800 can include other third party static or dynamic content in area 2812.
When things change from the perspective of either the service or the client, a synchronization process can bring the client or service, respectively, up to date. In this way, an ecosystem is enabled where a user can point at an object or point of interest, gain information about it that is likely to be relevant to the user, interact with the information concerning the point of interest, and add value to the services ecosystem where the user interacts. The system thus advantageously supports both static and dynamic content.
Other user interfaces can be considered such as left-right, or up-down arrangements for navigating categories or a special set of soft-keys can be adaptively provided.
To support processing of vector information and aggregating POI databases from third parties, a variety of storage techniques, such as relational storage techniques, can be used. For instance, Virtual Earth data can be used for mapping, and aggregation of POI data can occur from third parties such as Tele Atlas, NavTeq, etc. In this regard, businesses not yet in the POI database will want to be discovered; thus, the service provides a similar, but far superior from a spatial relevance standpoint, Yellow Pages experience, where businesses will desire to have their additional information, such as menus, price sheets, coupons, pictures, virtual tours, etc., accessible via the system.
In addition, a synchronization platform or framework can keep the roaming caches in sync, thereby capturing what users are looking at and efficiently processing changes. Or, where a user goes offline, local changes can be recorded, and when the user goes back online, such local changes can be synchronized to the network or service store. Also, since the users are in effect pulling information they care about in the here and now through the act of pointing with the device, the system generates high cost per thousand impressions (CPM) rates as compared to other forms of demographic targeting. Moreover, the system drives impulse buys, since the user may not be physically present in a store, but by being nearby and pointing at the store, information about a sale concerning an object in the store can be sent to the user.
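The offline-recording and later-synchronization behavior above can be sketched as a small change queue; the class and field names are hypothetical, standing in for whatever store the service actually uses:

```python
from dataclasses import dataclass, field

@dataclass
class RoamingCache:
    """Sketch: record local changes while offline and replay them to the
    service store on reconnect. All names here are illustrative."""
    online: bool = False
    pending: list = field(default_factory=list)

    def record(self, change, service_store):
        if self.online:
            service_store.append(change)   # push immediately when connected
        else:
            self.pending.append(change)    # queue locally while offline

    def go_online(self, service_store):
        self.online = True
        service_store.extend(self.pending)  # replay queued local changes
        self.pending.clear()

service_store = []
cache = RoamingCache()
cache.record({"poi": "cafe", "action": "activated coupon"}, service_store)
cache.go_online(service_store)  # the queued change reaches the service
```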
As mentioned, different location subsystems, such as tower triangulation, GPS, A-GPS, E-GPS, etc., have different tolerances. For instance, with GPS, tolerances can be achieved to about 10 meters. With A-GPS, tolerances can be tightened to about 12 feet. In turn, E-GPS may have yet a different error margin. Compensating for the different tolerances is part of the interpretation engine for determining the intersection of a pointing vector and a set of points of interest. In addition, the distance to project out the pointing vector can be explicit, configurable, contextual, etc.
In this regard, the various embodiments described herein can employ any algorithm for distinguishing among boundaries of the endpoints, such as bounding boxes, rectangles, triangles, circles, etc. A default radius, e.g., 150 feet, could be selected, and such value can be configured or be context sensitive to the service provided. On-line real estate sites can be leveraged for existing POI information. Since different POI databases may track different information at different granularities, a way of normalizing the POI data according to one convention or standard can also be implemented, so that, for example, the residential real estate location data of Zillow can be integrated with GPS information for all Starbucks locations by country.
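One way to combine the pointing vector, a circular POI boundary, and a location-subsystem error margin is the planar ray test sketched below; the function name, margins, and flat x/y coordinates are assumptions for illustration, not the disclosure's actual interpretation engine:

```python
import math

DEFAULT_RADIUS_FT = 150.0  # configurable default boundary, per the text

def hits(device_xy, heading_deg, pois, gps_error_ft=33.0, reach_ft=1000.0):
    """Return names of POIs whose circular boundary, inflated by the
    location subsystem's error margin, the pointing ray passes within."""
    hx = math.sin(math.radians(heading_deg))  # heading as a unit vector
    hy = math.cos(math.radians(heading_deg))  # (0 deg = +y, i.e., north)
    dx0, dy0 = device_xy
    selected = []
    for name, (px, py), radius in pois:
        vx, vy = px - dx0, py - dy0
        t = vx * hx + vy * hy           # distance along the ray to closest approach
        if 0.0 <= t <= reach_ft:        # in front of the device, within reach
            dist = math.hypot(vx - t * hx, vy - t * hy)  # perpendicular miss distance
            if dist <= radius + gps_error_ft:
                selected.append(name)
    return selected

# Pointing north: a shop slightly east of the ray is selected; a POI
# behind the device is not.
selected = hits((0.0, 0.0), 0.0,
                [("shop", (10.0, 500.0), DEFAULT_RADIUS_FT),
                 ("behind", (0.0, -300.0), DEFAULT_RADIUS_FT)])
```

Swapping the `gps_error_ft` value per subsystem (GPS, A-GPS, E-GPS) is one way the differing tolerances could be compensated for.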
In addition, similar techniques can be implemented in a moving vehicle client that includes GPS, compass, accelerometer, etc. By filtering based on scenarios (e.g., "I need gas"), different subsets of points of interest (e.g., gas stations) can be determined for the user based not only on distance, but on the actual time it may take to get to the point of interest. In this regard, while a gas station may be 100 yards to the right off the highway, the car may have already passed the corresponding exit. More useful information to provide is thus which gas station will take the least amount of time to reach from the current location based on direction/location, so as to provide predictive points of interest that are up ahead on the road, rather than already aged points of interest that would require turning around from one's destination in order to reach them.
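The vehicle scenario above amounts to ranking by drive time while discarding already-passed POIs. A minimal sketch, assuming each candidate carries a precomputed drive-time estimate and a "passed" flag (hypothetical field names):

```python
def predictive_pois(candidates):
    """Rank scenario-matching POIs by estimated drive time from the
    current location/direction, dropping those already passed (which
    would require turning around)."""
    reachable = [c for c in candidates if not c["passed"]]
    return sorted(reachable, key=lambda c: c["drive_time_min"])

stations = [
    {"name": "Station A", "drive_time_min": 12, "passed": False},
    {"name": "Station B", "drive_time_min": 2,  "passed": True},   # exit missed
    {"name": "Station C", "drive_time_min": 7,  "passed": False},
]
best = predictive_pois(stations)
```

Station B is nearest in straight-line terms but is filtered out, matching the "gas station 100 yards away behind a missed exit" example.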
For existing motor vehicle navigation devices, or other conventional portable GPS navigation devices, where a device does not natively include directional means such as a compass, the device can have an extension slot that accommodates direction information from an external directional device, such as a compass. Similarly, for laptops or other portable electronic devices, such devices can be outfitted with a card or board with a slot for a compass. While any of the services described herein can make web service calls as part of the pointing and retrieval of endpoint process, as mentioned, one advantageous feature of a user's locality in real space is that it is inherently more limited than a general Internet search for information. As a result, a limited amount of data can be predictively maintained on a user's device in cache memory and properly aged out as data becomes stale.
In another aspect of any of the embodiments described herein, because stateless messaging is employed, if communications drop with one network, the device can begin further communicating via another network. For instance, a device has two channels, and a user gets on a bus and no longer has GPRS or GPS connectivity. Nonetheless, the user is able to get the information the device needs from some other channel. Just because a tower or satellites are down does not mean that the device cannot connect through an alternative channel, e.g., the bus's GPS location information via Bluetooth.
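Because the messaging is stateless, failover can be as simple as retrying the identical request on the next channel; the channel functions below are hypothetical stand-ins for GPRS, Wi-Fi, or Bluetooth transports:

```python
def fetch_with_failover(request, channels):
    """Retry the same stateless request across channels in preference
    order until one succeeds."""
    for channel in channels:
        try:
            return channel(request)
        except ConnectionError:
            continue  # tower or satellites down; try the next channel
    raise ConnectionError("no channel available")

def gprs(req):
    raise ConnectionError("no GPRS coverage on the bus")

def bluetooth(req):
    # e.g., relay through the bus's own GPS-equipped system
    return {"via": "bluetooth", "req": req}

result = fetch_with_failover("poi-update", [gprs, bluetooth])
```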
With respect to exemplary mobile client architectures, a representative device can include, as described variously herein, client-side storage for housing and providing fast access to cached POI data in the current region, including associated dynamically updated or static information, such as annotations, coupons from businesses, etc. This includes usage data tracking and storage. In addition, regional data can be a cached subset of the larger service data, continually updated based on the region in which the client is roaming. For instance, POI data could include, as a non-limiting example, the following information:
POI coordinates and data    // {−70.26322, 43.65412, "STARBUCK'S"}
Localized annotations       // Menu, prices, hours of operation, etc.
Coupons and ads             // Classes of coupons (new user, returning, etc.)
A device can also include support for different kinds of information, e.g., blob versus structured information (blob for storage and media; structured for tags, annotations, etc.).
A device can also include usage data and preferences to hold settings as well as usage data such as coupons “activated,” waypoints, businesses encountered per day, other users encountered, etc. to be analyzed by the cloud services for business intelligence analysis and reporting.
A device can also include a continuous update mechanism, which is a service that keeps the client's cached copy of the current region updated with the latest data. Among other ways, this can be achieved with a ping-to-pull model that pre-fetches and swaps out the client's cached region using travel direction and speed to facilitate roaming among different regions. This is effectively a paging mechanism for upcoming POIs. This also includes sending a new or modified POI for the region (with annotations and coupons), sending a new or modified annotation for the POIs (with coupons), or sending a new or modified coupon for the POI.
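The ping-to-pull paging idea can be sketched as predicting the next cache region from heading and speed; the grid size, lookahead window, and helper names are illustrative assumptions:

```python
import math

def next_region(current, heading_deg, speed_mph, lookahead_min=10.0):
    """Predict the grid region the device will occupy next from travel
    direction and speed (0 deg = north; grid cells are assumed 5 mi)."""
    miles = speed_mph * lookahead_min / 60.0
    dx = miles * math.sin(math.radians(heading_deg))
    dy = miles * math.cos(math.radians(heading_deg))
    region_size = 5.0  # miles per cached region (assumed)
    cx, cy = current
    return (round((cx + dx) / region_size), round((cy + dy) / region_size))

cache = {}

def ensure_region(region, fetch):
    """Page the predicted region's POI data in before the user arrives."""
    if region not in cache:
        cache[region] = fetch(region)

# Heading due east at 60 mph: pre-fetch the region two cells to the east.
ensure_region(next_region((0.0, 0.0), 90.0, 60.0), lambda r: ["poi data"])
```

Swapping out the region the user is leaving (the age-out side of paging) would mirror this with a delete on cells behind the path.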
A device can also include a Hardware Abstraction Layer (HAL) having components responsible for abstracting the way the client communicates with the measuring instruments, e.g., the GPS driver for positioning and line-of-sight (LOS) accuracy (e.g., open eGPS), a magnetic compass for heading and rotational information (e.g., gyroscopic), and one or more accelerometers for gestured input and tilt (enabling 3D positional algorithms, assuming a gyroscopic compass).
As described earlier, a device can also include methods/interfaces to make REST calls via GPRS/Wi-Fi and a file system and storage for storing and retrieving the application data and settings.
A device can also include user input and methods to map input to the virtual keys. For instance, one non-limiting way to accomplish user input is to have softkeys as follows, though it is to be understood a great variety of user inputs can be used to achieve interaction with the user interfaces of the pointing based services.
SK up/down                   // Up and down on choices
SK right, SK ok/confirm      // Choose an option or drill down/next page
SK left, SK cancel/back      // Go back to a previous window, cancel
Exit / Incoming Call events  // Exit the app or minimize
In addition, a representative device can include a graphics and windowing stack to render the client side UI, as well as an audio stack to play sounds/alerts.
As mentioned, such a device may also include spatial and math computational components, including a set of APIs to perform 3D collision testing between subdivided surfaces such as spherical shells (e.g., a simple hit testing model to adopt for boundary definitions for POIs), rotate points, and cull as appropriate from conic sections.
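The simple hit testing model against spherical shells mentioned above can be sketched as a standard ray-sphere intersection test; the function and parameter names are illustrative:

```python
import math

def ray_hits_sphere(origin, direction, center, radius):
    """Does the pointing ray from `origin` along `direction` intersect a
    POI's spherical boundary of the given radius?"""
    ox, oy, oz = origin
    dx, dy, dz = direction
    norm = math.sqrt(dx * dx + dy * dy + dz * dz)
    dx, dy, dz = dx / norm, dy / norm, dz / norm   # normalize the ray
    cx, cy, cz = center
    lx, ly, lz = cx - ox, cy - oy, cz - oz         # origin -> sphere center
    t = lx * dx + ly * dy + lz * dz                # projection onto the ray
    if t < 0:
        return False                               # sphere is behind the device
    d2 = (lx * lx + ly * ly + lz * lz) - t * t     # squared miss distance
    return d2 <= radius * radius
```

A POI ten units ahead with a unit-radius shell tests positive; the same shell offset five units sideways does not, which is the culling behavior the collision APIs rely on.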
A representative interaction with a pointing device as provided in one or more embodiments herein is illustrated in FIG. 29. At 2900, location/direction vector information is determined based on the device measurements. This information can be recorded so that a user's path or past can be used when predictively factoring what the user will be interested in next, as illustrated at 2910. The prediction can be made based on a variety of other factors as well, such as context, application, user history, preferences, path, time of day, proximity, etc., such that the object(s) or POI(s) a user is most likely to interact with in the future are identified.
At 2920, based on the object(s) or POI(s) identified at 2910, predictive information is pre-fetched or otherwise pre-processed for use with the predicted services with respect to such object(s) or POI(s). Then, based on current vector information, or more informally, the act of pointing by the user, at 2930, an object or point of interest is selected based on any of a variety of "line of sight" algorithms that determine what POI(s) are currently within (or outside) the vector path. It is noted that occlusion culling techniques can optionally be used to facilitate overlay techniques. In this regard, at 2940, based at least in part on the pre-fetched or pre-processed predictive information, services are performed with respect to the object(s) or POI(s).
Additionally, whether the point of interest at issue falls within the vector can factor in the error in precision of any of the measurements, e.g., different GPS subsystems have different precision. In this regard, one or more items or points of interest may be found along the vector path or arc, within a certain distance depending on context. As mentioned, at 2940, any of a great variety of services can be performed with respect to any point of interest selected by the user via a user interface. Where only one point of interest is concerned, the service can be automatically performed with respect to that point of interest.
FIG. 30 is a block diagram of an example region based prediction algorithm 3000 that takes into account user path and heading, e.g., as a user has moved from age out candidate 3010 to the present location 3002, and based on the current user path, locations 3004 and 3006 are predicted for the user. Accordingly, based on the direction and location based path history, POI data for locations 3004 and 3006 can be pre-fetched to local memory of the device. Similarly, location 3010 becomes the topic of a decision as to when to age out its data. Such an age out decision can also be made based on the amount of unused space remaining in memory of the device. While FIG. 30 illustrates a path based algorithm, as mentioned, other algorithms can be used to predict what POIs will be of interest as well.
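A minimal sketch of such a path based prediction step, assuming a simple linear extrapolation of the last two path points and a fixed-capacity cache as a stand-in for the free-memory test (all names illustrative):

```python
def predict_and_age(path, cached_regions, capacity=3):
    """Extrapolate the next location from the last two path points,
    pre-fetch it into the cache, and age out the oldest cached region
    once capacity is exceeded."""
    (x0, y0), (x1, y1) = path[-2], path[-1]
    predicted = (2 * x1 - x0, 2 * y1 - y0)  # continue the current heading
    if predicted not in cached_regions:
        cached_regions.append(predicted)     # pre-fetch POI data here
    while len(cached_regions) > capacity:
        cached_regions.pop(0)                # age out the oldest region
    return predicted

# Moving east one cell per step: the cell just ahead is pre-fetched,
# and the trailing cell is eventually aged out.
regions = [(0, 0), (1, 0)]
predicted = predict_and_age([(0, 0), (1, 0)], regions)
```

Fancier predictors (turn detection, road topology) could replace the extrapolation without changing the age-out side.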
For existing motor vehicle navigation devices, or other conventional portable GPS navigation devices, where a device does not natively include directional means such as a compass, the device can have an extension slot that accommodates direction information from an external directional device, such as a compass. Similarly, for laptops or other portable electronic devices, such devices can be outfitted with a card or board with a slot for a compass. While any of the services described herein can make web service calls as part of the pointing and retrieval of endpoint process, as mentioned, limited bandwidth may degrade the interactive experience. As a result, a limited amount of data can be predictively maintained on a user's device in cache memory and optionally aged out as data becomes stale, e.g., when relevance to the user falls below a threshold.
As described in various embodiments herein, FIG. 31 illustrates a process for a device when location (e.g., GPS) and direction (e.g., compass) events occur. Upon the detection of a location and direction event, at 3100, for POIs in the device's local cache, a group of POIs is determined that pass an intersection algorithm for the direction of pointing of the device. At 3110, POIs in the group can be represented in some fashion on a UI, e.g., a full view if only one POI, a categorized view, a 2-D map view, a 3-D perspective view, or user images if the POIs are other users, etc. The possibilities for representation are limitless; the embodiments described herein are intuitive, based on the general notion of pointing based direction services.
At 3120, upon selection of a POI, static content is determined and any dynamic content is acquired via synchronization. When new data becomes available, it is downloaded so the device stays up to date. At 3130, POI information is filtered further by user specific information (e.g., whether it is the user's first time at the store, a returning customer, a loyalty program member, a live baseball game offer for team clothing discounts, etc.). At 3140, static and dynamic content that is up to date is rendered for the POI. In addition, updates and/or interaction with POI information are allowed, which can be synced back to the service.
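The user-specific filtering at 3130 can be sketched as matching each piece of dynamic content against the user's state; the audience tags and field names below are assumptions for illustration:

```python
def filter_offers(offers, user):
    """Keep only the POI's dynamic content that applies to this user,
    e.g., first-time visitor vs. returning loyalty member."""
    keep = []
    for offer in offers:
        audience = offer["audience"]
        if audience == "any":
            keep.append(offer)
        elif audience == "first_visit" and user["visits"] == 0:
            keep.append(offer)
        elif audience == "loyalty" and user["loyalty_member"]:
            keep.append(offer)
    return keep

offers = [
    {"text": "10% off first purchase", "audience": "first_visit"},
    {"text": "Members earn double points", "audience": "loyalty"},
    {"text": "Happy hour 4-6 pm", "audience": "any"},
]
# A first-time, non-member visitor sees the first-visit and general offers.
visible = filter_offers(offers, {"visits": 0, "loyalty_member": False})
```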
Exemplary Networked and Distributed Environments

One of ordinary skill in the art can appreciate that the various embodiments of methods and devices for pointing based services and related embodiments described herein can be implemented in connection with any computer or other client or server device, which can be deployed as part of a computer network or in a distributed computing environment, and can be connected to any kind of data store. In this regard, the various embodiments described herein can be implemented in any computer system or environment having any number of memory or storage units, and any number of applications and processes occurring across any number of storage units. This includes, but is not limited to, an environment with server computers and client computers deployed in a network environment or a distributed computing environment, having remote or local storage.
FIG. 32 provides a non-limiting schematic diagram of an exemplary networked or distributed computing environment. The distributed computing environment comprises computing objects 3210, 3212, etc. and computing objects or devices 3220, 3222, 3224, 3226, 3228, etc., which may include programs, methods, data stores, programmable logic, etc., as represented by applications 3230, 3232, 3234, 3236, 3238. It can be appreciated that objects 3210, 3212, etc. and computing objects or devices 3220, 3222, 3224, 3226, 3228, etc. may comprise different devices, such as PDAs, audio/video devices, mobile phones, MP3 players, laptops, etc.
Each object 3210, 3212, etc. and computing objects or devices 3220, 3222, 3224, 3226, 3228, etc. can communicate with one or more other objects 3210, 3212, etc. and computing objects or devices 3220, 3222, 3224, 3226, 3228, etc. by way of the communications network 3240, either directly or indirectly. Even though illustrated as a single element in FIG. 32, network 3240 may comprise other computing objects and computing devices that provide services to the system of FIG. 32, and/or may represent multiple interconnected networks, which are not shown. Each object 3210, 3212, etc. or 3220, 3222, 3224, 3226, 3228, etc. can also contain an application, such as applications 3230, 3232, 3234, 3236, 3238, that might make use of an API, or other object, software, firmware and/or hardware, suitable for communication with or implementation of the predicted interaction model as provided in accordance with various embodiments.
There are a variety of systems, components, and network configurations that support distributed computing environments. For example, computing systems can be connected together by wired or wireless systems, by local networks or widely distributed networks. Currently, many networks are coupled to the Internet, which provides an infrastructure for widely distributed computing and encompasses many different networks, though any network infrastructure can be used for exemplary communications made incident to the techniques as described in various embodiments.
Thus, a host of network topologies and network infrastructures, such as client/server, peer-to-peer, or hybrid architectures, can be utilized. In a client/server architecture, particularly a networked system, a client is usually a computer that accesses shared network resources provided by another computer, e.g., a server. In the illustration of FIG. 32, as a non-limiting example, computers 3220, 3222, 3224, 3226, 3228, etc. can be thought of as clients and computers 3210, 3212, etc. can be thought of as servers, where servers 3210, 3212, etc. provide data services, such as receiving data from client computers 3220, 3222, 3224, 3226, 3228, etc., storing of data, processing of data, and transmitting data to client computers 3220, 3222, 3224, 3226, 3228, etc., although any computer can be considered a client, a server, or both, depending on the circumstances. Any of these computing devices may be processing data, or requesting services or tasks that may implicate the predicted interaction model and related techniques as described herein for one or more embodiments.
A server is typically a remote computer system accessible over a remote or local network, such as the Internet or wireless network infrastructures. The client process may be active in a first computer system, and the server process may be active in a second computer system, communicating with one another over a communications medium, thus providing distributed functionality and allowing multiple clients to take advantage of the information-gathering capabilities of the server. Any software objects utilized pursuant to the direction based services can be provided standalone, or distributed across multiple computing devices or objects.
In a network environment in which the communications network/bus 3240 is the Internet, for example, the servers 3210, 3212, etc. can be Web servers with which the clients 3220, 3222, 3224, 3226, 3228, etc. communicate via any of a number of known protocols, such as the hypertext transfer protocol (HTTP). Servers 3210, 3212, etc. may also serve as clients 3220, 3222, 3224, 3226, 3228, etc., as may be characteristic of a distributed computing environment.
Exemplary Computing Device

As mentioned, various embodiments described herein apply to any device wherein it may be desirable to perform pointing based services and predict interactions with points of interest. It should be understood, therefore, that handheld, portable and other computing devices and computing objects of all kinds are contemplated for use in connection with the various embodiments described herein, i.e., anywhere that a device may request pointing based services. Accordingly, the general purpose remote computer described below in FIG. 33 is but one example, and the embodiments of the subject disclosure may be implemented with any client having network/bus interoperability and interaction.
Although not required, any of the embodiments can partly be implemented via an operating system, for use by a developer of services for a device or object, and/or included within application software that operates in connection with the operable component(s). Software may be described in the general context of computer-executable instructions, such as program modules, being executed by one or more computers, such as client workstations, servers or other devices. Those skilled in the art will appreciate that network interactions may be practiced with a variety of computer system configurations and protocols.
FIG. 33 thus illustrates an example of a suitable computing system environment 3300 in which one or more of the embodiments may be implemented, although as made clear above, the computing system environment 3300 is only one example of a suitable computing environment and is not intended to suggest any limitation as to the scope of use or functionality of any of the embodiments. Neither should the computing environment 3300 be interpreted as having any dependency or requirement relating to any one or combination of components illustrated in the exemplary operating environment 3300.
With reference to FIG. 33, an exemplary remote device for implementing one or more embodiments herein can include a general purpose computing device in the form of a handheld computer 3310. Components of handheld computer 3310 may include, but are not limited to, a processing unit 3320, a system memory 3330, and a system bus 3321 that couples various system components including the system memory to the processing unit 3320.
Computer 3310 typically includes a variety of computer readable media, which can be any available media that can be accessed by computer 3310. The system memory 3330 may include computer storage media in the form of volatile and/or nonvolatile memory such as read only memory (ROM) and/or random access memory (RAM). By way of example, and not limitation, memory 3330 may also include an operating system, application programs, other program modules, and program data.
A user may enter commands and information into the computer 3310 through input devices 3340. A monitor or other type of display device is also connected to the system bus 3321 via an interface, such as output interface 3350. In addition to a monitor, computers may also include other peripheral output devices, such as speakers and a printer, which may be connected through output interface 3350.
The computer 3310 may operate in a networked or distributed environment using logical connections to one or more other remote computers, such as remote computer 3370. The remote computer 3370 may be a personal computer, a server, a router, a network PC, a peer device or other common network node, or any other remote media consumption or transmission device, and may include any or all of the elements described above relative to the computer 3310. The logical connections depicted in FIG. 33 include a network 3371, such as a local area network (LAN) or a wide area network (WAN), but may also include other networks/buses. Such networking environments are commonplace in homes, offices, enterprise-wide computer networks, intranets and the Internet.
As mentioned above, while exemplary embodiments have been described in connection with various computing devices, networks and advertising architectures, the underlying concepts may be applied to any network system and any computing device or system in which it is desirable to derive information about surrounding points of interest.
There are multiple ways of implementing one or more of the embodiments described herein, e.g., an appropriate API, tool kit, driver code, operating system, control, standalone or downloadable software object, etc. which enables applications and services to use the pointing based services. Embodiments may be contemplated from the standpoint of an API (or other software object), as well as from a software or hardware object that provides pointing platform services in accordance with one or more of the described embodiments. Various implementations and embodiments described herein may have aspects that are wholly in hardware, partly in hardware and partly in software, as well as in software.
The word “exemplary” is used herein to mean serving as an example, instance, or illustration. For the avoidance of doubt, the subject matter disclosed herein is not limited by such examples. In addition, any aspect or design described herein as “exemplary” is not necessarily to be construed as preferred or advantageous over other aspects or designs, nor is it meant to preclude equivalent exemplary structures and techniques known to those of ordinary skill in the art. Furthermore, to the extent that the terms “includes,” “has,” “contains,” and other similar words are used in either the detailed description or the claims, for the avoidance of doubt, such terms are intended to be inclusive in a manner similar to the term “comprising” as an open transition word without precluding any additional or other elements.
As mentioned, the various techniques described herein may be implemented in connection with hardware or software or, where appropriate, with a combination of both. As used herein, the terms "component," "system" and the like are likewise intended to refer to a computer-related entity, either hardware, a combination of hardware and software, software, or software in execution. For example, a component may be, but is not limited to being, a process running on a processor, a processor, an object, an executable, a thread of execution, a program, and/or a computer. By way of illustration, both an application running on a computer and the computer itself can be a component. One or more components may reside within a process and/or thread of execution, and a component may be localized on one computer and/or distributed between two or more computers.
The aforementioned systems have been described with respect to interaction between several components. It can be appreciated that such systems and components can include those components or specified sub-components, some of the specified components or sub-components, and/or additional components, and according to various permutations and combinations of the foregoing. Sub-components can also be implemented as components communicatively coupled to other components rather than included within parent components (hierarchical). Additionally, it should be noted that one or more components may be combined into a single component providing aggregate functionality or divided into several separate sub-components, and any one or more middle layers, such as a management layer, may be provided to communicatively couple to such sub-components in order to provide integrated functionality. Any components described herein may also interact with one or more other components not specifically described herein but generally known by those of skill in the art.
In view of the exemplary systems described supra, methodologies that may be implemented in accordance with the disclosed subject matter will be better appreciated with reference to the flowcharts of the various figures. While for purposes of simplicity of explanation, the methodologies are shown and described as a series of blocks, it is to be understood and appreciated that the claimed subject matter is not limited by the order of the blocks, as some blocks may occur in different orders and/or concurrently with other blocks from what is depicted and described herein. Where non-sequential, or branched, flow is illustrated via flowchart, it can be appreciated that various other branches, flow paths, and orders of the blocks, may be implemented which achieve the same or a similar result. Moreover, not all illustrated blocks may be required to implement the methodologies described hereinafter.
While the various embodiments have been described in connection with the preferred embodiments of the various figures, it is to be understood that other similar embodiments may be used or modifications and additions may be made to the described embodiment for performing the same function without deviating therefrom. Still further, one or more aspects of the above described embodiments may be implemented in or across a plurality of processing chips or devices, and storage may similarly be effected across a plurality of devices. Therefore, the present invention should not be limited to any single embodiment, but rather should be construed in breadth and scope in accordance with the appended claims.