METHOD, APPARATUS AND COMPUTER PROGRAM PRODUCT FOR
NAVIGATION IN AN INDOOR SPACE
TECHNICAL FIELD
[0001] Various implementations relate generally to a method, an apparatus, and a computer program product for navigation in an indoor space.
BACKGROUND
[0002] Navigation enables continuous tracking of a user's location to dynamically determine a path to the user's intended destination. Various technologies, for example, technologies based on a Global Positioning System (GPS), are widely used for navigating in outdoor spaces. However, indoor navigation and communication in large indoor spaces, such as, for example, shopping malls and convention centers, remain a challenge due to the limited signal strength and accuracy of the GPS in the indoor space. People instead navigate by relying on popular landmarks in the indoor space or on written directions; for example, determining the location of a store in a shopping mall typically involves following written directions to reach the store. Therefore, an accurate, cost-effective and real-time solution for communication between individuals and navigation between two locations in the indoor space is desirable.
SUMMARY OF SOME EMBODIMENTS
[0003] Various aspects of example embodiments are set out in the claims.
[0004] In a first aspect, there is provided a method comprising: facilitating receipt of a navigation request of a user to reach a target location from a current location in an indoor space by performing at least: facilitating receipt of a first input corresponding to the current location of the user in the indoor space; and facilitating receipt of a second input corresponding to the target location within the indoor space; continuously tracking the user in a three-dimensional space in the indoor space by one or more virtual reality cameras based on depth information associated with the user, wherein the continuous tracking comprises tracking the user by at least one virtual reality camera of the one or more virtual reality cameras at each time instant; and providing details for navigation on a user device associated with the user based on the continuous tracking of the user and the target location.
[0005] In a second aspect, there is provided an apparatus comprising at least one processor; one or more virtual reality cameras electrically coupled with the at least one processor; and at least one memory comprising computer program code, the at least one memory and the computer program code configured to, with the at least one processor, cause the apparatus to at least perform: facilitate receipt of a navigation request of a user to reach a target location from a current location in an indoor space by performing at least: facilitating receipt of a first input corresponding to the current location of the user in the indoor space; and facilitating receipt of a second input corresponding to the target location within the indoor space; continuously track the user in a three-dimensional space in the indoor space by the one or more virtual reality cameras based on depth information associated with the user, wherein the continuous tracking comprises tracking the user by at least one virtual reality camera of the one or more virtual reality cameras at each time instant; and provide details for navigation on a user device associated with the user based on the continuous tracking of the user and the target location.
[0006] In a third aspect, there is provided a computer program product comprising at least one computer-readable storage medium, the computer-readable storage medium comprising a set of instructions, which, when executed by one or more processors, cause an apparatus to at least perform: facilitate receipt of a navigation request of a user to reach a target location from a current location in an indoor space by performing at least: facilitating receipt of a first input corresponding to the current location of the user in the indoor space; and facilitating receipt of a second input corresponding to the target location within the indoor space; continuously track the user in a three-dimensional space by one or more virtual reality cameras in the indoor space based on depth information associated with the user, wherein the continuous tracking comprises tracking the user by at least one virtual reality camera of the one or more virtual reality cameras at each time instant; and provide details for navigation on a user device associated with the user based on the continuous tracking of the user and the target location.
[0007] In a fourth aspect, there is provided an apparatus comprising: means for facilitating, by at least one processor, receipt of a navigation request of a user to reach a target location from a current location in an indoor space by performing at least: means for facilitating receipt of a first input corresponding to the current location of the user in the indoor space; and means for facilitating receipt of a second input corresponding to the target location within the indoor space; means for continuously tracking the user in a three-dimensional space in the indoor space by one or more virtual reality cameras electrically coupled with the at least one processor based on depth information associated with the user, wherein the continuous tracking comprises tracking the user by at least one virtual reality camera of the one or more virtual reality cameras at each time instant; and means for providing details for navigation on a user device associated with the user based on the continuous tracking of the user and the target location.
BRIEF DESCRIPTION OF THE FIGURES
[0008] Various embodiments are illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings in which:
[0009] FIGURE 1 illustrates a device, in accordance with an example embodiment;
[0010] FIGURE 2 illustrates an apparatus for navigating a user in an indoor space, in accordance with an example embodiment;
[0011] FIGURE 3 illustrates an example representation of an indoor space for navigating a user from a current location to a target location, in accordance with an example embodiment;
[0012] FIGURE 4 illustrates an example schematic representation of camera modules of a virtual reality camera with overlapping fields of view in an indoor space, in accordance with an example embodiment;
[0013] FIGURE 5 illustrates an example representation view of a virtual reality camera for localization and navigation of a user in an indoor space, in accordance with an example embodiment;
[0014] FIGURE 6 illustrates an example representation of calibrating a virtual reality camera for localization and navigation of a user in an indoor space, in accordance with an example embodiment;
[0015] FIGURE 7 illustrates an example representation of a virtual reality camera with overlapping fields of view of a scene within an indoor space, in accordance with an example embodiment;
[0016] FIGURE 8 illustrates an example representation of navigating a user based on a data signal corresponding to a unique code in an indoor space, in accordance with an example embodiment;
[0017] FIGURE 9 illustrates an example representation of navigating a user in an indoor space, in accordance with an example embodiment;
[0018] FIGURE 10 is a flowchart depicting an example method for navigating a user in an indoor space, in accordance with an example embodiment; and
[0019] FIGURE 11 is a flowchart depicting an example method for navigating a user in an indoor space, in accordance with another example embodiment.
DETAILED DESCRIPTION
[0020] Example embodiments and their potential effects are understood by referring to FIGURES 1 through 11 of the drawings.
[0021] FIGURE 1 illustrates a device 100, in accordance with an example embodiment. It should be understood, however, that the device 100 as illustrated and hereinafter described is merely illustrative of one type of device that may benefit from various embodiments and, therefore, should not be taken to limit the scope of the embodiments. As such, it should be appreciated that at least some of the components described below in connection with the device 100 may be optional and thus in an example embodiment may include more, fewer or different components than those described in connection with the example embodiment of FIGURE 1. The device 100 could be any of a number of types of touch screen based mobile electronic devices, for example, portable digital assistants (PDAs), mobile televisions, gaming devices, cellular phones, all types of computers (for example, laptops, mobile computers or desktops), cameras, mobile digital assistants, or any combination of the aforementioned, and other types of communications devices.
[0022] The device 100 may include an antenna 102 (or multiple antennas) in operable communication with a transmitter 104 and a receiver 106. The device 100 may further include an apparatus, such as a controller 108 or other processing devices that provides signals to and receives signals from the transmitter 104 and the receiver 106, respectively. The signals may include signaling information in accordance with the air interface standard of the applicable cellular system, and/or may also include data corresponding to user speech, received data and/or user generated data. In this regard, the device 100 may be capable of operating with one or more air interface standards, communication protocols, modulation types, and access types. By way of illustration, the device 100 may be capable of operating in accordance with any of a number of first, second, third and/or fourth-generation communication protocols or the like. For example, the device 100 may be capable of operating in accordance with second-generation (2G) wireless communication protocols such as IS-136 (time division multiple access (TDMA)), GSM (global system for mobile communication), and IS-95 (code division multiple access (CDMA)), or with third-generation (3G) wireless communication protocols, such as Universal Mobile Telecommunications System (UMTS), CDMA2000, wideband CDMA (WCDMA) and time division-synchronous CDMA (TD-SCDMA), with 3.9G wireless communication protocols such as evolved universal terrestrial radio access network (E-UTRAN), with fourth-generation (4G) wireless communication protocols, or the like. As an alternative (or additionally), the device 100 may be capable of operating in accordance with non-cellular communication mechanisms, for example, computer networks such as the Internet, local area networks, wide area networks, and the like; short range wireless communication networks such as Bluetooth® networks, Zigbee® networks, Institute of Electrical and Electronics Engineers (IEEE) 802.11x networks, and the like; and wireline telecommunication networks such as a public switched telephone network (PSTN).
[0023] The controller 108 may include circuitry implementing, among others, audio and logic functions of the device 100. For example, the controller 108 may include, but is not limited to, one or more digital signal processor devices, one or more microprocessor devices, one or more processor(s) with accompanying digital signal processor(s), one or more processor(s) without accompanying digital signal processor(s), one or more special-purpose computer chips, one or more field-programmable gate arrays (FPGAs), one or more controllers, one or more application-specific integrated circuits (ASICs), one or more computer(s), various analog to digital converters, digital to analog converters, and/or other support circuits. Control and signal processing functions of the device 100 are allocated between these devices according to their respective capabilities. The controller 108 thus may also include the functionality to convolutionally encode and interleave messages and data prior to modulation and transmission. The controller 108 may additionally include an internal voice coder, and may include an internal data modem. Further, the controller 108 may include functionality to operate one or more software programs, which may be stored in a memory. For example, the controller 108 may be capable of operating a connectivity program, such as a conventional web browser. The connectivity program may then allow the device 100 to transmit and receive web content, such as location-based content and/or other web page content, according to a Wireless Application Protocol (WAP), Hypertext Transfer Protocol (HTTP) and/or the like. In an example embodiment, the controller 108 may be embodied as a multi-core processor such as a dual or quad core processor. However, any number of processors may be included in the controller 108.
[0024] The device 100 may also comprise a user interface including an output device such as a ringer 110, an earphone or speaker 112, a microphone 114, a display 116, and a user input interface, which may be coupled to the controller 108. The user input interface, which allows the device 100 to receive data, may include any of a number of devices allowing the device 100 to receive data, such as a keypad 118, a touch display, a microphone or other input devices. In embodiments including the keypad 118, the keypad 118 may include numeric (0-9) and related keys (#, *), and other hard and soft keys used for operating the device 100. Alternatively or additionally, the keypad 118 may include a conventional QWERTY keypad arrangement. The keypad 118 may also include various soft keys with associated functions. In addition, or alternatively, the device 100 may include an interface device such as a joystick or other user input interface. The device 100 further includes a battery 120, such as a vibrating battery pack, for powering various circuits that are used to operate the device 100, as well as optionally providing mechanical vibration as a detectable output.
[0025] In an example embodiment, the device 100 includes a media capturing element, such as a camera, video and/or audio module, in communication with the controller 108. The media capturing element may be any means for capturing an image, video and/or audio for storage, display or transmission. In an example embodiment in which the media capturing element is a camera module 122, the camera module 122 may include a digital camera capable of forming a digital image file from a captured image. As such, the camera module 122 includes all hardware, such as a lens or other optical component(s), and software for creating a digital image file from a captured image. Alternatively, the camera module 122 may include the hardware needed to view an image, while a memory device of the device 100 stores instructions for execution by the controller 108 in the form of software to create a digital image file from a captured image. In an example embodiment, the camera module 122 may further include a processing element such as a co-processor, which assists the controller 108 in processing image data and an encoder and/or a decoder for compressing and/or decompressing image data. The encoder and/or the decoder may encode and/or decode according to a JPEG standard format or another like format. For video, the encoder and/or the decoder may employ any of a plurality of standard formats such as, for example, standards associated with H.261, H.262/ MPEG-2, H.263, H.264, H.264/MPEG-4, MPEG-4, and the like. In some cases, the camera module 122 may provide live image data to the display 116. Moreover, in an example embodiment, the display 116 may be located on one side of the device 100 and the camera module 122 may include a lens positioned on the opposite side of the device 100 with respect to the display 116 to enable the camera module 122 to capture images on one side of the device 100 and present a view of such images to the user positioned on the other side of the device 100. [0026] The device 100 may further include a user identity module (UIM) 124. The UIM 124 may be a memory device having a processor built in. The UIM 124 may include, for example, a subscriber identity module (SIM), a universal integrated circuit card (UICC), a universal subscriber identity module (USIM), a removable user identity module (R-UIM), or any other smart card. The UIM 124 typically stores information elements related to a mobile subscriber. In addition to the UIM 124, the device 100 may be equipped with memory. For example, the device 100 may include a volatile memory 126, such as volatile random access memory (RAM) including a cache area for the temporary storage of data. The device 100 may also include other non-volatile memory 128, which may be embedded and/or may be removable. The non-volatile memory 128 may additionally or alternatively comprise an electrically erasable programmable read only memory (EEPROM), flash memory, hard drive, or the like. The memories may store any number of pieces of information, and data, used by the device 100 to implement the functions of the device 100.
[0027] FIGURE 2 illustrates an apparatus 200 for navigating a user in an indoor space, in accordance with an example embodiment. In an embodiment, the apparatus 200 may be employed, for example, in one or more devices such as the device 100 of FIGURE 1. However, it should be noted that the apparatus 200 may also be employed on a variety of other devices, both mobile and fixed, and therefore, embodiments should not be limited to application on devices such as the device 100 of FIGURE 1. In an example embodiment, the apparatus 200 may be embodied in the form of one or more virtual reality cameras that are connected to each other, and where each virtual reality camera captures a 360 degree view of a scene. Herein, the term 'virtual reality camera' refers to any camera system that comprises a plurality of component cameras configured with respect to each other such that the plurality of component cameras are used to capture 360 degree views of the surroundings. Hence, references to the term 'virtual reality camera' throughout the description should be construed as any camera system that has multiple cameras for capturing a 360 degree view of the surroundings. The plurality of component cameras may have overlapping fields of view, such that the images (or image frames) captured by the plurality of component cameras may be used to generate a 360 degree view of the surroundings. Without limiting the scope of the embodiments, examples of the virtual reality cameras may include surveillance cameras that may be fixed to stationary objects, or may be positioned in indoor spaces such as shopping malls, convention centers, etc. Alternatively, embodiments may be employed on a combination of devices including, for example, those listed above. Accordingly, various embodiments may be embodied wholly at a single device, or in a combination of devices. Furthermore, it should be noted that the devices or elements described below may not be mandatory and thus some may be omitted in certain embodiments.
[0028] The apparatus 200 includes or otherwise is in communication with at least one processor 202 and at least one memory 204. Examples of the at least one memory 204 include, but are not limited to, volatile and/or non-volatile memories. Some examples of the volatile memory include, but are not limited to, random access memory, dynamic random access memory, static random access memory, and the like. Some examples of the non-volatile memory include, but are not limited to, hard disks, magnetic tapes, optical disks, programmable read only memory, erasable programmable read only memory, electrically erasable programmable read only memory, flash memory, and the like. The memory 204 may be configured to store information, data, applications, instructions or the like for enabling the apparatus 200 to carry out various functions in accordance with various example embodiments. For example, the memory 204 may be configured to buffer input data comprising media content for processing by the processor 202. Additionally or alternatively, the memory 204 may be configured to store instructions for execution by the processor 202.
[0029] An example of the processor 202 may include the controller 108. The processor 202 may be embodied in a number of different ways. The processor 202 may be embodied as a multi-core processor, a single core processor; or combination of multi-core processors and single core processors. For example, the processor 202 may be embodied as one or more of various processing means such as a coprocessor, a microprocessor, a controller, a digital signal processor (DSP), processing circuitry with or without an accompanying DSP, or various other processing devices including integrated circuits such as, for example, an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), a microcontroller unit (MCU), a hardware accelerator, a special-purpose computer chip, or the like. In an example embodiment, the multi-core processor may be configured to execute instructions stored in the memory 204 or otherwise accessible to the processor 202. Alternatively or additionally, the processor 202 may be configured to execute hard coded functionality. As such, whether configured by hardware or software methods, or by a combination thereof, the processor 202 may represent an entity, for example, physically embodied in circuitry, capable of performing operations according to various embodiments while configured accordingly. For example, if the processor 202 is embodied as two or more of an ASIC, FPGA or the like, the processor 202 may be specifically configured hardware for conducting the operations described herein. Alternatively, as another example, if the processor 202 is embodied as an executor of software instructions, the instructions may specifically configure the processor 202 to perform the algorithms and/or operations described herein when the instructions are executed. However, in some cases, the processor 202 may be a processor of a specific device, for example, a mobile terminal or network device adapted for employing embodiments by further configuration of the processor 202 by instructions for performing the algorithms and/or operations described herein. The processor 202 may include, among other things, a clock, an arithmetic logic unit (ALU) and logic gates configured to support operation of the processor 202. [0030] A user interface 206 may be in communication with the processor 202.
Examples of the user interface 206 include, but are not limited to, an input interface and/or an output interface. The input interface is configured to receive an indication of a user input. The output user interface provides an audible, visual, mechanical or other output and/or feedback to the user. Examples of the input interface may include, but are not limited to, a keyboard, a mouse, a joystick, a keypad, a touch screen, soft keys, and the like. Examples of the output interface may include, but are not limited to, a display such as light emitting diode display, thin-film transistor (TFT) display, liquid crystal displays, active-matrix organic light-emitting diode (AMOLED) display, a microphone, a speaker, ringers, vibrators, and the like. In an example embodiment, the user interface 206 may include, among other devices or elements, any or all of a speaker, a microphone, a display, and a keyboard, touch screen, or the like. In this regard, for example, the processor 202 may comprise user interface circuitry configured to control at least some functions of one or more elements of the user interface 206, such as, for example, a speaker, ringer, microphone, display, and/or the like. The processor 202 and/or user interface circuitry comprising the processor 202 may be configured to control one or more functions of one or more elements of the user interface 206 through computer program instructions, for example, software and/or firmware, stored on a memory, for example, the at least one memory 204, and/or the like, accessible to the processor 202.
[0031] In an example embodiment, the apparatus 200 may include an electronic device. Some examples of the electronic device include a communication device, a media capturing device with or without communication capabilities, computing devices, and the like. Some examples of the electronic device may include a mobile phone, a personal digital assistant (PDA), and the like. Some examples of computing device may include a laptop, a personal computer, and the like. In an example embodiment, the electronic device may include a user interface, for example, the user interface 206, having user interface circuitry and user interface software configured to facilitate a user to control at least one function of the electronic device through use of a display and further configured to respond to user inputs. In an example embodiment, the electronic device may include a display circuitry configured to display at least a portion of the user interface 206 of the electronic device. The display and display circuitry may be configured to facilitate the user to control at least one function of the electronic device. [0032] In an example embodiment, the apparatus 200 may include one or more electronic devices. Some examples of the electronic devices include virtual reality cameras with or without communication capabilities, and the like. In an example embodiment, the electronic device may include a user interface, for example, the user interface 206, having user interface circuitry and user interface software configured to facilitate a user to control at least one function of the electronic device through use of a display and further configured to respond to user inputs. In an example embodiment, the electronic device may include a display circuitry configured to display at least a portion of the user interface 206 of the electronic device. The display and the display circuitry may be configured to facilitate the user to control at least one function of the electronic device.
[0033] In an example embodiment, the electronic device may be embodied so as to include one or more virtual reality (VR) cameras, such as a virtual reality camera 208. In an example embodiment, the VR camera 208 includes multiple camera modules (e.g., camera modules 210, 212, 214 and 216) that are positioned with respect to each other such that they have overlapping fields of view, and a 360 degree 3-D view of the scene surrounding the VR camera 208 can be obtained based on the images/image frames captured individually by the camera modules 210, 212, 214 and 216 of the VR camera 208. Only four camera modules 210, 212, 214 and 216 are shown for example purposes to facilitate the present description, and it should be understood that there may be more than four camera modules present in the VR camera 208. The VR camera 208 may be in communication with the processor 202 and/or other components of the apparatus 200. The VR camera 208 may be in communication with other imaging circuitries and/or software, and is configured to capture digital images or to capture video or other graphic media. In an example embodiment, the VR camera 208 may be an array camera or a plenoptic camera capable of capturing light-field images (having multiple views of the same scene), from which various views of the scene can be generated. The VR camera 208, and other circuitries, in combination, may be examples of at least one camera module such as the camera module 122 of the device 100.
[0034] These components (202-208) may communicate with each other via a centralized circuit system 218 to facilitate adaptive control of image capture parameters of the camera modules, for example, the camera modules 210, 212, 214 and 216 of the VR camera 208. The centralized circuit system 218 may be various devices configured to, among other things, provide or enable communication between the components (202-208) of the apparatus 200. In certain embodiments, the centralized circuit system 218 may be a central printed circuit board (PCB) such as a motherboard, a main board, a system board, or a logic board. The centralized circuit system 218 may also, or alternatively, include other printed circuit assemblies (PCAs) or communication channel media.
[0035] In an example embodiment, the apparatus 200 is caused to perform navigation in the indoor space. Herein, in an example, the 'indoor space' refers to a closed building such as a shopping mall or a convention center. Herein, in another example, the term 'indoor space' may also be extended to refer to any defined area which can be entirely imaged by a plurality of VR cameras, for example, a ship, a tradeshow, a small township, etc. The apparatus 200 is installed in the indoor space such that one or more VR cameras (hereinafter also referred to as the VR cameras 208) have a view of the entire 3-D space within the indoor space. As each VR camera has a plurality of camera modules with overlapping fields of view (FOVs), the depth of any particular place, object or user within the indoor space can be obtained. In another embodiment, the plurality of camera modules is configured to generate stereoscopic images to determine the depth of a particular place, object or user within the indoor space.
[0036] In an example embodiment, the apparatus 200 is a distributed system with the VR cameras 208 installed at many places inside the indoor space, such that 3-D views of the entire indoor space appear in the FOVs of the VR cameras 208. In this example embodiment, the processor 202 may also be embodied in each of the VR cameras 208 so as to process the images/image frames captured by the VR cameras 208. It is also noted that each of the VR cameras 208 may also have user interfaces (e.g., UI 206) for receiving user images, and communication means for sending and receiving signals from the user devices. For the purposes of this description, unless otherwise suggested, any acts or functions performed by the VR cameras 208 should be construed as being performed by the processor 202 along with the VR cameras 208 and image processing instructions stored in the memory 204.
[0037] In this example embodiment, the processor 202 is configured to, with the content of the memory 204, and optionally with other components such as the one or more VR cameras 208 and the user interface 206, cause the apparatus 200 to facilitate receipt of a navigation request by a user to reach a target location from a current location in an indoor space. Herein, the navigation request may be provided by a user device, for example, the device 100 associated with the user in the indoor space. In an example embodiment, facilitating the receipt of the navigation request includes facilitating receipt of a first input corresponding to the current location of the user in the indoor space. In an example embodiment, the first input comprises at least one of a user image, a data signal or a first text input for determining the current location of the user. In an example embodiment, facilitating the receipt of the navigation request further includes facilitating receipt of a second input corresponding to the target location within the indoor space. In an example embodiment, the second input comprises at least one of a second text input, a target image input and a speech input. In an example embodiment, a processing means may be configured to facilitate receipt of a navigation request by a user to reach a second location from a first location in an indoor space by performing at least: facilitating receipt of a first input corresponding to a current location of the user in the indoor space; and facilitating receipt of a second input corresponding to a target location within the indoor space. An example of the processing means may include the processor 202, which may be an example of the controller 108, the user interface 206 and the one or more VR cameras 208.
[0038] In an example embodiment, the processor 202 is configured to, with the content of the memory 204 and the one or more VR cameras 208, cause the apparatus 200 to continuously track the user in a three-dimensional space within the indoor space. For instance, the user may be tracked by the one or more VR cameras 208 installed within the indoor space based on depth information associated with the user. It is noted that the depth information may be continuously obtained by the one or more VR cameras 208. In an example embodiment, the continuous tracking comprises tracking the user by at least one camera of the one or more VR cameras 208 at each time instant. For instance, the apparatus 200 is caused to share data related to the tracking of any user or object between the VR cameras 208. For example, the VR cameras 208 may have information of the current location associated with the user, a unique code associated with the user, the target location associated with the user, etc.
[0039] In an example embodiment, the current location of the user is tracked based on depth information received from stereoscopic images associated with the user. For example, each VR camera 208 includes a plurality of camera modules with overlapping fields of view. The plurality of camera modules may be used to generate a plurality of stereoscopic images associated with the indoor space. In an example embodiment, the VR camera 208 combines the plurality of stereoscopic images to generate a panoramic image covering the indoor space. In an example embodiment, the VR camera 208 may be configured to determine the current location associated with the user based on the stereoscopic images obtained from the plurality of camera modules. For instance, the stereoscopic images may be used to generate the three-dimensional (3-D) space of the indoor space and determine depth information associated with the user. The depth information may be used to determine the current location associated with the user in the 3-D space. In an example embodiment, the apparatus 200 is caused to determine the depth information (z1) from the disparity (d1) obtained from the stereoscopic images based on the following expression (1):

z1 = (f × B) / d1 (1),

where f denotes the focal length of the VR camera 208 in pixels and B denotes the stereo baseline of the VR camera 208 in meters. The co-ordinates x1 and y1 associated with the current location of the user may be computed based on the fundamental equation of optics as shown by the following expressions (2) and (3):

x1 = (xcamera × z1) / f (2)

y1 = (ycamera × z1) / f (3),

where f is the focal length in pixels, and xcamera, ycamera are the coordinates of a point on the sensor. For example, as shown in FIGURE 5, the co-ordinates associated with a store 502 are (-250, -50).
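The localization of expressions (1)-(3) can be pictured with the short sketch below. This is a minimal illustration only; the focal length, baseline, disparity and pixel values are assumptions for the example and are not taken from the embodiment.

```python
def pixel_to_3d(x_cam, y_cam, disparity, focal_px, baseline_m):
    """Convert a sensor pixel location and its stereo disparity into
    camera-frame co-ordinates (x1, y1, z1) in meters, per expressions (1)-(3)."""
    if disparity <= 0:
        raise ValueError("disparity must be positive")
    z1 = focal_px * baseline_m / disparity   # expression (1): depth from disparity
    x1 = x_cam * z1 / focal_px               # expression (2)
    y1 = y_cam * z1 / focal_px               # expression (3)
    return x1, y1, z1

# Example: a store detected at sensor co-ordinates (-250, -50) with an assumed
# disparity of 40 pixels, a 700-pixel focal length and a 0.1 m stereo baseline.
print(pixel_to_3d(-250, -50, 40, focal_px=700, baseline_m=0.1))
```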
[0040] In an example embodiment, the apparatus 200 comprising the one or more VR cameras 208 may determine the current location associated with the user based on the first input. The apparatus 200 may receive the navigation request with the first input. In an example embodiment, an example of the first input is the user image that may be provided by the user device. For instance, the apparatus 200 comprising the one or more VR cameras 208 has a map of the 3-D space of the indoor space. The one or more VR cameras 208 compute the depth information associated with the user based on the user image. The depth information may be used to determine the current location associated with the user in the 3-D space corresponding to the indoor space.
[0041] In another example embodiment, an example of the first input may be a data signal corresponding to the unique code of the user. In an example embodiment, the unique code is provided to the user during registration of the user with the apparatus 200, which is described later with reference to FIGURE 8. In an example embodiment, the data signal may be generated by the user device associated with the user, such as the device 100, by strobing the flashlight of the camera, such as the camera module 122 of the user device, at high frequencies. For instance, the user device may comprise an application installed on the user device, where the application can be used by the user for generating a modulated flash based on the unique code associated with the user. Accordingly, the strobing of the flashlight from the user device is based on the unique code associated with the user. In an example embodiment, the one or more VR cameras 208 may use the rolling shutter mechanism of image capture in the sensor modules of the one or more VR cameras 208 to detect the strobe pattern and demodulate the strobe pattern to determine the data signal. The demodulated data signal may be used to identify the user based on the unique code, and further determine the current location associated with the user. It is noted that the demodulated data may contain various kinds of information, for example, for identifying the user and/or tracking the user across various VR cameras as the user moves.
[0042] In an example embodiment, the one or more VR cameras 208 may determine the target location associated with the user based on the second input. The one or more VR cameras 208 may receive the navigation request with the second input. In an example embodiment, the second input may be a second text input corresponding to the target location. For example, if the user intends to locate a store 'XYZ', the user provides the second text input corresponding to the store 'XYZ'. In an example embodiment, the apparatus 200 is caused to search a database to determine the target location. For example, the apparatus 200 including the one or more VR cameras 208 may have a map corresponding to the 3-D space of the indoor space. The map may have data associated with the locations of various stores and landmarks. In an example embodiment, the one or more VR cameras 208 perform optical character recognition of the map to match the second text input with the target location, such as the store 'XYZ'.
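The text-based lookup just described can be sketched as a fuzzy match of the second text input against a directory of labels read from the indoor map. The directory, its labels and the co-ordinates below are assumptions for illustration and are not part of the embodiment.

```python
import difflib

STORE_DIRECTORY = {  # hypothetical labels (e.g., from OCR of the map) -> 3-D co-ordinates in meters
    "XYZ Fashion": (12.0, 0.0, 40.0),
    "ABC Books":   (55.0, 0.0, 18.0),
    "Coffee Shop": (30.0, 4.0, 22.0),
}

def resolve_target(text_input, directory=STORE_DIRECTORY):
    """Return (label, co-ordinates) of the closest matching store label, or None."""
    matches = difflib.get_close_matches(text_input, directory.keys(), n=1, cutoff=0.4)
    if not matches:
        return None
    label = matches[0]
    return label, directory[label]

print(resolve_target("XYZ"))  # -> ('XYZ Fashion', (12.0, 0.0, 40.0))
```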
[0043] In another example embodiment, the navigation request comprising the second input may include providing a target image input to at least one VR camera of the one or more VR cameras 208. For example, the target image input may be an image associated with a store. The one or more VR cameras 208 may be configured to determine depth information associated with the target location, i.e., the store. The one or more VR cameras 208 may use stereoscopic images obtained from the one or more VR cameras 208 to determine the depth information associated with the target location. The depth information may be used to determine the target location corresponding to the store. Alternatively, the user may provide a speech input corresponding to the target location. It should be noted that the target location herein may also correspond to another user, for example, a second user in the indoor space. In an example embodiment, the one or more VR cameras 208 may use an image corresponding to a friend obtained from a social networking account associated with the user as the target image input. For instance, the user may authorize the one or more VR cameras 208 to locate one or more friends associated with the user on the social networking accounts. The one or more VR cameras 208 extract features of the one or more friends associated with the user and scan the indoor space for a suitable match. The one or more VR cameras 208 may provide a list of friends in the indoor space if there is a suitable match.
[0044] In an example embodiment, the first user may provide a speech signal to the one or more VR cameras to locate the second user. For instance, the first user may be trying to locate the second user named 'ABC'. The first user may speak into the microphone of the user device associated with the first user, for example, the microphone 114 of the device 100. In an example embodiment, the apparatus 200 including the one or more VR cameras 208 may comprise a speech recognition unit configured to detect the speech signal. In an example embodiment, the one or more VR cameras 208 may be configured to use sound source localization and determine a position of the first user providing the speech signal relative to a public address system. In an example embodiment, the one or more VR cameras 208 are configured to communicate with the public address system in the indoor space. For instance, the speech recognition unit associated with the one or more VR cameras 208 may recognize a message of the first user trying to locate the second user named 'ABC'. The public address system may be configured to broadcast the message received from the speech recognition unit. For example, the public address system broadcasts a message that the first user, located at the determined position relative to the public address system, is trying to locate the second user named 'ABC'.
[0045] In an example embodiment, the processor 202 is configured to, with the content of the memory 204, and with other components such as the one or more VR cameras 208, cause the apparatus 200 to provide details for navigation on a user device associated with the user based on the continuous tracking of the user and the second location (or target location). For instance, the one or more VR cameras 208 may be configured to locate the target location and provide instructions to the user associated with the current location to reach the target location.
[0046] It should be noted that in an example embodiment, before providing the details for navigation, the one or more VR cameras 208 are calibrated for navigation in the indoor space. For instance, calibration of the one or more VR cameras 208 includes identifying landmarks and valid paths in the indoor space. The user may be mapped to a closest landmark for navigation. In an example embodiment, a bi-directional graph may be constructed, with the landmarks as nodes of the bi-directional graph and the valid paths as edges of the bi-directional graph. For example, a coffee shop, a ground floor lift entrance, a first floor lift entrance, a main entrance, an apparel store, a food stall and an Automated Teller Machine (ATM) may be marked as landmarks in the indoor space, such as a shopping mall. The VR cameras 208 may be configured to determine valid paths between the landmarks. In an example embodiment, the apparatus 200 comprising the VR cameras 208 is caused to compute a shortest path for the user associated with the current location to reach the target location based on a shortest path algorithm. For instance, if the user associated with the current location on a first floor of the shopping mall intends to locate the target location corresponding to the coffee shop on the ground floor of the shopping mall, the one or more VR cameras 208 determine the current location of the user and map the user to the closest landmark, say the first floor lift entrance. The one or more VR cameras 208 construct the bi-directional graph with the landmarks as nodes and the valid paths between the landmarks as edges. The one or more VR cameras 208 configure the first floor lift entrance as the starting node and the coffee shop on the ground floor of the shopping mall as the ending node. The shortest path may be computed based on a shortest path routing algorithm.
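One possible illustration of the landmark graph and shortest-path computation described above is the sketch below, which uses Dijkstra's algorithm; the choice of algorithm, the landmark names and the edge distances are assumptions for the example only.

```python
import heapq

EDGES = {  # bi-directional landmark graph; distances in meters (hypothetical layout)
    "first_floor_lift":  {"ground_floor_lift": 5},
    "ground_floor_lift": {"first_floor_lift": 5, "coffee_shop": 100, "main_entrance": 60},
    "coffee_shop":       {"ground_floor_lift": 100},
    "main_entrance":     {"ground_floor_lift": 60},
}

def shortest_path(graph, start, goal):
    """Return (total_distance, [landmarks...]) of the cheapest route (Dijkstra)."""
    queue = [(0, start, [start])]
    visited = set()
    while queue:
        dist, node, path = heapq.heappop(queue)
        if node == goal:
            return dist, path
        if node in visited:
            continue
        visited.add(node)
        for neighbour, weight in graph.get(node, {}).items():
            if neighbour not in visited:
                heapq.heappush(queue, (dist + weight, neighbour, path + [neighbour]))
    return float("inf"), []

# The user mapped to the first floor lift entrance wants the coffee shop.
print(shortest_path(EDGES, "first_floor_lift", "coffee_shop"))
```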
[0047] In an example embodiment, instructions associated with the navigation may be provided to the user on a user device, for example, the device 100 associated with the user. For instance, the user may be provided with instructions to move down a floor to the ground floor lift entrance and walk north 100 meters to reach the target location of the coffee shop. In an example embodiment, navigation instructions to locate the target location may be provided in a graphical form on the user device associated with the user. Alternatively, the one or more VR cameras 208 may provide navigation instructions as turn-by-turn instructions in a textual form. In another example embodiment, a voice signal may provide navigation instructions to the user to locate the target location. Alternatively, any combination of the above may be used to provide navigation instructions to the user.
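A minimal sketch of producing such textual turn-by-turn instructions from a computed landmark path is given below; the per-leg directions and distances are assumed values for illustration, not measurements from the embodiment.

```python
def turn_by_turn(path, legs):
    """path: ordered landmark names; legs: {(a, b): (direction, meters)}."""
    steps = []
    for a, b in zip(path, path[1:]):
        direction, meters = legs[(a, b)]
        steps.append(f"From {a.replace('_', ' ')}, go {direction} {meters} m to {b.replace('_', ' ')}.")
    return steps

LEGS = {  # hypothetical survey of the valid paths between landmarks
    ("first_floor_lift", "ground_floor_lift"): ("down one floor", 5),
    ("ground_floor_lift", "coffee_shop"):      ("north", 100),
}

for line in turn_by_turn(["first_floor_lift", "ground_floor_lift", "coffee_shop"], LEGS):
    print(line)
```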
[0048] In an example embodiment, tracking the user includes registering the user with the apparatus 200 including the one or more VR cameras 208 before the user can actually start using the apparatus 200 for the indoor navigation purposes. For instance, the VR camera 208 receives the user image and extracts features associated with the user. The processor 202 linked with the VR camera 208 stores the extracted features in a database and provides a unique code to the user. The unique code associated with the user may be shared with the one or more VR cameras 208 in the indoor space. The user device may generate a data signal corresponding to the unique code and provide the data signal to the apparatus 200. In an example embodiment, the user device may comprise an application that is installed on the user device for generating the data signal corresponding to the unique code. For example, once the application is initialized, the data signal in the form of a strobe light may be pulsed at the VR camera 208 by the user, and the apparatus 200 comprising the VR camera 208 determines that the user is placing a navigation request. The VR camera 208 may receive and decode the data signal using the rolling shutter mechanism of image capture in the sensor module of the VR camera 208. Further, based on the decoded data signal, the user is recognized by the apparatus 200 and his current location is also determined using the VR camera 208.
[0049] In another example embodiment, two different users may locate each other using the same unique code. For instance, two users may send the data signals corresponding to the same unique code, and the apparatus 200, upon receiving the same unique code from two different source locations, determines a navigation request from each of the two users. In this example embodiment, the first user associated with a first user device and the second user associated with a second user device may strobe the same unique code. The one or more VR cameras 208 may be configured to match the received unique codes and compute the locations associated with the first user and the second user. The VR cameras 208 provide navigation instructions to the users to locate each other. This example embodiment is further explained with reference to FIGURE 8.
[0050] Various non-limiting example embodiments of the indoor navigation are explained further with reference to FIGURES 3 to 9, taking the example representation of FIGURE 3 as a reference.
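Before turning to the example of FIGURE 3, the rolling-shutter decoding mentioned in paragraph [0048] can be pictured with the simplified sketch below: because sensor rows are exposed sequentially, an on/off flash appears as bright and dark horizontal bands within a single frame, and the band pattern can be read back as bits. The rows-per-bit figure, the threshold rule and the synthetic frame are assumptions for illustration only, not the demodulation scheme of the embodiment.

```python
import numpy as np

ROWS_PER_BIT = 8  # assumed number of sensor rows exposed during one bit period

def decode_strobe(frame, rows_per_bit=ROWS_PER_BIT):
    """Return the bit sequence read from the mean row brightness of a grayscale frame."""
    row_mean = frame.mean(axis=1)                        # brightness per sensor row
    threshold = (row_mean.max() + row_mean.min()) / 2.0  # simple mid-range threshold
    bits = []
    for start in range(0, len(row_mean) - rows_per_bit + 1, rows_per_bit):
        band = row_mean[start:start + rows_per_bit]
        bits.append(1 if band.mean() > threshold else 0)
    return bits

# Synthetic frame whose horizontal banding encodes the code 1-0-1-1-0.
code = [1, 0, 1, 1, 0]
frame = np.concatenate([np.full((ROWS_PER_BIT, 64), 200 if b else 30) for b in code])
print(decode_strobe(frame.astype(float)))  # -> [1, 0, 1, 1, 0]
```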
[0051] FIGURE 3 illustrates an example representation of an indoor space (e.g., a shopping mall 300) for navigating a user 302 from a current location 304 to a target location 306 (hereinafter referred to as the store 306), in accordance with an example embodiment. For instance, the user 302 carrying a user device, such as the device 100, intends to locate the store 306 (i.e. the target location) in the shopping mall 300 (indoor space). In this example representation, a plurality of VR cameras 308, 310, 312, 314, 316, 318 are mounted at different locations in the shopping mall 300 and are configured to continuously track the current location 304 of the user 302 and navigate the user 302 to the store 306. The VR cameras 308-318 are configured to communicate with each other and provide directions to the user 302 to locate the store 306. For example, the VR camera 308 computes local co-ordinates associated with the current location 304 of the user 302 and provides a shortest path (ABCD) with directions to reach the store 306 in the shopping mall 300. The movement of the user 302 may be tracked continuously by the VR camera 308 as the user moves from location A to location B. The local co-ordinates of the user 302 corresponding to the current location 304 of the user 302 may be updated in the plurality of VR cameras 308-318. Each VR camera of the plurality of VR cameras 308-318 comprises a plurality of camera modules with overlapping fields of view.
[0052] In this example representation, the plurality of VR cameras 308-318 may be used to determine a location based on depth information associated with the location. For instance, in an example, the current location 304 of the user 302 may be determined based on the depth information obtained from stereoscopic images of the plurality of camera modules of the VR camera 308. In this example representation, the VR camera 308 is configured to capture image frames of a scene 320 near the VR camera 308. For instance, a plurality of image frames obtained from the plurality of camera modules of the VR camera 308 with overlapping fields of view are combined together to generate the image frames corresponding to the scene 320 around the VR camera 308.
[0053] In this example representation, as the user 302 moves to the location B (near the elevator), the user 302 may be farther from the VR camera 308 and may be better tracked in the FOVs of the VR camera 310 than in the FOVs of the VR camera 308. The VR camera 310 continues to track the user from the location B to the location C and provides details for navigation to the user 302 on the user device. In this example representation, the current location 304 of the user 302 may be continuously tracked and updated by the apparatus 200. The apparatus 200 including the plurality of the VR cameras 308-318 is configured to receive the continuously updated current location 304 associated with the user 302 and provide details for navigation to the user device for reaching the store 306.
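The hand-off between cameras described above can be illustrated with a very simple heuristic that assigns tracking to the camera closest to the latest 3-D estimate of the user. The heuristic itself, the camera identifiers and the mounting positions below are assumptions made for this sketch, not necessarily the selection criterion used by the embodiment.

```python
import math

CAMERAS = {  # hypothetical mounting positions in mall co-ordinates (meters)
    "VR-308": (0.0, 3.0, 0.0),
    "VR-310": (35.0, 3.0, 20.0),
    "VR-312": (70.0, 3.0, 45.0),
}

def select_tracking_camera(user_xyz, cameras=CAMERAS):
    """Return the id of the camera nearest to the user's current 3-D estimate."""
    return min(cameras, key=lambda cam: math.dist(cameras[cam], user_xyz))

print(select_tracking_camera((5.0, 1.7, 2.0)))    # near location A -> VR-308
print(select_tracking_camera((38.0, 1.7, 22.0)))  # near location B -> VR-310
```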
[0054] In an example representation, the apparatus 200 may also assist in navigating one user to reach a location of another user within the indoor space, or may also assist in navigating both users to reach a common location within the indoor space. For instance, the VR cameras 308-318 may track the first user 302 and a second user 322, and using the information received from the VR cameras 308-318, the apparatus 200 is caused to navigate the first user 302 to the second user 322, or guide them to meet at a common place. In an example embodiment, the user 302 captures a first image using the user device associated with the user 302, such as the camera module 122 of the device 100, and uploads the first image to the apparatus 200 comprising the VR camera 308. It is noted that the user 302 may upload the image using the application (e.g., an indoor navigation application) installed on or otherwise accessible (e.g., accessible from the Internet, an Intranet, etc.) by the user device. The user 322 also captures a second image corresponding to the user 322 and uploads the second image to the apparatus 200 comprising the VR camera 310 using the application installed on or otherwise accessible to the user device of the user 322. In this example representation, the apparatus 200 using the VR camera 308 determines the current location 304 associated with the user 302. For instance, the VR camera 308 computes a first stereoscopic depth of the first image based on stereoscopic images obtained from the plurality of camera modules of the VR camera 308. The first stereoscopic depth may be used to determine the current location 304 of the user 302. In this example representation, the plurality of VR cameras 308-318 are configured to continuously track the location associated with the user, such as the current location 304 associated with the user 302. Similarly, the VR camera 310 may be configured to compute a second stereoscopic depth of the user 322 based on the second image. The second stereoscopic depth may be used to determine a location 326 associated with the user 322.
[0055] In this example representation, the processor 202 is configured, along with the VR cameras 308-318, to determine a shortest path for the user 302 to locate the user 322 in the shopping mall 300. Alternatively, the processor 202 is configured, along with the VR cameras 308-318, to generate a common location for the user 302 to meet the user 322. For example, the VR cameras 308, 310 may determine a landmark, such as the coffee shop, for the users 302 and 322 to meet. In this example representation, the apparatus 200 is caused to provide details for navigation to the user 302 for locating the user 322. In an example embodiment, the navigation details may be provided in a graphical representation on a user device, such as the device 100 carried by the user 302. Alternatively or additionally, the instructions may be provided in a textual form as turn-by-turn instructions. For example, the VR cameras 308, 310 provide the user 302 the distance and direction to be walked for reaching the user 322.
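One simple way to pick such a common meeting landmark, sketched below, is to choose the landmark that minimizes the longer of the two users' distances, so that neither user walks much farther than the other. The criterion, the landmark names and the co-ordinates are assumptions for illustration only.

```python
import math

LANDMARKS = {  # hypothetical landmark positions in mall co-ordinates (meters)
    "coffee shop":   (10.0, 0.0, 40.0),
    "main entrance": (0.0, 0.0, 0.0),
    "food stall":    (55.0, 0.0, 30.0),
}

def meeting_landmark(user_a_xyz, user_b_xyz, landmarks=LANDMARKS):
    """Return the landmark minimizing the longer of the two users' straight-line distances."""
    return min(landmarks,
               key=lambda name: max(math.dist(landmarks[name], user_a_xyz),
                                    math.dist(landmarks[name], user_b_xyz)))

# User 302 near the entrance, user 322 at the far end of the mall.
print(meeting_landmark((5.0, 0.0, 2.0), (50.0, 0.0, 60.0)))  # -> 'coffee shop'
```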
[0056] In an example embodiment, users may also provide navigation request inputs in the form of gestures; for example, a gesture may be defined for users trying to locate each other. For instance, a user 324 may raise a hand as a gesture to locate another user in the shopping mall 300. In this example embodiment, the VR cameras 308-318 may be configured to scan for and recognize users showing the gesture indicative of a navigation request. For instance, users using the same gesture are recognized and displayed on the user device, for example, the device 100, associated with the user 302. The user 302 selects the intended user, for example, the user 324, and the VR cameras 308-318 determine the location associated with the user 324. The VR cameras 308-318 provide details for navigation to the user 302 and/or the user 324 to locate each other in the shopping mall 300. For instance, the instructions may be provided to the user 302 on an associated user device, such as the device 100, in a graphical representation. Alternatively, the instructions may be provided in a textual form as turn-by-turn instructions. For example, the apparatus 200 provides the user 302 the distance and direction to be walked to reach the user 324.
[0057] In this example representation, the user 302 may provide a speech signal to the apparatus 200 comprising the user interface 206 and the VR cameras 308-318 to locate the user 322. For instance, the user 302 may be trying to locate the user 322 named 'XYZ'. The user 302 may speak into the microphone of the user device associated with the user 302, for example, the microphone 114 of the device 100. In this example representation, the apparatus 200 includes the user interface 206 in the form of a speech recognition unit configured to detect the speech signal associated with the user 302. In an example embodiment, the speech recognition unit may be embodied in each of the VR cameras 308-318. In this example, the VR camera 308 may be configured to use sound source localization and determine a position of the user 302 providing the speech signal relative to a public address system 328. In this example representation, the VR cameras 308-318 are configured to communicate with the public address system 328. The speech recognition unit of the VR camera 308 may be configured to decode the speech signal and provide decoded speech data. The decoded speech data may be provided to the public address system 328 configured to broadcast the decoded speech data. For example, the public address system 328 broadcasts that the user 302, located at the determined position relative to the public address system 328, is trying to locate the user 322 named 'XYZ'.
[0058] FIGURE 4 illustrates an example schematic representation of camera modules of a VR camera 400 with overlapping fields of view in an indoor space, in accordance with an example embodiment. The VR camera 400 may be an example of the VR camera 208 or any of the VR cameras 308-318. In an example embodiment, the VR camera 400 comprises a plurality of camera modules 402, 404, 406, 408 covering a 360 degree view of a surrounding space. An example of the camera module 402 may be any of the camera modules 210-216 of the VR camera 208. For instance, the plurality of camera modules 402, 404, 406, 408 have overlapping fields of view enabling a full 360 degree view of the surroundings of the VR camera 400. It should be noted that the camera modules 402, 404, 406, 408 are shown to facilitate description of some example embodiments only, and such VR cameras may comprise fewer or more camera modules than shown in FIGURE 4. In an example embodiment, a stereoscopic depth of a scene may be computed from the camera modules 402, 404, 406, 408 of the VR camera 400. The stereoscopic depth may be used to determine location data associated with a location and/or a person in the indoor space. FIGURE 4 shows local co-ordinates 412, 414, 416 (x1-axis, y1-axis, z1-axis) associated with the camera module 402 and local co-ordinates 418, 420, 422 (x2-axis, y2-axis, z2-axis) associated with the camera module 404.
[0059] In an example embodiment, the processor 202 is configured, along with the content of the memory 204, to compute a registration matrix for transformation of the local co-ordinates corresponding to the plurality of camera modules 402, 404, 406, 408 to a reference co-ordinate system. For instance, the plurality of camera modules 402, 404, 406, 408 may be fixed rigidly with respect to each other and are configured to have different fields of view corresponding to the local co-ordinate systems of the VR camera 400. The surrounding space of the VR camera 400 may be mapped to the reference co-ordinate system. For example, the local co-ordinates 412, 414, 416 associated with the camera module 402 and the local co-ordinates 418, 420, 422 associated with the camera module 404 are different. In an example embodiment, the reference co-ordinate system may be defined by taking the local co-ordinate system associated with one of the camera modules, say the local co-ordinate system 412, 414, 416 (x1-axis, y1-axis, z1-axis) associated with the camera module 402, as the reference co-ordinate system for the VR camera 400. In an example embodiment, the registration matrix may be computed based on the relative positions of the plurality of camera modules 402, 404, 406, 408.
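The registration step can be sketched as applying a 4x4 homogeneous rigid transform that maps a point measured in one module's local co-ordinates into the reference co-ordinate system of another module. The rotation angle and offset used here are assumed values chosen for the example, not calibration data from the embodiment.

```python
import numpy as np

def make_registration(yaw_deg, translation):
    """Build a 4x4 rigid transform for a module rotated about the vertical (y) axis
    and offset by the given translation, expressed in the reference frame."""
    t = np.radians(yaw_deg)
    reg = np.eye(4)
    reg[:3, :3] = [[np.cos(t), 0.0, np.sin(t)],
                   [0.0,       1.0, 0.0      ],
                   [-np.sin(t), 0.0, np.cos(t)]]
    reg[:3, 3] = translation
    return reg

def to_reference(point_local, registration):
    """Apply the registration matrix to a local 3-D point."""
    p = np.append(np.asarray(point_local, dtype=float), 1.0)
    return (registration @ p)[:3]

# Assume module 404 faces 90 degrees away from module 402 and sits 5 cm to its right.
REG_404_TO_402 = make_registration(90.0, (0.05, 0.0, 0.0))
print(to_reference((0.0, 0.0, 2.0), REG_404_TO_402))  # a point 2 m ahead of module 404
```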
[0060] FIGURE 5 illustrates an example representation of a view from a camera module of a VR camera (such as the VR camera 400) for localization and navigation of a user in an indoor space 500, in accordance with an example embodiment. In an example embodiment, localization may be performed by transforming local co-ordinates associated with objects, such as a store 502, to a reference co-ordinate system. For example, local co-ordinates associated with the store 502 are -250 pixels in the X-axis (see, X1), -50 pixels in the Y-axis (see, Y1) and a disparity of d1 pixels in the Z-axis direction, and local co-ordinates associated with a lift 504 are 200 pixels in the X-axis, -50 pixels in the Y-axis and a disparity of d2 pixels in the Z-axis direction. In an example embodiment, disparity in a VR camera may be converted to depth based on the expressions (1), (2) and (3) provided with reference to FIGURE 2. [0061] FIGURE 6 illustrates an example representation of calibrating a VR camera, for example, the VR camera 400, for localization and navigation of a user 602 in an indoor space 600. In an example embodiment, the VR camera may be configured to identify landmarks in the indoor space 600. For example, the VR camera 400 locates landmarks such as a main entrance 604, a first lift entrance 606, a second lift entrance 608, an 'ABC' store 610 and an 'XYZ' store 612. The VR camera, such as the VR camera 400, determines co-ordinates associated with locations of the landmarks 604, 606, 608, 610, 612 and stores data associated with the locations in a location database. An example of the location database is the memory 204. In an example embodiment, the processor 202 is configured, along with the content of the memory 204 and the VR cameras 208 (e.g., the VR camera 400), to determine valid paths 614, 616, 618, 620 between the landmarks 604, 606, 608, 610, and 612. For instance, the path 614 is determined between the 'XYZ' store 612 and the second lift entrance 608.
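The expressions (1), (2) and (3) are described with reference to FIGURE 2 and are not reproduced here; as an illustration only, the sketch below uses the standard pinhole stereo relation, depth = focal length x baseline / disparity, to convert the pixel co-ordinates and disparity values above into module-local 3-D co-ordinates. The focal length, baseline and the assumed value of d1 are illustrative assumptions.

```python
def disparity_to_depth(disparity_pixels, focal_length_pixels, baseline_metres):
    """Convert a stereo disparity into depth using the standard pinhole
    relation depth = focal_length * baseline / disparity."""
    if disparity_pixels <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_length_pixels * baseline_metres / disparity_pixels


def local_coordinates(x_pixels, y_pixels, disparity_pixels,
                      focal_length_pixels, baseline_metres):
    """Back-project image co-ordinates and disparity into module-local
    3-D co-ordinates (X, Y, Z) in metres."""
    depth = disparity_to_depth(disparity_pixels, focal_length_pixels,
                               baseline_metres)
    x_metres = x_pixels * depth / focal_length_pixels
    y_metres = y_pixels * depth / focal_length_pixels
    return x_metres, y_metres, depth


if __name__ == "__main__":
    # Illustrative values: the store at (-250, -50) pixels with disparity d1,
    # here assumed to be 40 pixels, seen by a module with a 700-pixel focal
    # length and a 10 cm baseline.
    print(local_coordinates(-250, -50, 40, focal_length_pixels=700,
                            baseline_metres=0.10))
```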
[0062] In an example embodiment, the landmarks 604, 606, 608, 610, 612 and the valid paths 614, 616, 618, 620 may be used to construct a bi-directional graph for navigating the user 602 within the indoor space. For example, the landmarks 604, 606, 608, 610, 612 form nodes of the bi-directional graph and the valid paths 614, 616, 618, 620 form the edges of the bi-directional graph. In an example embodiment, the VR camera 400 may map the user 602 to the landmark closest to the user 602. For instance, the landmark closest to the user 602 is the first lift entrance 606 and navigation instructions to the user 602 may be provided from the first lift entrance 606. In an example embodiment, a shortest path algorithm may be used to navigate the user 602 to a target location. For instance, if the user 602 on the first floor intends to locate the 'ABC' store 610 in the indoor space 600, the VR camera in the indoor space 600 is configured to determine the landmark closest to the user 602, and accordingly maps the user 602 to the second lift entrance 608. The processor linked with the VR camera computes the shortest path for the user 602 and provides instructions to traverse two edges 616, 618 (valid paths) to reach the 'ABC' store 610. As illustrated in FIGURE 6, the navigation instructions may include a first instruction to move (see, arrow 622) down a lift to the first lift entrance 606 and a second instruction to walk (see, arrow 624) diagonally 100 meters to locate the 'ABC' store 610. It is noted that throughout the movement of the user 602 from a starting location to the target location (i.e., the 'ABC' store 610), the user 602 remains in the FOV of at least one VR camera, and is continuously tracked.
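The disclosure does not name a particular shortest path algorithm; as an illustration only, the Python sketch below uses Dijkstra's algorithm over a landmark graph shaped like the example above. The landmark names and edge lengths are illustrative assumptions.

```python
import heapq


def shortest_path(graph, start, target):
    """Dijkstra's algorithm over a bi-directional landmark graph.

    graph : dict mapping a landmark to {neighbour: distance_in_metres}.
    Returns (total_distance, ordered list of landmarks to traverse).
    """
    queue = [(0.0, start, [start])]
    visited = set()
    while queue:
        distance, node, path = heapq.heappop(queue)
        if node == target:
            return distance, path
        if node in visited:
            continue
        visited.add(node)
        for neighbour, edge in graph.get(node, {}).items():
            if neighbour not in visited:
                heapq.heappush(queue, (distance + edge, neighbour, path + [neighbour]))
    return float("inf"), []


if __name__ == "__main__":
    # Edge lengths are illustrative assumptions, not values from the disclosure.
    landmarks = {
        "second lift entrance": {"XYZ store": 60.0, "first lift entrance": 20.0},
        "first lift entrance": {"second lift entrance": 20.0, "ABC store": 100.0},
        "XYZ store": {"second lift entrance": 60.0},
        "ABC store": {"first lift entrance": 100.0},
    }
    print(shortest_path(landmarks, "second lift entrance", "ABC store"))
```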
[0063] FIGURE 7 illustrates an example representation of a VR camera 700 with overlapping fields of view of a scene 702 within an indoor space, in accordance with an example embodiment. In an example embodiment, the VR camera 700 may be configured to determine depth of objects, such as the depth associated with a user 704 in the scene 702, due to disparity between camera modules 706, 708 of the VR camera 700. In this example representation, the FOV of the camera module 706 is shown as between dashed lines 712 and 714, and the FOV of the camera module 708 is shown as between dashed lines 716 and 718. The VR camera 700 may be configured to receive a first image frame (I1) and a second image frame (I2) of the scene 702 comprising the user 704 from the camera modules 706 and 708, respectively. Herein, the first image I1 and the second image I2 represent different view images of the user 704 in the scene 702. In an example embodiment, the first image I1 and the second image I2 may also be a stereoscopic pair of images of the scene 702. In an example embodiment, the processor (e.g., the processor 202) electrically coupled with or embodied within the VR camera 700 is configured to determine a depth associated with the user 704 in the scene 702 based on the first image I1 and the second image I2. In an example embodiment, the depth associated with the user 704 may be used to determine the location of the user 704 in the indoor space. For example, local co-ordinates corresponding to the location of the user 704 may be computed based on the depth.
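As an illustration only, the sketch below shows one possible way to obtain a per-pixel depth map from a stereoscopic pair such as I1 and I2, using OpenCV's block-matching stereo correspondence and the pinhole relation above. A rectified greyscale pair is assumed; the file names, focal length and baseline are hypothetical.

```python
import cv2
import numpy as np


def depth_from_stereo_pair(image_1, image_2, focal_length_pixels, baseline_metres):
    """Estimate a per-pixel depth map from two view images of the same scene.

    image_1, image_2 : greyscale uint8 frames from two camera modules with
                       overlapping fields of view (a rectified pair is assumed).
    """
    matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    # StereoBM returns fixed-point disparities scaled by 16.
    disparity = matcher.compute(image_1, image_2).astype(np.float32) / 16.0

    depth = np.full(disparity.shape, np.inf, dtype=np.float32)
    valid = disparity > 0
    depth[valid] = focal_length_pixels * baseline_metres / disparity[valid]
    return depth


if __name__ == "__main__":
    # Hypothetical file names standing in for the frames I1 and I2.
    left = cv2.imread("frame_I1.png", cv2.IMREAD_GRAYSCALE)
    right = cv2.imread("frame_I2.png", cv2.IMREAD_GRAYSCALE)
    if left is None or right is None:
        raise FileNotFoundError("stereo frames not found")
    depth_map = depth_from_stereo_pair(left, right, focal_length_pixels=700,
                                       baseline_metres=0.10)
    print("median depth in metres:",
          float(np.median(depth_map[np.isfinite(depth_map)])))
```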
[0064] In an example embodiment, the VR camera 700 may be configured to enable communication between user devices associated with users, such as the user 704 and a user 710. For example, the user 704 may share media data with the user 710. Examples of media data may include, but are not limited to, text data, image data, video data, voice data or any combination of the above. In an example embodiment, the VR camera 700 may be configured to recognize a user, for example, the user 704. For instance, the user 704 may register with the apparatus (e.g., the apparatus 200) comprising the VR camera 700. The VR camera 700 is configured to extract features associated with the user 704 and store the extracted features in a database such as the memory 204. In an example, if the user 710 intends to locate the user 704, the user 710 sends a request to the apparatus 200 comprising the VR camera 700. The VR camera 700 may scan the scene 702 and identify the user 704 based on the extracted features stored in the database. Further, the VR camera 700 may determine the location associated with the user 704 based on the stereoscopic depth obtained from the camera modules 706, 708, and thereafter details for navigation may be provided to the user device of the user 710 for reaching the user 704.
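The disclosure does not specify the type of features used for recognition; as an illustration only, the sketch below matches a feature vector extracted from the scene against stored feature vectors by cosine similarity, which is one common choice. The vector length, threshold and synthetic data are assumptions.

```python
import numpy as np


def identify_user(query_features, feature_database, threshold=0.8):
    """Match a feature vector extracted from the scene against registered users.

    feature_database : dict mapping a user identifier to the feature vector
                       stored at registration time.
    Returns the best-matching user identifier, or None if no match clears
    the similarity threshold.
    """
    best_user, best_score = None, threshold
    query = query_features / np.linalg.norm(query_features)
    for user_id, stored in feature_database.items():
        stored = stored / np.linalg.norm(stored)
        score = float(np.dot(query, stored))  # cosine similarity
        if score > best_score:
            best_user, best_score = user_id, score
    return best_user


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    database = {"user_704": rng.normal(size=128), "user_710": rng.normal(size=128)}
    observed = database["user_704"] + 0.05 * rng.normal(size=128)  # noisy re-detection
    print(identify_user(observed, database))
```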
[0065] In an example embodiment, the VR camera 700 may be configured to connect users using a gesture. For instance, the user 704 may raise a hand as a gesture within the FOV of the VR camera 700 so as to be identified by the user 710. The VR camera 700 may be configured to provide the user 710 a list of users with raised hands and the locations corresponding to the users. The user 710 may identify the user 704 from the list of users with raised hands and select the user 704 in the navigation application installed or otherwise accessible in the user device of the user 710. Thereafter, the location corresponding to the user 704 may be used to navigate the user 710 to the user 704.
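The disclosure does not specify how the raised-hand gesture is detected; one simple possibility, sketched below under the assumption that a pose estimator provides keypoints per tracked user, is to report users whose wrist appears above the corresponding shoulder in image co-ordinates. All names and values are illustrative.

```python
def users_with_raised_hand(tracked_users):
    """Return identifiers of tracked users whose wrist appears above the
    shoulder in image co-ordinates (y grows downwards), a simple proxy for
    a raised-hand gesture.

    tracked_users : dict mapping a user identifier to a dict of keypoints,
                    each keypoint being an (x, y) pixel position.
    """
    raised = []
    for user_id, keypoints in tracked_users.items():
        wrist = keypoints.get("right_wrist")
        shoulder = keypoints.get("right_shoulder")
        if wrist and shoulder and wrist[1] < shoulder[1]:
            raised.append(user_id)
    return raised


if __name__ == "__main__":
    # Keypoint values are illustrative; in practice they would come from a
    # pose estimator running on frames captured by the VR camera.
    users = {
        "user_704": {"right_wrist": (420, 180), "right_shoulder": (410, 300)},
        "user_705": {"right_wrist": (150, 360), "right_shoulder": (160, 310)},
    }
    print(users_with_raised_hand(users))  # ['user_704']
```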
[0066] FIGURE 8 illustrates an example representation of navigating a user based on a data signal corresponding to a unique code in an indoor space 800, in accordance with an example embodiment. In an example embodiment, a VR camera 802 may be configured to enable communication between a user 804 carrying a user device 806 and a user 808 carrying a user device 810 based on data signals generated from the user devices 806 and 810. For example, a first data signal may be generated by the user device 806 associated with the user 804 and a second data signal may be generated by the user device 810 associated with the user 808. The user 804 may generate the first data signal corresponding to a first unique code from the user device 806 and the user 808 may generate the second data signal corresponding to a second unique code from the user device 810.
[0067] In an example embodiment, the first data signal and the second data signal may be strobe signals generated by a flashlight of the user device 806 and the user device 810, respectively. For instance, the first data signal may be generated by the user device 806 by modulating the flashlight of a camera, such as the camera module 122, at high frequencies using an application installed on or accessible to the user device 806. Hence, the strobing of the flashlight is based on the unique code associated with the user 804. In an example embodiment, as the VR camera 802 and/or any other VR camera in the indoor space 800 receives data signals corresponding to the same unique code from two different user devices and decodes the unique code, it may be determined that the users are trying to locate each other. As the navigation apparatus (e.g., the apparatus 200) determines that the users associated with the two different user devices are trying to locate each other, the VR camera 802 and/or any other VR camera in the indoor space 800 determines a first location associated with the user 804 based on the first data signal and a second location associated with the user 808 based on the second data signal. For instance, the VR camera 802 may be configured to decode the first data signal and determine that the user 804 is associated with the first location. Similarly, the second data signal may be used to determine that the user 808 is associated with the second location, either by the VR camera 802 if the user 808 is in the FOV of the VR camera 802, or by any other VR camera in whose FOV the user 808 is currently located. In an example embodiment, the apparatus 200 incorporating the VR camera 802 along with other VR cameras is configured to navigate the users 804, 808 to a common meeting point. The VR camera 802 may provide the common meeting point based on the first location and the second location associated with the users 804, 808, respectively. The VR camera 802 may also provide navigation instructions to locate the common meeting point for the users 804, 808. Alternatively, the VR camera 802 may provide instructions to the user 804 to locate the user 808. In an example embodiment, the instructions may be in textual form, graphical form or voice data providing turn-by-turn instructions to the user. In another example embodiment, the VR camera 802 may have an attached light source that can in turn be pulsed to send out the code transmitted by one individual; that code can be read by the user device of another individual who is looking for the target individual, thereby creating a communication network. For example, the VR camera 802 may be configured to generate a light pulse corresponding to a data signal received from a user, say the user 804. The generated light pulse may be received by a user, say the user 808, trying to locate the user 804. Accordingly, the VR camera 802 along with the processor 202 and the memory 204 is configured to navigate and provide instructions to the user 808 to locate the user 804.
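The disclosure does not specify how the unique code is modulated onto the strobe; as an illustration only, the sketch below uses simple on-off keying with a fixed preamble so a camera observing the flash states can recover the code. The preamble bits and 16-bit code width are assumptions made for the sketch.

```python
PREAMBLE = [1, 0, 1, 0, 1, 1]  # illustrative frame marker, not from the disclosure


def encode_code_as_strobe(unique_code, bits=16):
    """Turn a numeric unique code into an on/off flash pattern (on-off keying)."""
    payload = [(unique_code >> i) & 1 for i in reversed(range(bits))]
    return PREAMBLE + payload


def decode_strobe(flash_samples, bits=16):
    """Recover the unique code from a stream of observed on/off flash states.

    flash_samples : list of 0/1 values, one per strobe period, as seen by a
                    VR camera watching the user device's flashlight.
    Returns the decoded code, or None if no preamble is found.
    """
    for start in range(len(flash_samples) - len(PREAMBLE) - bits + 1):
        if flash_samples[start:start + len(PREAMBLE)] == PREAMBLE:
            payload = flash_samples[start + len(PREAMBLE):
                                    start + len(PREAMBLE) + bits]
            code = 0
            for bit in payload:
                code = (code << 1) | bit
            return code
    return None


if __name__ == "__main__":
    pattern = encode_code_as_strobe(0xA5C3)
    observed = [0, 0] + pattern + [0]     # pattern seen mid-stream by the camera
    print(hex(decode_strobe(observed)))   # 0xa5c3
```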
[0068] FIGURE 9 illustrates an example representation of navigating a user 902 in an indoor space 900, in accordance with an example embodiment. In an example embodiment, a VR camera 904 may be configured to navigate the user 902 to locate a user 906 in the indoor space 900. For instance, the user 902 may be on the first floor and the user 906 may be on the ground floor of the indoor space 900. The VR camera 904 provides precise turn-by-turn directions from one point to another for the user 902 to locate the user 906. In an example embodiment, the VR camera 904 may be configured to generate a 3-D map corresponding to the indoor space 900. For instance, the VR camera 904 comprises one or more camera modules with overlapping FOVs. The one or more camera modules of the VR camera 904 provide one or more stereoscopic images associated with the indoor space 900. The VR camera 904 may be configured to generate the 3-D map of the indoor space 900 based on the one or more stereoscopic images obtained from the one or more camera modules of the VR camera 904. [0069] In an example embodiment, the VR camera 904 may be configured to determine locations associated with the users 902, 906 in the indoor space 900. For instance, the VR camera 904 may be configured to compute the stereoscopic depth associated with the user 902 based on the 3-D map obtained from the stereoscopic images of the VR camera 904. The stereoscopic depth may be used to determine the location associated with the user 902. Similarly, the VR camera 904 (if the user 906 is in the FOV of the VR camera 904) or another VR camera determines a location associated with the user 906 based on the 3-D map generated by the VR camera 904 or any other VR camera. In an example embodiment, the VR camera 904 may be configured to provide navigation instructions to the user 902 to locate the user 906 based on the 3-D map obtained from the VR camera 904. The VR camera 904 computes a shortest route for the user 902 to locate the user 906 based on the 3-D map generated by the VR camera 904. For example, the VR camera 904 provides a first instruction 912 to 'walk 100 meters to the closest lift (shown as a first lift entrance 908)', a second instruction 914 to 'go down to the ground floor' using the lift (shown as a second lift entrance 910) and a third instruction 916 to 'walk south for 50 meters' to locate the user 906.
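As an illustration only, the sketch below generates simple textual turn-by-turn instructions from a path of waypoints with positions taken from a 3-D map. The waypoint co-ordinates are illustrative assumptions chosen to mirror the example above, not values from the disclosure.

```python
import math


def turn_by_turn(path, positions):
    """Generate simple textual instructions for a path through the 3-D map.

    path      : ordered list of waypoint names to traverse.
    positions : dict mapping a waypoint name to its (x, y, floor) position,
                with x and y in metres.
    """
    instructions = []
    for current, nxt in zip(path, path[1:]):
        x1, y1, floor1 = positions[current]
        x2, y2, floor2 = positions[nxt]
        if floor2 != floor1:
            word = "down" if floor2 < floor1 else "up"
            instructions.append(f"take the lift {word} to floor {floor2} ({nxt})")
        else:
            distance = math.hypot(x2 - x1, y2 - y1)
            instructions.append(f"walk {distance:.0f} metres to {nxt}")
    return instructions


if __name__ == "__main__":
    # Positions are illustrative assumptions chosen to mirror the example above.
    positions = {
        "user 902": (0.0, 0.0, 1),
        "first lift entrance 908": (100.0, 0.0, 1),
        "second lift entrance 910": (100.0, 0.0, 0),
        "user 906": (100.0, -50.0, 0),
    }
    path = ["user 902", "first lift entrance 908",
            "second lift entrance 910", "user 906"]
    for step in turn_by_turn(path, positions):
        print(step)
```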
[0070] FIGURE 10 is a flowchart depicting an example method 1000 for navigating a user in an indoor space, in accordance with an example embodiment. The method 1000 is shown and explained with reference to FIGURE 2. The method 1000 depicted in the flowchart may be executed by, for example, the apparatus 200 of FIGURE 2. At operation 1002, the method 1000 includes facilitating receipt of a navigation request by a user to reach a target location from a current location in an indoor space. In an example embodiment, the navigation request may be received from a user device associated with the user. The target location associated with the navigation request may be a point of interest or another user. The navigation request may be received from the user device, for example, the device 100 associated with the user. In an example embodiment, the user may use a navigation application specific to the indoor space, installed on or otherwise accessible to the user device, to generate the navigation request. Alternatively, the navigation request may be received from external sources accessible to the apparatus 200.
[0071] In an example, the operation 1002 of facilitating receipt of the navigation request comprises the operations 1004 and 1006, which may be performed individually or jointly. At operation 1004, the method 1000 includes facilitating receipt of a first input corresponding to a current location of a user in an indoor space. In an example, the first input may be at least one of a user image, a data signal or a first text input to determine the current location of the user. At operation 1006, the method 1000 includes facilitating receipt of a second input corresponding to a target location within the indoor space. In an example, the second input comprises at least one of a second text input, a target image input and a speech input.
[0072] At operation 1008, the method 1000 includes continuously tracking the user in a three-dimensional space by one or more VR cameras installed within the indoor space, based on depth information associated with the user. In an example embodiment, the continuous tracking comprises tracking the user by at least one VR camera of the one or more VR cameras at each time instant. For instance, the one or more VR cameras may be configured to communicate with each other and share data associated with the user. For example, the one or more VR cameras may share the current location associated with the user. Each of the VR cameras includes a plurality of camera modules with overlapping fields of view. The plurality of camera modules may be used to generate a plurality of stereoscopic images associated with the indoor space. The one or more VR cameras may be configured to combine the plurality of stereoscopic images to generate a panoramic image covering the indoor space. In an example embodiment, a VR camera may be configured to determine the current location associated with the user based on the stereoscopic images obtained from the plurality of camera modules. For instance, the stereoscopic images may be used to generate a 3-D representation of the indoor space and determine the depth information associated with the user. The depth information may be used to determine the current location associated with the user in the 3-D space. In an example, if the user moves from the FOV of one VR camera to the FOV of another VR camera, the user is continuously tracked by the network constituted by the one or more VR cameras. The current location and the target location associated with the user are determined as described with reference to FIGURE 2 and particularly with reference to expressions (1), (2) and (3). Some examples of the continuous tracking of the user are described with reference to FIGURES 4 and 5.
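As an illustration only, the sketch below shows one simple way a tracking camera could be assigned at each time instant as the user moves between overlapping fields of view. Real deployments would use each camera module's actual projected FOV; the axis-aligned coverage rectangles and positions here are illustrative assumptions.

```python
def camera_for_location(location, cameras):
    """Pick a VR camera whose field of view currently contains the user.

    location : (x, y) position of the user in the indoor space's reference
               co-ordinate system, in metres.
    cameras  : dict mapping a camera identifier to an axis-aligned coverage
               rectangle (x_min, y_min, x_max, y_max).
    Returns the identifier of one covering camera, or None if the user is
    outside every field of view (which the deployment should avoid).
    """
    x, y = location
    for camera_id, (x_min, y_min, x_max, y_max) in cameras.items():
        if x_min <= x <= x_max and y_min <= y <= y_max:
            return camera_id
    return None


def track(positions, cameras):
    """Assign a tracking camera at each time instant as the user moves."""
    return [(pos, camera_for_location(pos, cameras)) for pos in positions]


if __name__ == "__main__":
    # Overlapping coverage rectangles are illustrative assumptions.
    coverage = {
        "VR camera 308": (0.0, 0.0, 60.0, 40.0),
        "VR camera 310": (50.0, 0.0, 110.0, 40.0),
    }
    walk = [(10.0, 20.0), (55.0, 20.0), (90.0, 20.0)]
    for position, camera in track(walk, coverage):
        print(position, "tracked by", camera)
```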
[0073] At operation 1010, the method 1000 includes providing details for navigation on a user device associated with the user based on the continuous tracking of the user and the target location. It should be noted that the details for navigation may be provided in various forms on the user device, for example, graphical instructions, textual instructions or voice instructions providing turn-by-turn instructions to locate the target location or another user. The navigation of the user to the target location is explained with reference to FIGURE 2.
[0074] FIGURE 11 is a flowchart depicting an example method 1100 for navigating a user in an indoor space, in accordance with another example embodiment. The method 1100 is shown and explained with reference to FIGURE 2. The method 1100 depicted in the flowchart may be executed by, for example, the apparatus 200 of FIGURE 2. [0075] At operation 1102, the method 1100 includes calibrating the one or more VR cameras to provide details for navigation on a user device associated with a user in an indoor space. For instance, calibration of the one or more VR cameras includes identifying landmarks and valid paths in the indoor space. The user may be mapped to a closest landmark for navigation. An example of calibration is also described with reference to FIGURE 6.
[0076] At operation 1104, the method 1100 includes registration of the user with the apparatus, such as the apparatus 200. An example of the registration of the user with the apparatus 200 is described with reference to FIGURE 2. In an example embodiment, the operation 1104 of registration is performed using the operations 1106, 1108 and 1110. At operation 1106, the method 1100 includes receiving an image of the user. For example, the user provides the image corresponding to the user in the current location, using a user device such as the device 100, to the apparatus 200. At operation 1108, the method 1100 includes extracting and storing features from the image corresponding to the user for continuous tracking of the user. The extracted features may be stored in a memory, such as the memory 204 of the apparatus 200. The features may be used to identify and continuously track the user in the indoor space. At operation 1110, the method 1100 includes assigning a unique code associated with the user, and providing the unique code to the user device associated with the user. In an example embodiment, the unique code may be used to determine the current location associated with the user as described with reference to FIGURE 8.
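As an illustration only, the sketch below shows one way operations 1106-1110 could be realized: an image is passed through an assumed feature extractor, the features are stored, and a unique code is assigned and returned to the user device. The class name, code width and stand-in extractor are hypothetical.

```python
import secrets


class RegistrationStore:
    """Minimal sketch of the registration described in operations 1106-1110."""

    def __init__(self, feature_extractor):
        # feature_extractor is assumed to turn an image into a feature vector,
        # for example an embedding produced by a face-recognition model.
        self._extract = feature_extractor
        self._features = {}     # user identifier -> stored feature vector
        self._codes = {}        # user identifier -> unique code

    def register(self, user_id, image):
        """Extract and store features for the user and assign a unique code."""
        self._features[user_id] = self._extract(image)
        self._codes[user_id] = secrets.randbits(16)   # 16-bit code, illustrative
        return self._codes[user_id]

    def is_registered(self, user_id):
        return user_id in self._features


if __name__ == "__main__":
    # A stand-in extractor: a real system would compute appearance features
    # from the user image captured at the current location.
    store = RegistrationStore(feature_extractor=lambda image: [sum(image)])
    code = store.register("user_602", image=[10, 20, 30])
    print("assigned code:", code, "| registered:", store.is_registered("user_602"))
```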
[0077] At operation 1112, the method 1100 includes receiving a navigation request by the user to reach the target location from the current location in the indoor space. In an example embodiment, the navigation request may be received from the user device associated with the user. The target location associated with the navigation request may be a location or another user. Alternatively, the navigation request may be received from external sources accessible to the apparatus 200. In an example, the operation 1112 of receiving the navigation request comprises the operations 1114 and 1116, which may be performed individually or jointly. At operation 1114, the method 1100 includes receiving a first input corresponding to the current location of the user in the indoor space. In an example embodiment, the first input may be associated with the current location of the user. In an example, the first input may be at least one of a first text input, an image input, a data signal or a first speech input. At operation 1116, the method 1100 includes receiving a second input corresponding to the target location within the indoor space. In an example, the second input comprises at least one of a second text input, a target image input and a speech input. Some example embodiments of receiving the first input and the second input are described with reference to FIGURE 2. [0078] At operation 1118, the method 1100 includes checking if the user is registered with the apparatus 200. If the user is registered with the apparatus 200, the method 1100 proceeds to operation 1120; otherwise, operation 1104 is performed.
[0079] At operation 1120, the method 1100 includes continuously tracking the user in a three-dimensional space by one or more VR cameras installed within the indoor space based on depth information associated with the user. In an example embodiment, the continuous tracking comprises tracking the user by at least one camera of the one or more VR cameras at each time instant. The continuous tracking of the user by the one or more VR cameras is described with reference to FIGURE 2.
[0080] At operation 1122, the method 1100 includes providing details for navigation on the user device associated with the user based on the continuous tracking of the user and the target location. The details for navigation may be provided in various forms on a user device, for example, graphical instructions, textual instructions or voice instructions providing turn-by-turn instructions to locate the target location or another user. Some examples of navigating the user are described with reference to FIGURES 6, 7, and 8. [0081] It should be noted that to facilitate discussions of the flowcharts of FIGURES
10 and 11, certain operations are described herein as constituting distinct steps performed in a certain order. Such implementations are examples only and non-limiting in scope. Certain operations may be grouped together and performed in a single operation, and certain operations can be performed in an order that differs from the order employed in the examples set forth herein. Moreover, certain operations of the methods 1000 and 1100 are performed in an automated fashion. These operations involve substantially no interaction with the user. Other operations of the methods 1000 and 1100 may be performed in a manual or semi-automatic fashion. These operations involve interaction with the user via one or more user interface presentations.
[0082] The methods depicted in these flowcharts may be executed by, for example, the apparatus 200 of FIGURE 2. Operations of the flowcharts, and combinations of operations in the flowcharts, may be implemented by various means, such as hardware, firmware, processor, circuitry and/or other device associated with execution of software including one or more computer program instructions. For example, one or more of the procedures described in various embodiments may be embodied by computer program instructions. In an example embodiment, the computer program instructions, which embody the procedures described in various embodiments, may be stored by at least one memory device of an apparatus and executed by at least one processor in the apparatus. Any such computer program instructions may be loaded onto a computer or other programmable apparatus (for example, hardware) to produce a machine, such that the resulting computer or other programmable apparatus embodies means for implementing the operations specified in the flowchart. These computer program instructions may also be stored in a computer-readable storage memory (as opposed to a transmission medium such as a carrier wave or electromagnetic signal) that may direct a computer or other programmable apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture, the execution of which implements the operations specified in the flowchart. The computer program instructions may also be loaded onto a computer or other programmable apparatus to cause a series of operations to be performed on the computer or other programmable apparatus to produce a computer-implemented process such that the instructions, which execute on the computer or other programmable apparatus, provide operations for implementing the operations in the flowchart. The operations of the methods are described with the help of the apparatus 200. However, the operations of the methods can be described and/or practiced by using any other apparatus.
[0083] Without in any way limiting the scope, interpretation, or application of the claims appearing below, a technical effect of one or more of the example embodiments disclosed herein is to provide details for navigation to a user in an indoor space from a current location to a target location. Various example embodiments provide for accurate and reliable navigation of the user in the indoor space by one or more VR cameras having a 360 degree view of the indoor space, thereby reducing complexity in locating the target location or people in the indoor space. Various example embodiments provide for localizing the user and determining the current location of the user based on depth information computed by the one or more VR cameras. The user requires only a user device, such as the device 100, to be localized, thereby also reducing complexity as compared to standard techniques. As the continuous tracking of the user is performed by the one or more VR cameras, there are significant savings in memory and power on the user device, for example, the device 100, unlike GPS-based approaches that run on the user device, consume more power, and are unreliable in the indoor space. Further, various example embodiments continuously track the user by visual tracking performed by the one or more VR cameras. [0084] Various embodiments described above may be implemented in software, hardware, application logic or a combination of software, hardware and application logic. The software, application logic and/or hardware may reside on at least one memory, at least one processor, an apparatus or a computer program product. In an example embodiment, the application logic, software or an instruction set is maintained on any one of various conventional computer-readable media. In the context of this document, a "computer-readable medium" may be any media or means that can contain, store, communicate, propagate or transport the instructions for use by or in connection with an instruction execution system, apparatus, or device, such as a computer, with one example of an apparatus described and depicted in FIGURES 1 and/or 2. A computer-readable medium may comprise a computer-readable storage medium that may be any media or means that can contain or store the instructions for use by or in connection with an instruction execution system, apparatus, or device, such as a computer.
[0085] If desired, the different functions discussed herein may be performed in a different order and/or concurrently with each other. Furthermore, if desired, one or more of the above-described functions may be optional or may be combined. [0086] Although various aspects of the embodiments are set out in the independent claims, other aspects comprise other combinations of features from the described embodiments and/or the dependent claims with the features of the independent claims, and not solely the combinations explicitly set out in the claims. [0087] It is also noted herein that while the above describes example embodiments of the invention, these descriptions should not be viewed in a limiting sense. Rather, there are several variations and modifications which may be made without departing from the scope of the present disclosure as defined in the appended claims.