
Wearable electronic device on head

Info

Publication number: US11307416B2
Authority: US (United States)
Prior art keywords: display unit, electronic device, user, unit, image
Legal status: Active, expires (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Application number: US16/495,379
Other versions: US20210333549A1 (en)
Inventors: Seong JO, Seungyong SHIN, Kyoungtae Kim, Changkyu HWANG
Current assignee: LG Electronics Inc (the listed assignees may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
Original assignee: LG Electronics Inc
Application filed by LG Electronics Inc
Assigned to LG ELECTRONICS INC. Assignment of assignors interest (see document for details). Assignors: HWANG, Changkyu; JO, Seong; KIM, Kyoungtae; SHIN, Seungyong
Publication of US20210333549A1
Application granted
Publication of US11307416B2

Abstract

Disclosed is an electronic device according to the present invention. The electronic device according to the present invention may move a display unit so that the position of the display unit corresponds to the pupil position of a user, and nanopatterns of an in-coupling portion, where a beam enters the display unit, and an out-coupling portion, where a beam is emitted, may be the same pattern. The electronic device according to the present invention may be associated with an artificial intelligence module, a robot, an augmented reality (AR) device, a virtual reality (VR) device, a device related to a 5G service, and the like.

Description

CROSS-REFERENCE TO RELATED APPLICATIONS
This application is the National Stage filing under 35 U.S.C. 371 of International Application No. PCT/KR2019/010987, filed on Aug. 28, 2019, the contents of which are hereby incorporated by reference herein in their entirety.
TECHNICAL FIELD
The present invention relates to a wearable electronic device on a head, and more particularly, to an electronic device in which a display unit is capable of being moved according to a pupil position of a user.
BACKGROUND ART
As a device mainly used for virtual reality (VR) or augmented reality (AR), a head-mounted display (HMD) is a digital device that is worn on the head, like glasses or a helmet, and allows multimedia content to be viewed with the naked eye.
Therefore, the HMD generally includes a display module for implementing an image. For example, the display module may include a liquid crystal panel including liquid crystals or an organic light emitting diode (OLED) panel including organic light emitting devices. In addition, in order to enable a user wearing the HMD to visually recognize the image implemented by the display module at a distance close to the eyes, the display module included in the HMD is configured as near-eye display optics.
The near-eye display optics includes a light source and a lens, and the size and volume of the HMD are determined according to the arrangement of the light source and the lens.
Since the inter-pupillary distance (IPD) differs from user to user, in the HMD of the related art the lens included in the display unit has to be as large as possible so that an image can be projected onto every user's pupil, which increases the size, volume, and weight of the entire electronic device, that is, the HMD.
DISCLOSURE
Technical Problem
The present invention has been made to meet the above-mentioned needs and to solve the above problems.
An object of the present invention is to provide an electronic device used in VR, AR, mixed reality (MR), and the like, capable of minimizing the size of a display unit.
In addition, another object of the present invention is to provide an electronic device used in VR, AR, MR, and the like, capable of allowing a display unit to move to correspond to a pupil position of a user.
Technical Solution
An electronic device according to an embodiment of the present invention includes: a display unit including an in-coupling portion having a first nanopattern formed thereon and an out-coupling portion having a second nanopattern formed thereon; a moving unit moving the display unit to a position corresponding to a pupil position of a user; and a control unit recognizing the pupil position of the user and controlling the moving unit so that the position of the display unit corresponds to the pupil position. The first nanopattern and the second nanopattern are the same pattern.
The first nanopattern may include a plurality of first protrusions protruding from one surface of the display unit toward the face of the user, the second nanopattern may include a plurality of second protrusions protruding from the one surface of the display unit toward the face of the user, and the first protrusions and the second protrusions may all be parallel to one another.
A height of the first protrusions protruding from the one surface of the display unit may be greater than a height of the second protrusions protruding from the one surface of the display unit.
A height of the first protrusions protruding from the one surface of the display unit may be smaller than a height of the second protrusions protruding from the one surface of the display unit.
A height of the first protrusions protruding from the one surface of the display unit may be the same as a height of the second protrusions protruding from the one surface of the display unit.
Angles formed by the first protrusions and the second protrusions with respect to the one surface of the display unit may be right angles.
Angles formed by the first protrusions and the second protrusions with respect to the one surface of the display unit may be acute.
Angles formed by the first protrusions and the second protrusions with respect to the one surface of the display unit may be obtuse.
A width of the display unit may be formed to correspond to a diameter of at least one of the pupil, iris and eye of the user.
The electronic device may further include a front frame on which the control unit and the display unit are mounted. The control unit may further include a first control module and a second control module, the first control module and the second control module may be disposed on the front frame to be parallel to eyebrows of the user, and a width of the display unit may be half of a length of the first control module or a length of the second control module.
The moving unit may further include a plate coupled to a portion of the display unit, a spring connected to one end of the plate, and a pressurizing unit pressurizing the other end of the plate.
The moving unit may further include an air pocket accommodating a portion of the display unit, and a pressurizing unit pressurizing one end of the air pocket.
The control unit may further include a pupil recognizing unit recognizing the pupil of the user.
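Taken together, the solution is a closed loop: the pupil recognizing unit reports where the pupil is, and the moving unit translates the display unit until the two coincide. The following is a minimal sketch of that loop, not the patent's implementation; the class and method names, the one-dimensional model, and the 0.5 mm tolerance are all illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class MovingUnit:
    """Hypothetical stand-in for the plate/spring/pressurizing-unit actuator."""
    display_x_mm: float = 0.0  # current horizontal position of the display unit

    def translate(self, delta_mm: float) -> None:
        # In hardware the pressurizing unit would push the plate against the
        # spring; here we simply update the modelled position.
        self.display_x_mm += delta_mm

def align_display(pupil_x_mm: float, unit: MovingUnit, tol_mm: float = 0.5) -> None:
    """One control pass: move the display unit toward the recognized pupil."""
    offset = pupil_x_mm - unit.display_x_mm
    if abs(offset) > tol_mm:  # only move when misalignment is noticeable
        unit.translate(offset)

unit = MovingUnit()
align_display(pupil_x_mm=31.5, unit=unit)  # e.g. half of a 63 mm IPD
print(unit.display_x_mm)  # -> 31.5
```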
Advantageous Effects
Since the electronic device according to the present invention includes a display unit having a narrower width than in the related art, it is possible to minimize the size, volume, and weight of the entire electronic device.
In addition, since the in-coupling portion and the out-coupling portion included in the display unit have an equivalent nanopattern, the electronic device according to the present invention has increased light efficiency.
DESCRIPTION OF DRAWINGS
FIG. 1 is a conceptual diagram illustrating an embodiment of a 5G network environment in which heterogeneous electronic devices are connected to a cloud network.
FIG. 2 is a block diagram illustrating a configuration of an electronic device including a display module according to an embodiment of the present invention.
FIG. 3 is a perspective view of an augmented reality electronic device according to an embodiment of the present invention.
FIG. 4 is an exploded perspective view illustrating a control unit according to the embodiment of the present invention.
FIG. 5 is a diagram illustrating an embodiment of a prism type optical element.
FIG. 6 is a diagram illustrating an embodiment of a waveguide type optical element.
FIGS. 7 and 8 are diagrams illustrating an embodiment of a pin mirror type optical element.
FIG. 9 is a diagram illustrating an embodiment of a surface reflection type optical element.
FIG. 10 is a diagram illustrating an embodiment of a micro-LED type optical element.
FIG. 11 is a diagram illustrating an embodiment of a display unit used for a contact lens.
FIG. 12 is a diagram illustrating an embodiment in which a nanopattern is formed on a display unit of the present invention.
FIG. 13 is a diagram illustrating another embodiment in which a nanopattern is formed on a display unit of the present invention.
FIG. 14 is a diagram illustrating a side of the display unit of the present invention along the XIV line shown in FIG. 13.
FIG. 15 is a diagram illustrating a side of the display unit along the XV line shown in FIG. 14.
FIGS. 16 and 17 are diagrams illustrating the movement of the display unit of the present invention.
FIG. 18 is a diagram illustrating an embodiment of a moving unit of the present invention.
MODE FOR INVENTION
In what follows, embodiments disclosed in this document will be described in detail with reference to appended drawings, where the same or similar constituent elements are given the same reference number irrespective of their drawing symbols, and repeated descriptions thereof will be omitted.
In describing an embodiment disclosed in the present specification, if a constituting element is said to be “connected” or “attached” to another constituting element, it should be understood that the former may be connected or attached directly to the latter, but there may also be a case in which another constituting element is present between the two.
Also, in describing an embodiment disclosed in the present document, if it is determined that a detailed description of a related art incorporated herein would unnecessarily obscure the gist of the embodiment, the detailed description thereof will be omitted. Also, it should be understood that the appended drawings are intended only to help understand the embodiments disclosed in the present document and do not limit the technical principles and scope of the present invention; rather, the appended drawings should be understood to include all modifications, equivalents, or substitutes that fall within the technical principles and scope of the present invention.
[5G Scenario]
The three main requirement areas of the 5G system are (1) the enhanced Mobile Broadband (eMBB) area, (2) the massive Machine Type Communication (mMTC) area, and (3) the Ultra-Reliable and Low Latency Communication (URLLC) area.
Some use cases may require a plurality of areas for optimization, while other use cases may focus on only one Key Performance Indicator (KPI). The 5G system supports these various use cases in a flexible and reliable manner.
eMBB far surpasses basic mobile Internet access and covers rich interactive work as well as media and entertainment applications in the cloud or in augmented reality. Data is one of the core driving forces of the 5G system, and in the 5G era dedicated voice services may disappear for the first time: voice is expected to be handled simply as an application program using the data connection provided by the communication system. The primary causes of increased traffic volume are the growth in content size and in the number of applications requiring high data rates. Streaming services (audio and video), interactive video, and mobile Internet connections will be used more heavily as more devices connect to the Internet. These applications require always-on connectivity to push real-time information and notifications to the user. Cloud-based storage and applications are growing rapidly on mobile communication platforms and apply to both business and entertainment; cloud storage in particular is a use case that drives growth of the uplink data rate. 5G is also used for cloud-based remote work and requires much lower end-to-end latency to maintain a good user experience when a tactile interface is used. Entertainment, for example cloud gaming and video streaming, is another core driver of the demand for mobile broadband capability; it is essential on smartphones and tablets everywhere, including high-mobility environments such as trains, cars, and planes. A further use case is augmented reality for entertainment and information search, which requires very low latency and instantaneous data transfer.
Also, one of the most anticipated 5G use cases concerns connecting embedded sensors seamlessly in every possible area, namely the use case based on mMTC. By 2020, the number of potential IoT devices is expected to reach 20.4 billion. Industrial IoT is one of the key areas where 5G plays a primary role in maintaining infrastructure for smart cities, asset tracking, smart utilities, agriculture, and security.
URLLC includes new services that may transform industry through ultra-reliable, ultra-low-latency links, such as remote control of critical infrastructure and self-driving cars. This level of reliability and latency is essential for smart-grid control, industrial automation, robotics, and drone control and coordination.
Next, a plurality of use cases will be described in more detail.
5G may complement Fiber-To-The-Home (FTTH) and cable-based broadband (DOCSIS) as a means of providing streams estimated at hundreds of megabits per second up to gigabits per second. Such speeds are required not only for virtual and augmented reality but also for transferring video at resolutions of 4K and above (6K, 8K, and beyond). VR and AR applications almost always include immersive sports games. Specific applications may require special network configurations; for example, for VR games, game service providers may have to integrate their core servers with the network operator's edge network to minimize latency.
Automobiles are expected to be an important new driving force for 5G, together with many use cases of mobile communication for vehicles. For example, entertainment for passengers demands both high capacity and high mobile broadband at the same time, because users continue to expect high-quality connections regardless of their location and speed. Another automotive use case is the augmented reality dashboard, which overlays information on what is seen through the front window, identifying objects in the dark and indicating their distance and motion. In the future, wireless modules will enable communication among vehicles, information exchange between vehicles and supporting infrastructure, and information exchange between vehicles and other connected devices (for example, devices carried by pedestrians). Safety systems will guide alternative courses of driving so that drivers may drive more safely, reducing the risk of accidents. The next step will be remotely driven or self-driven vehicles, which require highly reliable and very fast communication between different self-driving vehicles and between a self-driving vehicle and infrastructure. In the future, a self-driving vehicle is expected to take care of all driving activities while the human driver focuses only on abnormal driving situations that the vehicle itself cannot resolve. The technical requirements of self-driving vehicles call for ultra-low latency and ultra-high reliability, to the point that traffic safety exceeds what human drivers can achieve.
Smart cities and smart homes, regarded as essential to realizing the smart society, will be embedded with high-density wireless sensor networks. Distributed networks of intelligent sensors may identify conditions for cost- and energy-efficient maintenance of cities and homes, and a similar configuration may be applied to each home: temperature sensors, window and heating controllers, anti-theft alarms, and home appliances will all be connected wirelessly. Many of these sensors are characterized by low data rate, low power, and low cost, but some applications, for example real-time HD video surveillance, may require specific types of devices.
As the consumption and distribution of energy, including heat and gas, become highly decentralized, automated control of distributed sensor networks is required. A smart grid collects information and interconnects sensors by using digital information and communication technologies so that the network operates according to the collected information. Since this information may include the behaviors of energy suppliers and consumers, the smart grid may improve the distribution of fuels such as electricity in terms of efficiency, reliability, economics, production sustainability, and automation. The smart grid may be regarded as another type of sensor network with low latency.
The health-care sector has many applications that may benefit from mobile communication. A communication system may support telemedicine, which provides clinical care from a distance, helping to reduce distance barriers and improving access to medical services that are not readily available in remote rural areas. Telemedicine may also be used to save lives in critical care and emergency situations. A wireless sensor network based on mobile communication may provide remote monitoring and sensors for parameters such as heart rate and blood pressure.
Wireless and mobile communication are becoming increasingly important for industrial applications. Cable wiring entails high installation and maintenance costs, so replacing cables with reconfigurable wireless links is an attractive opportunity for many industries. To exploit the opportunity, however, the wireless connection must function with latency similar to that of a cable connection, be reliable, offer large capacity, and be simple to manage. Low latency and very low error probability are the new requirements that call for the introduction of the 5G system.
Logistics and freight tracking are important use cases of mobile communication that require tracking inventory and packages from anywhere by using a location-based information system. They typically require a low data rate but need wide-area, reliable location information.
The present invention to be described below may be implemented by combining or modifying the respective embodiments to satisfy the aforementioned requirements of the 5G system.
FIG. 1 illustrates one embodiment of an AI device.
Referring to FIG. 1, in the AI system, at least one or more of an AI server 16, a robot 11, a self-driving vehicle 12, an XR device 13, a smartphone 14, or a home appliance 15 are connected to a cloud network 10. Here, the robot 11, self-driving vehicle 12, XR device 13, smartphone 14, or home appliance 15 to which the AI technology has been applied may be referred to as an AI device (11 to 15).
The cloud network 10 may comprise part of the cloud computing infrastructure or refer to a network existing in the cloud computing infrastructure. Here, the cloud network 10 may be constructed by using the 3G network, the 4G or Long Term Evolution (LTE) network, or the 5G network.
In other words, the individual devices (11 to 16) constituting the AI system may be connected to each other through the cloud network 10. In particular, the individual devices (11 to 16) may communicate with each other through an eNB but may also communicate with each other directly without relying on the eNB.
The AI server 16 may include a server performing AI processing and a server performing computations on big data.
The AI server 16 may be connected through the cloud network 10 to at least one or more of the robot 11, self-driving vehicle 12, XR device 13, smartphone 14, or home appliance 15, which are the AI devices constituting the AI system, and may help with at least part of the AI processing conducted in the connected AI devices (11 to 15).
At this time, the AI server 16 may train the artificial neural network according to a machine learning algorithm on behalf of the AI device (11 to 15), directly store the learning model, or transmit the learning model to the AI device (11 to 15).
At this time, the AI server 16 may receive input data from the AI device (11 to 15), infer a result value from the received input data by using the learning model, generate a response or control command based on the inferred result value, and transmit the generated response or control command to the AI device (11 to 15).
Alternatively, the AI device (11 to 15) may infer a result value from the input data by employing the learning model directly and generate a response or control command based on the inferred result value.
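As a rough illustration (not taken from the patent), the division of labor just described (infer on-device when a learning model is available, otherwise send the input data to the AI server 16 and receive the result) might be sketched as follows; the function names and signatures are hypothetical.

```python
from typing import Callable, Optional, Sequence

def infer(input_data: Sequence[float],
          local_model: Optional[Callable[[Sequence[float]], str]],
          server_infer: Callable[[Sequence[float]], str]) -> str:
    """Infer a result value on-device when possible, else via the AI server."""
    if local_model is not None:
        return local_model(input_data)   # the device employs the learning model directly
    return server_infer(input_data)      # otherwise the AI server 16 infers and responds

# Usage: a device without a local model delegates inference to the server.
result = infer([0.2, 0.7], local_model=None,
               server_infer=lambda x: "obstacle" if max(x) > 0.5 else "clear")
print(result)  # -> "obstacle"
```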
<AI+Robot>
By employing the AI technology, the robot 11 may be implemented as a guide robot, transport robot, cleaning robot, wearable robot, entertainment robot, pet robot, or unmanned flying robot.
The robot 11 may include a robot control module for controlling its motion, where the robot control module may correspond to a software module or to a chip implementing the software module in the form of a hardware device.
The robot 11 may obtain status information of the robot 11, detect (recognize) the surroundings and objects, generate map data, determine a travel path and navigation plan, determine a response to user interaction, or determine its motion by using sensor information obtained from various types of sensors.
Here, the robot 11 may use sensor information obtained from at least one or more sensors among lidar, radar, and camera to determine the travel path and navigation plan.
The robot 11 may perform the operations above by using a learning model built on at least one or more artificial neural networks. For example, the robot 11 may recognize the surroundings and objects by using the learning model and determine its motion by using the recognized surroundings or object information. Here, the learning model may be one trained by the robot 11 itself or trained by an external device such as the AI server 16.
At this time, the robot 11 may perform the operation by employing the learning model directly to generate a result, or may perform the operation by transmitting sensor information to an external device such as the AI server 16 and receiving the result generated accordingly.
The robot 11 may determine the travel path and navigation plan by using at least one or more of the object information detected from the map data and sensor information or the object information obtained from an external device, and may navigate according to the determined travel path and navigation plan by controlling its locomotion platform.
Map data may include object identification information about the various objects disposed in the space in which the robot 11 navigates. For example, the map data may include object identification information about static objects such as walls and doors and movable objects such as a flowerpot and a desk. And the object identification information may include the name, type, distance, location, and so on.
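Purely as an illustration of the object identification information listed above, map data might be held as records like the following; the field names and example values are assumptions, not definitions from the patent.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class MapObject:
    name: str                      # e.g. "desk"
    kind: str                      # "static" or "movable"
    distance_m: float              # distance from the robot 11
    location: Tuple[float, float]  # (x, y) in the map frame

map_data: List[MapObject] = [
    MapObject("wall", "static", 3.2, (0.0, 3.2)),
    MapObject("flowerpot", "movable", 1.1, (0.8, 0.7)),
]
```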
Also, the robot 11 may perform the operation or navigate the space by controlling its locomotion platform based on the control/interaction of the user. At this time, the robot 11 may obtain intention information of the interaction due to the user's motion or voice command and perform an operation by determining a response based on the obtained intention information.
<AI+Autonomous Navigation>
By employing the AI technology, the self-driving vehicle 12 may be implemented as a mobile robot, unmanned ground vehicle, or unmanned aerial vehicle.
The self-driving vehicle 12 may include an autonomous navigation module for controlling its autonomous navigation function, where the autonomous navigation control module may correspond to a software module or to a chip implementing the software module in the form of a hardware device. The autonomous navigation control module may be installed inside the self-driving vehicle 12 as a constituting element thereof or may be installed outside the self-driving vehicle 12 as a separate hardware component.
The self-driving vehicle 12 may obtain status information of the self-driving vehicle 12, detect (recognize) the surroundings and objects, generate map data, determine a travel path and navigation plan, or determine its motion by using sensor information obtained from various types of sensors.
Like the robot 11, the self-driving vehicle 12 may use sensor information obtained from at least one or more sensors among lidar, radar, and camera to determine the travel path and navigation plan.
In particular, the self-driving vehicle 12 may recognize an occluded area, an area extending over a predetermined distance, or objects located across such an area by collecting sensor information from external devices, or may receive the recognized information directly from the external devices.
The self-driving vehicle 12 may perform the operations above by using a learning model built on at least one or more artificial neural networks. For example, the self-driving vehicle 12 may recognize the surroundings and objects by using the learning model and determine its navigation route by using the recognized surroundings or object information. Here, the learning model may be one trained by the self-driving vehicle 12 itself or trained by an external device such as the AI server 16.
At this time, the self-driving vehicle 12 may perform the operation by employing the learning model directly to generate a result, or may perform the operation by transmitting sensor information to an external device such as the AI server 16 and receiving the result generated accordingly.
The self-driving vehicle 12 may determine the travel path and navigation plan by using at least one or more of the object information detected from the map data and sensor information or the object information obtained from an external device, and may navigate according to the determined travel path and navigation plan by controlling its driving platform.
Map data may include object identification information about the various objects disposed in the space (for example, a road) in which the self-driving vehicle 12 navigates. For example, the map data may include object identification information about static objects such as streetlights, rocks, and buildings and movable objects such as vehicles and pedestrians. And the object identification information may include the name, type, distance, location, and so on.
Also, the self-driving vehicle 12 may perform the operation or navigate the space by controlling its driving platform based on the control/interaction of the user. At this time, the self-driving vehicle 12 may obtain intention information of the interaction due to the user's motion or voice command and perform an operation by determining a response based on the obtained intention information.
<AI+XR>
By employing the AI technology, the XR device 13 may be implemented as a Head-Mounted Display (HMD), a Head-Up Display (HUD) installed in a vehicle, a TV, mobile phone, smartphone, computer, wearable device, home appliance, digital signage, vehicle, robot with a fixed platform, or mobile robot.
The XR device 13 may obtain information about the surroundings or physical objects by analyzing 3D point cloud or image data acquired from various sensors or external devices to generate position and attribute data for the 3D points, and may output XR objects by rendering the recognized objects for display.
The XR device 13 may perform the operations above by using a learning model built on at least one or more artificial neural networks. For example, the XR device 13 may recognize physical objects from 3D point cloud or image data by using the learning model and provide information corresponding to the recognized physical objects. Here, the learning model may be one trained by the XR device 13 itself or trained by an external device such as the AI server 16.
At this time, the XR device 13 may perform the operation by employing the learning model directly to generate a result, or may perform the operation by transmitting sensor information to an external device such as the AI server 16 and receiving the result generated accordingly.
<AI+Robot+Autonomous Navigation>
By employing the AI and autonomous navigation technologies, the robot 11 may be implemented as a guide robot, transport robot, cleaning robot, wearable robot, entertainment robot, pet robot, or unmanned flying robot.
The robot 11 employing the AI and autonomous navigation technologies may correspond to a robot having an autonomous navigation function of its own or to a robot 11 interacting with the self-driving vehicle 12.
The robot 11 having the autonomous navigation function may refer collectively to devices that move autonomously along a given path without control of the user or that move by determining their path autonomously.
The robot 11 and the self-driving vehicle 12 having the autonomous navigation function may use a common sensing method to determine one or more of the travel path or navigation plan. For example, the robot 11 and the self-driving vehicle 12 having the autonomous navigation function may determine one or more of the travel path or navigation plan by using information sensed through lidar, radar, and camera.
The robot 11 interacting with the self-driving vehicle 12, while existing separately from the self-driving vehicle 12, may be associated with the autonomous navigation function inside or outside the self-driving vehicle 12 or may perform an operation associated with the user riding in the self-driving vehicle 12.
At this time, the robot 11 interacting with the self-driving vehicle 12 may obtain sensor information in place of the self-driving vehicle 12 and provide the sensed information to the self-driving vehicle 12, or may control or assist the autonomous navigation function of the self-driving vehicle 12 by obtaining sensor information, generating information about the surroundings or object information, and providing the generated information to the self-driving vehicle 12.
Also, the robot 11 interacting with the self-driving vehicle 12 may control the functions of the self-driving vehicle 12 by monitoring the user riding in the self-driving vehicle 12 or through interaction with the user. For example, if it is determined that the driver is drowsy, the robot 11 may activate the autonomous navigation function of the self-driving vehicle 12 or assist control of its driving platform. Here, the functions of the self-driving vehicle 12 controlled by the robot 11 may include not only the autonomous navigation function but also the navigation system installed inside the self-driving vehicle 12 and the functions provided by its audio system.
Also, the robot 11 interacting with the self-driving vehicle 12 may provide information to the self-driving vehicle 12 or assist its functions from outside the self-driving vehicle 12. For example, the robot 11 may provide traffic information, including traffic sign information, to the self-driving vehicle 12, like a smart traffic light, or may automatically connect an electric charger to the charging port by interacting with the self-driving vehicle 12, like an automatic electric charger for an electric vehicle.
<AI+Robot+XR>
By employing the AI technology, the robot 11 may be implemented as a guide robot, transport robot, cleaning robot, wearable robot, entertainment robot, pet robot, or unmanned flying robot.
The robot 11 employing the XR technology may correspond to a robot that acts as a control/interaction target in an XR image. In this case, the robot 11 is distinguished from the XR device 13, and the two may operate in conjunction with each other.
If the robot 11, which acts as a control/interaction target in the XR image, obtains sensor information from sensors including a camera, the robot 11 or the XR device 13 may generate an XR image based on the sensor information, and the XR device 13 may output the generated XR image. And the robot 11 may operate based on a control signal received through the XR device 13 or based on interaction with the user.
For example, the user may check the XR image corresponding to the viewpoint of the remotely associated robot 11 through an external device such as the XR device 13, modify the navigation path of the robot 11 through interaction, control the operation or navigation of the robot 11, or check the information of nearby objects.
<AI+Autonomous Navigation+XR>
By employing the AI and XR technologies, the self-driving vehicle 12 may be implemented as a mobile robot, unmanned ground vehicle, or unmanned aerial vehicle.
The self-driving vehicle 12 employing the XR technology may correspond to a self-driving vehicle having a means for providing XR images or to a self-driving vehicle that acts as a control/interaction target in an XR image. In particular, the self-driving vehicle 12 that acts as a control/interaction target in the XR image is distinguished from the XR device 13, and the two may operate in conjunction with each other.
The self-driving vehicle 12 having a means for providing XR images may obtain sensor information from sensors including a camera and output XR images generated based on the obtained sensor information. For example, by displaying an XR image through a HUD, the self-driving vehicle 12 may provide the passenger with XR images corresponding to physical objects or image objects.
At this time, if an XR object is output on the HUD, at least part of the XR object may be output so as to overlap the physical object at which the passenger gazes. On the other hand, if an XR object is output on a display installed inside the self-driving vehicle 12, at least part of the XR object may be output so as to overlap an image object. For example, the self-driving vehicle 12 may output XR objects corresponding to objects such as roads, other vehicles, traffic lights, traffic signs, bicycles, pedestrians, and buildings.
If the self-driving vehicle 12, which acts as a control/interaction target in the XR image, obtains sensor information from sensors including a camera, the self-driving vehicle 12 or the XR device 13 may generate an XR image based on the sensor information, and the XR device 13 may output the generated XR image. And the self-driving vehicle 12 may operate based on a control signal received through an external device such as the XR device 13 or based on interaction with the user.
[Extended Reality Technology]
eXtended Reality (XR) refers to all of Virtual Reality (VR), Augmented Reality (AR), and Mixed Reality (MR). The VR technology provides objects or backgrounds of the real world only in the form of CG images, AR technology provides virtual CG images overlaid on the physical object images, and MR technology employs computer graphics technology to mix and merge virtual objects with the real world.
MR technology is similar to AR technology in a sense that physical objects are displayed together with virtual objects. However, while virtual objects supplement physical objects in the AR, virtual and physical objects co-exist as equivalents in the MR.
The XR technology may be applied to Head-Mounted Display (HMD), Head-Up Display (HUD), mobile phone, tablet PC, laptop computer, desktop computer, TV, digital signage, and so on, where a device employing the XR technology may be called an XR device.
In what follows, an electronic device providing XR according to an embodiment of the present invention will be described.
FIG. 2 is a block diagram illustrating the structure of an XR electronic device 20 according to one embodiment of the present invention.
Referring to FIG. 2, the XR electronic device 20 may include a wireless communication unit 21, input unit 22, sensing unit 23, output unit 24, interface unit 25, memory 26, controller 27, and power supply unit 28. The constituting elements shown in FIG. 2 are not essential for implementing the electronic device 20, and therefore the electronic device 20 described in this document may have more or fewer constituting elements than those listed above.
More specifically, among the constituting elements above, the wireless communication unit 21 may include one or more modules that enable wireless communication between the electronic device 20 and a wireless communication system, between the electronic device 20 and another electronic device, or between the electronic device 20 and an external server. Also, the wireless communication unit 21 may include one or more modules that connect the electronic device 20 to one or more networks.
The wireless communication unit 21 may include at least one of a broadcast receiving module, mobile communication module, wireless Internet module, short-range communication module, and location information module.
The input unit 22 may include a camera or image input unit for receiving an image signal, a microphone or audio input unit for receiving an audio signal, and a user input unit (for example, a touch key or a mechanical push key) for receiving information from the user. Voice data or image data collected by the input unit 22 may be analyzed and processed as a control command of the user.
The sensing unit 23 may include one or more sensors for sensing at least one of the surroundings of the electronic device 20 and user information.
For example, the sensing unit 23 may include at least one of a proximity sensor, illumination sensor, touch sensor, acceleration sensor, magnetic sensor, G-sensor, gyroscope sensor, motion sensor, RGB sensor, infrared (IR) sensor, finger scan sensor, ultrasonic sensor, optical sensor (for example, image capture means), microphone, battery gauge, environment sensor (for example, barometer, hygrometer, radiation detection sensor, heat detection sensor, and gas detection sensor), and chemical sensor (for example, electronic nose, health-care sensor, and biometric sensor). Meanwhile, the electronic device 20 disclosed in the present specification may utilize information collected from at least two or more of the sensors listed above.
The output unit 24 is intended to generate an output related to a visual, aural, or tactile stimulus and may include at least one of a display unit, sound output unit, haptic module, and optical output unit. The display unit may implement a touchscreen by forming a layered structure with, or being integrated with, touch sensors. Such a touchscreen may not only function as a user input means providing an input interface between the electronic device 20 and the user but also provide an output interface between the electronic device 20 and the user.
The interface unit 25 serves as a path to the various types of external devices connected to the electronic device 20. Through the interface unit 25, the electronic device 20 may receive VR or AR content from an external device and interact with it by exchanging various input signals, sensing signals, and data.
For example, the interface unit 25 may include at least one of a wired/wireless headset port, external charging port, wired/wireless data port, memory card port, port for connecting a device equipped with an identification module, audio Input/Output (I/O) port, video I/O port, and earphone port.
Also, the memory 26 stores data supporting the various functions of the electronic device 20. The memory 26 may store a plurality of application programs (or applications) executed in the electronic device 20, as well as data and commands for operation of the electronic device 20. Also, at least some of the application programs may be pre-installed on the electronic device 20 at the time of factory shipment to provide basic functions of the electronic device 20 (for example, incoming and outgoing calls, and message reception and transmission).
The controller 27 usually controls the overall operation of the electronic device 20 in addition to operations related to the application programs. The controller 27 may process signals, data, and information input or output through the constituting elements described above.
Also, the controller 27 may provide relevant information or process a function for the user by executing an application program stored in the memory 26 and controlling at least some of the constituting elements. Furthermore, the controller 27 may combine and operate at least two or more of the constituting elements included in the electronic device 20 to run the application program.
Also, the controller 27 may detect motion of the electronic device 20 or of the user by using a gyroscope sensor, G-sensor, or motion sensor included in the sensing unit 23. Also, the controller 27 may detect an object approaching the vicinity of the electronic device 20 or the user by using a proximity sensor, illumination sensor, magnetic sensor, infrared sensor, ultrasonic sensor, or light sensor included in the sensing unit 23. Besides, the controller 27 may detect the motion of the user through sensors installed in a controller operating in conjunction with the electronic device 20.
Also, the controller 27 may perform the operations (or functions) of the electronic device 20 by using the application programs stored in the memory 26.
The power supply unit 28 receives external or internal power under the control of the controller 27 and supplies the power to each constituting element included in the electronic device 20. The power supply unit 28 includes a battery, which may be built-in or replaceable.
At least some of the constituting elements described above may operate in conjunction with each other to implement the operation, control, or control method of the electronic device according to the various embodiments described below. Also, the operation, control, or control method of the electronic device may be implemented on the electronic device by executing at least one application program stored in the memory 26.
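A minimal structural sketch of the FIG. 2 block diagram follows, treating each block as an optional component so that a concrete device 20 may have more or fewer constituting elements, as noted above; all class and field names are hypothetical placeholders.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class XRElectronicDevice:
    """Constituting elements of the electronic device 20 (FIG. 2); any may be absent."""
    wireless_communication_unit: Optional[object] = None  # 21
    input_unit: Optional[object] = None                   # 22
    sensing_unit: Optional[object] = None                 # 23
    output_unit: Optional[object] = None                  # 24
    interface_unit: Optional[object] = None               # 25
    memory: Optional[object] = None                       # 26
    controller: Optional[object] = None                   # 27
    power_supply_unit: Optional[object] = None            # 28

device = XRElectronicDevice(controller=object())  # a device with only a controller
```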
In what follows, the electronic device according to one embodiment of the present invention will be described with reference to an example in which the electronic device is applied to a Head-Mounted Display (HMD). However, embodiments of the electronic device according to the present invention may include a mobile phone, smartphone, laptop computer, digital broadcast terminal, Personal Digital Assistant (PDA), Portable Multimedia Player (PMP), navigation terminal, slate PC, tablet PC, ultrabook, and wearable device. Wearable devices include, in addition to the HMD, the smart watch and the contact lens.
FIG. 3 is a perspective view of a VR electronic device according to one embodiment of the present invention, and FIG. 4 illustrates a situation in which the VR electronic device of FIG. 3 is used.
Referring to the figures, the VR electronic device may include a box-type electronic device 30 mounted on the head of the user and a controller 40 (40a, 40b) that the user may grip and manipulate.
The electronic device 30 includes a head unit 31 worn and supported on the head and a display unit 32 combined with the head unit 31 to display a virtual image or video in front of the user's eyes. Although the figure shows the head unit 31 and display unit 32 made as separate units and combined together, the display unit 32 may also be formed integrally with the head unit 31.
The head unit 31 may have a structure enclosing the head of the user so as to disperse the weight of the display unit 32. To accommodate users' different head sizes, the head unit 31 may provide a band of variable length.
The display unit 32 includes a cover unit 32a combined with the head unit 31 and a display unit 32b containing a display panel.
The cover unit 32a, also called a goggle frame, may have the shape of a tub as a whole. The cover unit 32a has a space formed therein, and an opening is formed at its front surface at a position corresponding to the eyeballs of the user.
The display unit 32b is installed on the front-surface frame of the cover unit 32a and disposed at the position corresponding to the eyes of the user to display screen information (image or video). The screen information output on the display unit 32b includes not only VR content but also external images collected through an image capture means such as a camera.
VR content displayed on the display unit 32b may be content stored in the electronic device 30 itself or content stored in an external device 60. For example, when the screen information is an image of the virtual world stored in the electronic device 30, the electronic device 30 may perform image processing and rendering to process that image and display the resulting image information through the display unit 32b. On the other hand, in the case of a VR image stored in the external device 60, the external device 60 performs the image processing and rendering and transmits the resulting image information to the electronic device 30; the electronic device 30 then outputs the 3D image information received from the external device 60 through the display unit 32b.
The display unit 32b may include a display panel installed at the front of the opening of the cover unit 32a, where the display panel may be an LCD or OLED panel. Alternatively, the display unit 32b may be the display unit of a smartphone; in other words, the display unit 32b may have a structure in which a smartphone may be attached to or detached from the front of the cover unit 32a.
An image capture means and various types of sensors may be installed at the front of the display unit 32.
The image capture means (for example, a camera) is formed to capture (receive or input) the image in front of the device and may obtain the real world as seen by the user as an image. One image capture means may be installed at the center of the display unit 32b, or two or more may be installed at symmetric positions. When a plurality of image capture means are installed, a stereoscopic image may be obtained. An image combining an external image obtained from the image capture means with a virtual image may be displayed through the display unit 32b.
The various types of sensors may include a gyroscope sensor, motion sensor, or IR sensor. These sensors will be described in more detail later.
At the rear of the display unit 32, a facial pad 33 may be installed. The facial pad 33 is made of a cushioned material and fits around the eyes of the user, providing a comfortable fit on the face. The facial pad 33 is made of a flexible material shaped to match the front contour of the human face, so it may fit the facial shape of different users and block external light from entering the eyes.
In addition to the above, the electronic device 30 may be equipped with a user input unit operated to receive control commands, a sound output unit, and a controller. Descriptions of these units are the same as given previously and will be omitted.
Also, the VR electronic device may be equipped, as a peripheral device, with a controller 40 (40a, 40b) for controlling operations related to the VR images displayed through the box-type electronic device 30.
The controller 40 is provided in a form that the user may easily grip with both hands, and its outer surface may have a touchpad (or trackpad) or buttons for receiving user input.
The controller 40 may be used to control the screen output on the display unit 32b in conjunction with the electronic device 30. The controller 40 may include a grip unit that the user grips and a head unit extending from the grip unit and equipped with various sensors and a microprocessor. The grip unit may be shaped as a long vertical bar so that the user may easily grip it, and the head unit may be formed in a ring shape.
The controller 40 may include an IR sensor, motion tracking sensor, microprocessor, and input unit. For example, the IR sensor receives light emitted from a position tracking device 50, to be described later, and is used to track the motion of the user. The motion tracking sensor may be formed as a single sensor suite integrating a 3-axis acceleration sensor, a 3-axis gyroscope, and a digital motion processor.
The grip unit of the controller 40 may provide a user input unit. For example, the user input unit may include keys disposed inside the grip unit, a touchpad (trackpad) equipped outside the grip unit, and a trigger button.
Meanwhile, the controller 40 may perform a feedback operation corresponding to a signal received from the controller 27 of the electronic device 30. For example, the controller 40 may deliver a feedback signal to the user in the form of vibration, sound, or light.
Also, by operating the controller 40, the user may access the external environment image seen through the camera installed in the electronic device 30. In other words, even in the middle of experiencing the virtual world, the user may immediately check the surrounding environment by operating the controller 40 without taking off the electronic device 30.
Also, the VR electronic device may further include a position tracking device 50. The position tracking device 50 detects the position of the electronic device 30 or controller 40 by applying a position tracking technique called the lighthouse system and helps track the 360-degree motion of the user.
The position tracking system may be implemented by installing one or more position tracking devices 50 (50a, 50b) in a closed, specific space. A plurality of position tracking devices 50 may be installed at positions that maximize the span of location-aware space, for example at positions facing each other in the diagonal direction.
The electronic device 30 or controller 40 may receive light emitted from the LED or laser emitters included in the plurality of position tracking devices 50 and determine the accurate position of the user within the closed, specific space based on the correlation between the position and the time at which the corresponding light is received. To this purpose, each position tracking device 50 may include an IR lamp and a 2-axis motor, through which signals are exchanged with the electronic device 30 or controller 40.
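The patent names the lighthouse technique only in passing. As a hedged sketch of the time-to-position correlation it relies on: a base station emits a sync flash and then sweeps a laser at a fixed rotor speed, so the delay until a sensor is hit encodes the angle of that sensor relative to the station. The 60 Hz rotor speed and the function below are illustrative assumptions, not the patent's specification.

```python
import math

SWEEP_HZ = 60.0  # assumed rotor speed: one full sweep per 1/60 s

def sweep_angle(t_sync_s: float, t_hit_s: float) -> float:
    """Delay between sync flash and laser hit -> sweep angle in radians."""
    return 2.0 * math.pi * SWEEP_HZ * (t_hit_s - t_sync_s)

# A sensor hit 2.5 ms after the sync pulse sits 54 degrees into the sweep;
# two such angles per station, from two stations, fix the 3D position.
theta = sweep_angle(0.0, 0.0025)
print(f"{math.degrees(theta):.1f} deg")  # -> 54.0 deg
```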
Also, the electronic device 30 may perform wired/wireless communication with an external device 60 (for example, a PC, smartphone, or tablet PC). The electronic device 30 may receive images of the virtual world stored in the connected external device 60 and display them to the user.
Meanwhile, since the controller 40 and position tracking device 50 described above are not essential elements, they may be omitted in embodiments of the present invention. For example, an input device installed in the electronic device 30 may replace the controller 40, and position information may be determined directly from the various sensors installed in the electronic device 30.
FIG. 5 is a perspective view of an AR electronic device according to one embodiment of the present invention.
As shown in FIG. 5, the electronic device according to one embodiment of the present invention may include a frame 100, controller 200, and display unit 300.
The electronic device may be provided in the form of smart glasses. The glass-type electronic device is shaped to be worn on the head of the user, for which the frame (case or housing) 100 may be used. The frame 100 may be made of a flexible material so that the user may wear it comfortably.
The frame 100 is supported on the head and provides a space in which various components are installed. As shown in the figure, electronic components such as the controller 200, a user input unit 130, or a sound output unit 140 may be installed in the frame 100. Also, a lens covering at least one of the left and right eyes may be detachably installed in the frame 100.
As shown in the figure, the frame 100 may have the shape of glasses worn on the face of the user; however, the present invention is not limited to this shape, and the frame may instead take a shape such as goggles worn in close contact with the user's face.
The frame 100 may include a front frame 110 having at least one opening and a pair of side frames 120 parallel to each other and extending in a first direction (y) intersecting the front frame 110.
The controller 200 is configured to control the various electronic components installed in the electronic device.
The controller 200 may generate an image shown to the user, or video comprising successive images. The controller 200 may include an image source panel that generates the image and a plurality of lenses that diffuse and converge the light generated from the image source panel.
The controller 200 may be fixed to either of the two side frames 120. For example, the controller 200 may be fixed to the inner or outer surface of one side frame 120 or embedded inside one of the side frames 120. Alternatively, the controller 200 may be fixed to the front frame 110 or provided separately from the electronic device.
The display unit 300 may be implemented in the form of a Head-Mounted Display (HMD). An HMD is a display device worn on the head that shows an image directly in front of the user's eyes. The display unit 300 may be disposed to correspond to at least one of the left and right eyes so that an image may be shown directly in front of the user's eye(s) when the user wears the electronic device. The present figure illustrates a case where the display unit 300 is disposed at the position corresponding to the right eye of the user so that an image may be shown before the user's right eye.
The display unit 300 may be used so that an image generated by the controller 200 is shown to the user while the user visually recognizes the external environment. For example, the display unit 300 may project an image onto the display area by using a prism.
The display unit 300 may be formed to be transparent so that the projected image and the normal front view (the visible part of the world as seen through the user's eyes) are shown at the same time. For example, the display unit 300 may be translucent and made of optical elements including glass.
The display unit 300 may be fixed by being inserted into the opening included in the front frame 110, or may be fixed to the front frame 110 by being positioned on the rear surface of the opening (namely, between the opening and the user's eye). Although the figure illustrates an example in which the display unit 300 is positioned on the rear surface of the opening and fixed to the front frame 110, the display unit 300 may be disposed and fixed at various positions of the frame 100.
As shown in FIG. 5, the electronic device may operate so that when the controller 200 projects image light onto one side of the display unit 300, the light is emitted from the other side of the display unit, showing the image generated by the controller 200 to the user.
Accordingly, the user may see the image generated by the controller 200 while simultaneously seeing the external environment through the opening of the frame 100. In other words, the image output through the display unit 300 may be seen overlapped with the normal view. Using this display characteristic, the electronic device may provide an AR experience that shows a virtual image overlapped with a real image or background as a single, interwoven image.
FIG. 6 is an exploded perspective view of a controller according to one embodiment of the present invention.
Referring to the figure, the controller 200 may include a first cover 207 and a second cover 225 that protect the internal constituting elements and form the external appearance of the controller 200. Inside the first and second covers 207, 225 are a driving unit 201, an image source panel 203, a polarization beam splitter filter (PBSF) 211, a mirror 209, a plurality of lenses 213, 215, 217, 221, a fly-eye lens (FEL) 219, a dichroic filter 227, and a freeform prism projection lens (FPL) 223.
The first and second covers 207, 225 provide a space in which the driving unit 201, the image source panel 203, the PBSF 211, the mirror 209, the plurality of lenses 213, 215, 217, 221, the FEL 219, and the FPL 223 may be installed; the internal constituting elements are packaged and fixed to either of the side frames 120.
The driving unit 201 may supply a driving signal that controls the video or image displayed on the image source panel 203 and may be linked to a separate modular driving chip installed inside or outside the controller 200. The driving unit 201 may be installed in the form of a flexible printed circuit board (FPCB), which may be equipped with a heat sink that dissipates heat generated during operation to the outside.
The image source panel 203 may generate an image according to the driving signal provided by the driving unit 201 and emit light according to the generated image. For this purpose, the image source panel 203 may use a liquid crystal display (LCD) or organic light emitting diode (OLED) panel.
The PBSF 211 may separate the image light generated by the image source panel 203, or block or pass part of that light, according to its polarization. For example, if the image light emitted from the image source panel 203 is composed of a P wave (horizontally polarized light) and an S wave (vertically polarized light), the PBSF 211 may separate the P and S waves into different light paths, or pass the image light of one polarization and block the image light of the other. In one embodiment, the PBSF 211 may be provided as a cube type or a plate type.
The cube-type PBSF 211 may filter the image light composed of P and S waves and separate them into different light paths, while the plate-type PBSF 211 may pass the image light of one of the P and S waves and block the image light of the other polarization.
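As a rough illustration of this splitting behavior (not taken from the patent), the following Python sketch models the two filter types on an idealized two-component E-field; the function names and the lossless, perfect-extinction assumption are mine.

```python
import numpy as np

def split_polarization(e_field):
    """Idealized cube-type PBSF: route the horizontal (P) and vertical (S)
    components of a 2-vector E-field onto two separate paths."""
    p_component = np.array([e_field[0], 0.0])  # P wave, horizontal
    s_component = np.array([0.0, e_field[1]])  # S wave, vertical
    return p_component, s_component

def plate_pbsf(e_field, passes="P"):
    """Idealized plate-type PBSF: pass one polarization, block the other."""
    p, s = split_polarization(e_field)
    return p if passes == "P" else s

# 45-degree linearly polarized light carries equal P and S power.
e = np.array([1.0, 1.0]) / np.sqrt(2.0)
p, s = split_polarization(e)
print(np.sum(p**2), np.sum(s**2))  # 0.5 0.5, power split evenly
```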
The mirror 209 reflects the polarized image light separated by the PBSF 211, collecting it again and directing it onto the plurality of lenses 213, 215, 217, 221.
The plurality of lenses 213, 215, 217, 221 may include convex and concave lenses, for example I-type and C-type lenses. These lenses repeatedly diffuse and converge the incident image light, thereby improving the straightness of the image light rays.
The FEL 219 may receive the image light which has passed through the plurality of lenses 213, 215, 217, 221 and emit it so as to improve illuminance uniformity and extend the area exhibiting uniform illuminance.
The dichroic filter 227 may include a plurality of films or lenses and may pass light of a specific range of wavelengths in the image light incoming from the FEL 219 while reflecting light outside that range, thereby adjusting the color saturation of the image light. The image light which has passed through the dichroic filter 227 may pass through the FPL 223 and be emitted to the display unit 300.
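In essence, a dichroic filter reduces to a passband test per wavelength. The sketch below is an idealized model with an assumed 430 to 650 nm passband; the band edges are illustrative values, not from the patent.

```python
def dichroic_response(wavelength_nm, band=(430.0, 650.0)):
    """Idealized dichroic filter: transmit inside the passband,
    reflect outside it. Real filters have sloped band edges."""
    lo, hi = band
    return "transmit" if lo <= wavelength_nm <= hi else "reflect"

print(dichroic_response(550.0), dichroic_response(700.0))  # transmit reflect
```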
The display unit 300 may receive the image light emitted from the controller 200 and emit the incident image light in the direction in which the user's eyes are located.
Meanwhile, in addition to the constituting elements described above, the electronic device may include one or more image capture means (not shown). The image capture means, disposed close to at least one of the left and right eyes, may capture the image of the front area, or may be disposed so as to capture the image of the side or rear area.
Since the image capture means is disposed close to the eye, it may obtain an image of the real world as seen by the user. The image capture means may be installed at the frame 100 or provided in plural numbers to obtain stereoscopic images.
The electronic device may provide a user input unit 130 manipulated to receive control commands. The user input unit 130 may adopt various methods, including a tactile manner in which the user operates it through a touch or push motion, a gesture manner in which it recognizes the user's hand motion without a direct touch, or a manner in which it recognizes a voice command. The present figure illustrates a case where the user input unit 130 is installed at the frame 100.
Also, the electronic device may be equipped with a microphone which receives a sound and converts it to electrical voice data, and a sound output unit 140 that outputs sound. The sound output unit 140 may transfer sound through an ordinary sound output scheme or a bone conduction scheme. When configured for bone conduction, the sound output unit 140 fits against the head when the user wears the electronic device and transmits sound by vibrating the skull.
In what follows, various forms of the display unit 300 and various methods for emitting incident image light rays will be described.
FIGS. 7 to 13 illustrate various display methods applicable to the display unit 300 according to one embodiment of the present invention.
More specifically, FIG. 7 illustrates one embodiment of a prism-type optical element; FIG. 8 illustrates one embodiment of a waveguide-type optical element; FIGS. 9 and 10 illustrate one embodiment of a pin mirror-type optical element; and FIG. 11 illustrates one embodiment of a surface reflection-type optical element. FIG. 12 illustrates one embodiment of a micro-LED-type optical element, and FIG. 13 illustrates one embodiment of a display unit used for contact lenses.
As shown in FIG. 7, the display unit 300-1 according to one embodiment of the present invention may use a prism-type optical element.
In one embodiment, as shown in FIG. 7(a), the prism-type optical element may be a flat-type glass optical element, in which the surface 300a on which image light rays are incident and from which they are emitted is planar; or, as shown in FIG. 7(b), a freeform glass optical element, in which the surface 300b from which the image light rays are emitted is a curved surface without a fixed radius of curvature.
The flat-type glass optical element may receive the image light generated by the controller 200 through its flat side surface, reflect it by the total reflection mirror 300a installed inside, and emit the reflected image light toward the user. Here, a laser may be used to form the total reflection mirror 300a inside the flat-type glass optical element.
The freeform glass optical element is formed so that its thickness becomes thinner away from the surface on which light is incident. It receives the image light generated by the controller 200 through a side surface having a finite radius of curvature, totally reflects it, and emits the reflected light toward the user.
As shown in FIG. 8, the display unit 300-2 according to another embodiment of the present invention may use a waveguide-type optical element or light guide optical element (LOE).
In one embodiment, the waveguide or light guide-type optical element may be implemented as a segmented beam splitter-type glass optical element as shown in FIG. 8(a), a sawtooth prism-type glass optical element as shown in FIG. 8(b), a glass optical element having a diffractive optical element (DOE) as shown in FIG. 8(c), a glass optical element having a hologram optical element (HOE) as shown in FIG. 8(d), a glass optical element having a passive grating as shown in FIG. 8(e), or a glass optical element having an active grating as shown in FIG. 8(f).
As shown in FIG. 8(a), the segmented beam splitter-type glass optical element may have a total reflection mirror 301a on the side where an optical image is incident and a segmented beam splitter 301b on the side where the optical image is emitted.
Accordingly, the optical image generated by the controller 200 is totally reflected by the total reflection mirror 301a inside the glass optical element; while being guided along the longitudinal direction of the glass, it is partially separated and emitted by the segmented beam splitter 301b and eventually perceived by the user.
In the case of the sawtooth prism-type glass optical element shown in FIG. 8(b), the optical image generated by the controller 200 is incident on the side surface of the glass in an oblique direction, totally reflected into the glass, emitted to the outside by the sawtooth-shaped uneven structure 302 formed where the optical image exits, and eventually perceived by the user.
The glass optical element having a diffractive optical element (DOE), shown in FIG. 8(c), may have a first diffraction unit 303a on the surface of the part on which the optical image is incident and a second diffraction unit 303b on the surface of the part from which the optical image is emitted. The first and second diffraction units 303a, 303b may be provided by patterning a specific pattern on the surface of the glass or by attaching a separate diffraction film.
Accordingly, the optical image generated by the controller 200 is diffracted as it enters through the first diffraction unit 303a, guided along the longitudinal direction of the glass while being totally reflected, emitted through the second diffraction unit 303b, and eventually perceived by the user.
The glass optical element having a hologram optical element (HOE), shown in FIG. 8(d), may have an out-coupler 304 inside the glass from which the optical image is emitted. The optical image enters from the controller 200 in an oblique direction through the side surface of the glass, is guided along the longitudinal direction of the glass by total reflection, emitted by the out-coupler 304, and eventually perceived by the user. The HOE structure may be further divided into a structure having a passive grating and a structure having an active grating.
The glass optical element having a passive grating, shown in FIG. 8(e), may have an in-coupler 305a on the surface opposite the glass surface on which the optical image is incident and an out-coupler 305b on the surface opposite the glass surface from which the optical image is emitted. Here, the in-coupler 305a and the out-coupler 305b may be provided in the form of films having a passive grating.
Accordingly, the optical image incident on the light-incident side of the glass is totally reflected by the in-coupler 305a installed on the opposite surface, guided along the longitudinal direction of the glass, emitted through the opposite surface of the glass by the out-coupler 305b, and eventually perceived by the user.
The glass optical element having an active grating, shown in FIG. 8(f), may have an in-coupler 306a formed as an active grating inside the glass through which the optical image enters and an out-coupler 306b formed as an active grating inside the glass from which the optical image is emitted.
Accordingly, the optical image incident on the glass is totally reflected by the in-coupler 306a, guided in the longitudinal direction of the glass, emitted to the outside of the glass by the out-coupler 306b, and eventually perceived by the user.
The display unit 300-3 according to another embodiment of the present invention may use a pin mirror-type optical element.
The pinhole effect is so called because the hole through which an object is viewed is like one made with the point of a pin; it makes an object appear more clearly as light passes through a small aperture. The light passing through the pinhole deepens the depth of field (DOF), which makes the image formed on the retina more vivid.
In what follows, embodiments using a pin mirror-type optical element will be described with reference to FIGS. 9 and 10.
Referring to FIG. 9(a), the pinhole mirror 310a may be provided on the path of the incident light within the display unit 300-3 and reflect the incident light toward the user's eye. More specifically, the pinhole mirror 310a may be disposed between the front surface (outer surface) and the rear surface (inner surface) of the display unit 300-3; a method for manufacturing the pinhole mirror is described later.
The pinhole mirror 310a may be formed smaller than the pupil of the eye so as to provide a deep depth of field. Therefore, even if the focal length for viewing the real world through the display unit 300-3 changes, the user may still clearly see the augmented reality image provided by the controller 200 overlapped with the image of the real world.
The display unit 300-3 may provide a path which guides the incident light to the pinhole mirror 310a through internal total reflection.
Referring to FIG. 9(b), the pinhole mirror 310b may be provided on the surface 300c through which light is totally reflected within the display unit 300-3. Here, the pinhole mirror 310b may have the characteristic of a prism that changes the path of external light toward the user's eyes. For example, the pinhole mirror 310b may be fabricated as a film and attached to the display unit 300-3, which simplifies the manufacturing process.
The display unit 300-3 may guide the light incoming from the controller 200 through internal total reflection; the totally reflected light is reflected by the pinhole mirror 310b installed on the surface on which external light is incident, and the reflected light passes through the display unit 300-3 to reach the user's eyes.
Referring to FIG. 9(c), the light illuminated by the controller 200 may be reflected by the pinhole mirror 310c directly, without internal total reflection within the display unit 300-3, and reach the user's eyes. This structure is convenient for manufacturing in that augmented reality may be provided irrespective of the shape of the surface through which external light passes within the display unit 300-3.
Referring to FIG. 9(d), the light illuminated by the controller 200 may reach the user's eyes by being reflected, within the display unit 300-3, by the pinhole mirror 310d installed on the surface 300d from which external light is emitted. The controller 200 illuminates light from a position separated from the surface of the display unit 300-3 in the direction of the rear surface, toward the surface 300d from which external light is emitted. The present embodiment may be readily applied when the thickness of the display unit 300-3 is not sufficient to accommodate the light illuminated by the controller 200. It may also be advantageous for manufacturing in that it applies irrespective of the surface shape of the display unit 300-3, and the pinhole mirror 310d may be manufactured in a film shape.
Meanwhile, the pinhole mirror 310 may be provided in plural numbers in an array pattern.
FIG. 10 illustrates the shape of a pinhole mirror and the structure of an array pattern according to one embodiment of the present invention.
Referring to the figure, the pinhole mirror 310 may be fabricated in a polygonal structure including a square or rectangular shape. Here, the length of the longer (diagonal) axis of the pinhole mirror 310 may be the positive square root of the product of the focal length and the wavelength of the light illuminated in the display unit 300-3.
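As a worked example of that sizing rule, the sketch below evaluates sqrt(focal length × wavelength). The 25 mm focal length and 550 nm wavelength are assumed, illustrative values, not figures from the patent.

```python
import math

focal_length_m = 0.025  # assumed focal length: 25 mm (illustrative)
wavelength_m = 550e-9   # assumed wavelength: 550 nm green light

# Longer-axis (diagonal) length = positive square root of (focal length * wavelength)
diagonal_m = math.sqrt(focal_length_m * wavelength_m)
print(f"{diagonal_m * 1e6:.1f} um")  # ~117.3 um, well below a typical pupil diameter
```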
A plurality of pinhole mirrors 310 may be disposed in parallel, separated from each other, to form an array pattern. The array pattern may form a line pattern or a lattice pattern.
FIGS. 10(a) and (b) illustrate the flat pin mirror scheme, and FIGS. 10(c) and (d) illustrate the freeform pin mirror scheme.
When the pinhole mirror 310 is installed inside the display unit 300-3, a first glass 300e and a second glass 300f are combined along an inclined surface 300g disposed to be inclined toward the pupil of the eye, and a plurality of pinhole mirrors 310e are disposed on the inclined surface 300g to form an array pattern.
Referring to FIGS. 10(a) and (b), the plurality of pinhole mirrors 310e may be disposed side by side along one direction on the inclined surface 300g, so that the augmented reality provided by the controller 200 is continuously displayed on the image of the real world seen through the display unit 300-3 even as the user moves the pupil of the eye.
Referring to FIGS. 10(c) and (d), the plurality of pinhole mirrors 310f may form a radial array on the inclined surface 300g provided as a curved surface.
Since the plurality of pinhole mirrors 310f are disposed along the radial array, with the pinhole mirror 310f at the edge in the figure disposed at the highest position and the pinhole mirror 310f in the middle disposed at the lowest position, the path of a beam emitted by the controller 200 may be matched to each pinhole mirror.
As described above, by disposing the plurality of pinhole mirrors 310f along the radial array, the double-image problem of the augmented reality provided by the controller 200 due to the path difference of light may be resolved.
Similarly, lenses may be attached to the rear surface of the display unit 300-3 to compensate for the path difference of the light reflected from the plurality of pinhole mirrors 310e disposed side by side in a row.
The surface reflection-type optical element applicable to the display unit 300-4 according to another embodiment of the present invention may employ the freeform combiner method as shown in FIG. 11(a), the flat HOE method as shown in FIG. 11(b), or the freeform HOE method as shown in FIG. 11(c).
The surface reflection-type optical element based on the freeform combiner method, shown in FIG. 11(a), may use freeform combiner glass 300, in which a plurality of flat surfaces having different incidence angles for the optical image are combined to form one glass with an overall curved surface that performs the role of a combiner. The freeform combiner glass 300 emits the optical image to the user with an incidence angle that differs from area to area.
The surface reflection-type optical element based on the flat HOE method, shown in FIG. 11(b), may have a hologram optical element (HOE) 311 coated or patterned on the surface of flat glass; the optical image emitted by the controller 200 passes through the HOE 311, reflects from the surface of the glass, passes through the HOE 311 again, and is eventually emitted to the user.
The surface reflection-type optical element based on the freeform HOE method, shown in FIG. 11(c), may have a HOE 313 coated or patterned on the surface of freeform glass, with operating principles the same as those described with reference to FIG. 11(b).
In addition, a display unit 300-5 employing micro LEDs as shown in FIG. 12 and a display unit 300-6 employing a contact lens as shown in FIG. 13 may also be used.
Referring to FIG. 12, the optical element of the display unit 300-5 may include, for example, a liquid crystal on silicon (LCoS) element, a liquid crystal display (LCD) element, an organic light emitting diode (OLED) display element, or a digital micromirror device (DMD), and may further include a next-generation display element such as a micro LED or a quantum dot (QD) LED.
The image data generated by the controller 200 to correspond to the augmented reality image is transmitted to the display unit 300-5 along a conductive input line 316, and the display unit 300-5 may convert the image signal to light through a plurality of optical elements 314 (for example, micro LEDs) and emit the converted light to the user's eye.
The plurality of optical elements 314 may be disposed in a lattice structure (for example, 100×100) to form a display area 314a. The user may see the augmented reality through the display area 314a within the display unit 300-5. The plurality of optical elements 314 may be disposed on a transparent substrate.
The image signal generated by the controller 200 is sent to an image split circuit 315 provided at one side of the display unit 300-5; the image split circuit 315 divides into a plurality of branches and sends the image signal onward to the optical element 314 disposed at each branch, as sketched below. The image split circuit 315 may be located outside the field of view of the user so as to minimize gaze interference.
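The following toy model (my construction, not the patent's circuit) shows the bookkeeping such a split performs: each pixel of a 100×100 frame is mapped to a drive level for the optical element at the same lattice position. The 8-bit drive scaling is an assumption.

```python
import numpy as np

LATTICE = (100, 100)  # lattice of optical elements, per the example above

def split_image_signal(frame):
    """Map each pixel of a normalized 100x100 frame to the drive level
    of the micro LED (or other element) at the same lattice position."""
    assert frame.shape == LATTICE
    drive_levels = np.clip(frame, 0.0, 1.0) * 255.0  # assumed 8-bit drive
    return drive_levels.astype(np.uint8)

frame = np.random.rand(*LATTICE)   # stand-in for the controller's output
levels = split_image_signal(frame)
print(levels.shape, levels.dtype)  # (100, 100) uint8
```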
Referring to FIG. 13, the display unit 300-6 may comprise a contact lens. A contact lens on which augmented reality may be displayed is also called a smart contact lens. The smart contact lens 300-6 may have a plurality of optical elements 317 in a lattice structure at its center.
The smart contact lens 300-6 may include a solar cell 318a, a battery 318b, the controller 200, an antenna 318c, and a sensor 318d in addition to the optical elements 317. For example, the sensor 318d may check the blood sugar level in tears, and the controller 200 may process the signal of the sensor 318d and display the blood sugar level in the form of augmented reality through the optical elements 317 so that the user may check it in real time.
As described above, the display unit 300 according to one embodiment of the present invention may be implemented using one of the prism-type optical element, the waveguide-type optical element, the light guide optical element (LOE), the pin mirror-type optical element, or the surface reflection-type optical element. In addition, an optical element applicable to the display unit 300 according to one embodiment of the present invention may include a retina scan method.
Hereinafter, the electronic device 20 according to the present invention may be implemented as an HMD in the form of glasses worn by a user, as illustrated in FIG. 3. In addition, the electronic device 20 according to the present invention may include a display unit 500 according to another embodiment of the present invention illustrated in FIG. 13. That is, in the present invention, the electronic device 20 is characterized by including the display unit 500 according to the other embodiment. To assist understanding, the display unit 500 according to the present embodiment will be described in comparison with the display unit 300 according to the embodiment of the present invention illustrated in FIG. 12.
FIG. 12 is a diagram illustrating the embodiment in which a nanopattern is formed on the display unit of the present invention, and FIG. 13 is a diagram illustrating the other embodiment in which a nanopattern is formed on the display unit of the present invention. In particular, FIG. 12 is a detailed perspective view illustrating the nanopattern formed on the display unit 300 according to the embodiment of the present invention illustrated in FIG. 3.
First, referring to FIGS. 3 and 12, the display unit 300 according to the embodiment of the present invention includes an in-coupling portion 51 having a first nanopattern formed thereon and an out-coupling portion 52 having a second nanopattern formed thereon.
When a beam for an image, a video, or content generated by the control unit 200 according to the present invention is incident on the in-coupling portion 51, the incident beam is delivered to the out-coupling portion 52 through diffraction inside the display unit 300. Since the out-coupling portion 52 is disposed to correspond to the pupil of the user, the beam emitted from the out-coupling portion 52 is incident on the user's eyes. The user may therefore recognize the image, video, or content generated by the control unit 200 from the beam emitted from the out-coupling portion 52.
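The patent does not give grating parameters, but the diffraction that redirects the beam inside such a waveguide follows the standard grating equation, sin θ_m = mλ/Λ at normal incidence. The sketch below evaluates it; the 532 nm wavelength and 600 nm pattern period are assumed for illustration only.

```python
import math

def diffraction_angle_deg(wavelength_m, period_m, order=1):
    """First-order grating equation at normal incidence:
    sin(theta_m) = m * wavelength / period.
    Returns None when the order is evanescent (|sin| > 1)."""
    s = order * wavelength_m / period_m
    return math.degrees(math.asin(s)) if abs(s) <= 1.0 else None

# Assumed numbers: 532 nm light on a 600 nm-period nanopattern (illustrative)
print(diffraction_angle_deg(532e-9, 600e-9))  # ~62.5 degrees
```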
In an electronic device using an exit pupil replication-based optical system, an in-coupling region for inputting an image or a video to the display unit and an out-coupling region for emitting the input image or video from the display unit, so that the user can recognize it with the eyes, are disposed together on one display unit.
In this case, because the in-coupling region and the out-coupling region are disposed on the same display unit but have different functions, the first nanopattern formed on the in-coupling portion 51 and the second nanopattern formed on the out-coupling portion 52 according to the present embodiment are formed to have different patterns. In addition, as illustrated in FIG. 12, in the present embodiment the in-coupling portion 51 and the out-coupling portion 52 are formed to be separated from each other on the surface of the display unit 300 facing the user's eyes.
On the other hand, since each user of the electronic device 20 has a different interpupillary distance (IPD), the display unit 300 according to the present embodiment is formed, as illustrated in FIG. 12, to have a width w1 of the first size. That is, since the display unit 300 according to the present embodiment has the width w1 of the first size, relatively wider than that of the display unit according to the other embodiment, an image or a video may still be formed at the pupil even though each user has a different IPD.
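A back-of-the-envelope sketch of why the fixed display needs the wider w1: if the out-coupling region cannot move, it must cover the spread of adult IPDs on top of the per-eye eye box. The IPD range and eye-box width below are assumed, illustrative figures, not values from the patent.

```python
# Assumed adult IPD range (illustrative): about 54 mm to 74 mm.
ipd_min_mm, ipd_max_mm = 54.0, 74.0
eye_box_mm = 10.0  # assumed per-eye eye box for a stable image

# Relative to the nose center, each eye shifts by half the IPD spread,
# so a fixed out-coupling region must add that margin to the eye box:
fixed_width_mm = (ipd_max_mm - ipd_min_mm) / 2.0 + eye_box_mm  # 20.0 mm (w1-like)
# A display that moves to the pupil only needs the eye box itself:
movable_width_mm = eye_box_mm                                  # 10.0 mm (w2-like)
print(fixed_width_mm, movable_width_mm)
```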
In contrast, referring to FIG. 13, the display unit 500 according to the other embodiment is formed so that an in-coupling portion 510 and an out-coupling portion 520 are connected to each other without being separated. In addition, in the present embodiment, the first nanopattern formed on the in-coupling portion 510 is configured to have a pattern equivalent or identical to the second nanopattern formed on the out-coupling portion 520.
Referring to FIGS. 14 and 15, the structures and shapes of the first nanopattern and the second nanopattern formed on the in-coupling portion 510 and the out-coupling portion 520, respectively, according to the other embodiment of the present invention will be described.
FIG. 14 is a diagram illustrating a side of the display unit of the present invention along the XIV line shown in FIG. 13, and FIG. 15 is a diagram illustrating a side of the display unit along the XV line shown in FIG. 14.
Referring to FIGS. 14 and 15, in the present embodiment the first nanopattern formed on the in-coupling portion 510 and the second nanopattern formed on the out-coupling portion 520 include a plurality of first protrusions 510a and a plurality of second protrusions 520a, respectively.
All of the first and second protrusions 510a and 520a protrude from a surface of the display unit 500 toward the face of the user, and the protrusions are formed in the same or an equivalent pattern. Therefore, although the heights by which the first and second protrusions 510a and 520a protrude from the display unit 500 may differ, the shapes or forms in which they protrude may be the same as or equivalent to each other.
Referring to FIGS. 15(a), 15(b), and 15(c), the first protrusions 510a and the second protrusions 520a may all be disposed in parallel to each other. In addition, the heights h1, h2, and h3 by which the first protrusions 510a and the second protrusions 520a protrude from the display unit 500, and the shapes in which they protrude, may vary depending on the angles r1 and r2 that the first protrusions 510a and the second protrusions 520a form with respect to the display unit 500.
Accordingly, the height of the first protrusions 510a protruding from the surface of the display unit 500 may be configured to be greater than, smaller than, or the same as the height of the second protrusions 520a protruding from the surface of the display unit 500. Even when the heights differ, however, the shapes of the first protrusions 510a and the second protrusions 520a protruding from the display unit 500 are configured to be the same as or equivalent to each other.
That is, when the first protrusions 510a protrude from the surface of the display unit 500 by the height h1 as illustrated in FIG. 15(a), they are configured to be perpendicular to the surface of the display unit 500. Like the first protrusions, the second protrusions 520a are also configured to be perpendicular to the surface of the display unit 500, although the height by which the second protrusions 520a protrude may be the same as or different from h1.
When the first protrusions 510a protrude from the surface of the display unit 500 by the height h2 as illustrated in FIG. 15(b) and form an acute angle with respect to that surface, the second protrusions 520a may likewise be configured to form an acute angle with respect to the surface of the display unit 500, and the height by which the second protrusions 520a protrude may be configured to be the same as or equivalent to h2.
In addition, when the first protrusions 510a protrude from the surface of the display unit 500 by the height h3 as illustrated in FIG. 15(c) and form an obtuse angle with respect to that surface, the second protrusions 520a may likewise be configured to form an obtuse angle with respect to the surface of the display unit 500, and the height by which the second protrusions 520a protrude may be configured to be the same as or equivalent to h3.
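One simple way to see why the heights h1, h2, h3 track the angles r1, r2 is to model each protrusion as a feature of fixed slant length leaning at some angle to the surface; its projected height is then slant length times sin(angle). The 100 nm slant length below is a hypothetical figure, not a dimension from the patent.

```python
import math

def protrusion_height_nm(slant_length_nm, angle_deg):
    """Projected height of a protrusion of fixed slant length leaning at
    angle_deg to the display surface: h = L * sin(angle)."""
    return slant_length_nm * math.sin(math.radians(angle_deg))

for angle in (90.0, 60.0, 120.0):  # perpendicular, acute, obtuse, as in FIG. 15
    print(angle, round(protrusion_height_nm(100.0, angle), 1))
# 90.0 -> 100.0 nm; 60.0 and 120.0 -> 86.6 nm (same shape, lower height)
```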
As described above, in the present embodiment, the first nanopattern and the second nanopattern formed on the in-coupling portion 510 and the out-coupling portion 520, respectively, are configured to be the same as or equivalent to each other. In addition, since the in-coupling portion 510 and the out-coupling portion 520 are connected so as not to be physically distinguishable from each other, the beam for the image or video generated by the control unit may be emitted from the out-coupling portion 520 even if it is incident on any area of the display unit 500. The light efficiency of the optical system included in the electronic device 20 is thereby greatly increased.
Hereinafter, a control unit 400 and a moving unit 600 configured to move the display unit 500 according to the other embodiment of the present disclosure will be described with reference to FIGS. 16 and 17.
FIGS. 16 and 17 are diagrams illustrating the movement of the display unit of the present invention.
Referring to FIGS. 16 and 17, in the other embodiment of the present invention, the control unit 400 is disposed in contact with an upper end of the display unit 500 and is connected to, or integrally formed with, the moving unit 600 capable of horizontally moving the display unit 500. That is, the control unit 400 is disposed in parallel with the eyebrow line over the eye of the user, as illustrated in FIG. 16. To this end, the moving unit 600 may further include a front frame 603 on which the control unit 400 is mounted. Referring to FIG. 16, the control unit 400 is held in parallel with the eyebrow line of the user by the front frame 603, and the display unit 500 may be horizontally moved along a groove formed in the front frame 603.
In this case, the control unit 400 includes a first control module 401 corresponding to the left eye of the user and a second control module 402 corresponding to the right eye of the user, and the configurations and functions of the first control module 401 and the second control module 402 are identical. In addition, the first control module 401 and the second control module 402 are configured to interlock with each other.
Likewise, the display unit 500 includes a first display unit 501 corresponding to the left eye of the user and a second display unit 502 corresponding to the right eye of the user, and the moving unit 600 includes a first moving unit 601 and a second moving unit 602 for horizontally moving the first display unit 501 and the second display unit 502, respectively.
As illustrated in FIGS. 16 and 17, the first display unit 501 is disposed to correspond to the left eye of the user, and the first control module 401 and the first moving unit 601 are disposed in contact with an upper end of the first display unit 501. In addition, the second display unit 502 is disposed to correspond to the right eye of the user, and the second control module 402 and the second moving unit 602 are disposed in contact with an upper end of the second display unit 502.
The first moving unit 601 and the second moving unit 602 may horizontally move the first display unit 501 and the second display unit 502 to correspond to the positions of the user's two pupils. To this end, as illustrated in FIG. 18, each of the first moving unit 601 and the second moving unit 602 may further include a plate 612 coupled to a portion of the first and second display units 501 and 502, a spring 622 connected to one end of the plate 612, and a pressurizing unit 632 capable of pressurizing the other end of the plate.
FIG. 18 is a diagram illustrating the configuration of the moving unit for moving the display unit according to the other embodiment of the present invention.
As illustrated in FIG. 18, when the pressurizing unit 632 positioned at the other end of the plate 612 (the left side of the plate in the figure) pressurizes that end, the spring 622 connected to one end of the plate 612 is compressed, and the plate 612 moves to the right in FIG. 18, which in turn moves the second display unit 502 to the right.
On the other hand, when the pressure applied to the other end of the plate 612 by the pressurizing unit 632 is removed, the plate 612 is moved back to the left in FIG. 18 by the elastic restoration of the spring 622, and the second display unit 502 connected to the plate 612 moves to the left.
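A minimal Hooke's-law sketch of this push-against-a-spring mechanism, assuming a linear spring; the force and spring constant are made-up values, since the patent specifies neither.

```python
def plate_displacement_mm(force_n, spring_k_n_per_mm):
    """Displacement of the plate (and the display unit coupled to it)
    when the pressurizing unit applies force_n against a linear spring:
    x = F / k. Removing the force returns the plate to x = 0."""
    return force_n / spring_k_n_per_mm

print(plate_displacement_mm(0.5, 0.2))  # 2.5 mm horizontal shift
```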
In addition, the moving unit 600 may be configured to include an air pocket (not illustrated) that accommodates a portion of the display unit 500 instead of the plate 612 and the spring 622. In this case, when a portion of the upper end of the display unit 500 is accommodated in the air pocket and the pressurizing unit 632 pressurizes one end of the air pocket, the air pocket is deformed and the accommodated portion of the display unit 500 is pressed, moving the display unit 500. In the present embodiment, by using the configuration in which the moving unit 600 moves the display unit 500, the display unit 500 may be made to correspond to the pupil positions, which differ from user to user.
In addition, the control unit 400 according to the present embodiment may recognize the pupil position of each user and control the moving unit 600 so that the display unit 500 corresponds to that pupil position. To this end, the control unit 400 may further include a pupil recognizing unit capable of recognizing the pupil of the user, and the pupil recognizing unit may be provided with a separate camera to recognize the pupil.
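The patent leaves the recognition method open; as one plausible sketch, a camera-based pupil recognizing unit could locate the pupil as the most prominent dark circle in an eye-camera frame using OpenCV's Hough circle transform. All parameter values here are untuned assumptions.

```python
import cv2
import numpy as np

def find_pupil_center(eye_image_gray):
    """Return the (x, y) pixel position of the most prominent circle in a
    grayscale eye-camera frame, or None if no circle is found. The moving
    unit could then shift the display unit toward this position."""
    blurred = cv2.medianBlur(eye_image_gray, 5)  # suppress sensor noise
    circles = cv2.HoughCircles(
        blurred, cv2.HOUGH_GRADIENT, dp=1, minDist=50,
        param1=100, param2=30, minRadius=10, maxRadius=60)
    if circles is None:
        return None
    x, y, _radius = np.round(circles[0, 0]).astype(int)
    return (x, y)
```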
In addition, since the position of the display unit 500 according to the present embodiment may be changed by the moving unit 600 to correspond to the pupil position of the user, the display unit 500 may be configured to have a smaller width than the display unit 300 according to the embodiment of the present invention illustrated in FIG. 12.
That is, referring to FIG. 13, the width w2 of the second size, representing the width of the display unit 500 according to the other embodiment of the present invention, is smaller than the width w1 of the first size, representing the width of the display unit 300 according to the embodiment of the present invention.
In addition, the width w2 of the second size may be formed to correspond to a diameter of at least one of the pupil, iris, and eye of the user. The width w2 of the second size may also be configured to be half of the length of the first control module 401 or of the second control module 402.
Accordingly, the display unit 500 according to the other embodiment of the present invention may be formed to have a smaller size, volume, and weight than the display unit 300 according to the embodiment of the present invention, which makes it possible to minimize the size, volume, and weight of the entire electronic device.
The particular embodiments and other embodiments of the present invention described above are not mutually exclusive or distinguishable from each other. Individual structures or functions of the particular embodiments or other embodiments described above may be used in parallel or in combination.
For example, structure A described with reference to one embodiment and/or figure may be combined with structure B described with reference to another embodiment and/or figure. In other words, even if a combination of two different structures is not explicitly indicated, the combination should be understood to be possible unless otherwise stated as impossible.
The detailed descriptions above should be regarded as illustrative rather than restrictive in every aspect. The technical scope of the present invention should be determined by a reasonable interpretation of the appended claims, and all modifications that fall within an equivalent scope of the present invention belong to the technical scope of the present invention.
INDUSTRIAL APPLICABILITY
An electronic device according to the present disclosure may be associated with an artificial intelligence module, a robot, an augmented reality (AR) device, a virtual reality (VR) device, and devices related to 5G services.

Claims (13)

What is claimed is:
1. An electronic device that is wearable on a head of a user, the electronic device comprising:
a display unit including an in-coupling portion having a first nanopattern formed thereon, and an out-coupling portion having a second nanopattern formed thereon;
a moving unit moving the display unit to a position corresponding to a pupil position of the user; and
a control unit recognizing the pupil position of the user and controlling the moving unit so that the position of the display unit corresponds to the pupil position,
wherein the first nanopattern and the second nanopattern are the same pattern.
2. The electronic device of claim 1,
wherein the first nanopattern includes a plurality of first protrusions protruding from one surface of the display unit to a face of the user,
the second nanopattern includes a plurality of second protrusions protruding from one surface of the display unit to the face of the user, and
the first protrusions and the second protrusions are all parallel to one another.
3. The electronic device of claim 2,
wherein a height of the first protrusions protruding from the one surface of the display unit is greater than a height of the second protrusions protruding from the one surface of the display unit.
4. The electronic device of claim 2,
wherein a height of the first protrusions protruding from the one surface of the display unit is smaller than a height of the second protrusions protruding from the one surface of the display unit.
5. The electronic device of claim 2,
wherein a height of the first protrusions protruding from the one surface of the display unit is the same as a height of the second protrusions protruding from the one surface of the display unit.
6. The electronic device of claim 2,
wherein angles formed by the first protrusions and the second protrusions with respect to the one surface of the display unit are perpendicular.
7. The electronic device of claim 2,
wherein angles formed by the first protrusions and the second protrusions with respect to the one surface of the display unit are acute.
8. The electronic device of claim 2,
wherein angles formed by the first protrusions and the second protrusions with respect to the one surface of the display unit are obtuse.
9. The electronic device of claim 1,
wherein a width of the display unit is formed to correspond to a diameter of at least one of the pupil, iris, and eye of the user.
10. The electronic device of claim 1, further comprising:
a front frame on which the control unit and the display unit are mounted,
wherein the control unit further includes a first control module and a second control module, the first control module and the second control module are disposed on the front frame to be parallel to eyebrows of the user, and a width of the display unit is half of a length of the first control module or a length of the second control module.
11. The electronic device of claim 1,
wherein the moving unit further includes a plate coupled to a portion of the display unit, a spring connected to one end of the plate, and a pressurizing unit pressurizing the other end of the plate.
12. The electronic device of claim 1,
wherein the moving unit further includes an air pocket accommodating a portion of the display unit, and a pressurizing unit pressurizing one end of the air pocket.
13. The electronic device of claim 1,
wherein the control unit further includes a pupil recognizing unit recognizing the pupil of the user.
Applications Claiming Priority (1)

Application Number | Priority Date | Title
PCT/KR2019/010987 (WO2021040083A1) | 2019-08-28 | Head-wearable electronic device

Publications (2)

Publication Number | Publication Date
US20210333549A1 (en) | 2021-10-28
US11307416B2 (en) | 2022-04-19

Family

ID=68098287

Family Applications (1)

Application Number | Filing Date | Title
US16/495,379 (US11307416B2) | filed 2019-08-28, priority 2019-08-28 | Wearable electronic device on head (Active, expires 2040-09-28)

Country Status (3)

US (1): US11307416B2 (en)
KR (1): KR102820786B1 (en)
WO (1): WO2021040083A1 (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party

Publication number | Priority date / Publication date | Assignee | Title
KR102706133B1* | 2019-02-28 / 2024-09-12 | Samsung Display Co., Ltd. | Augmented reality providing device
KR20210085297A | 2019-12-30 / 2021-07-08 | LG Electronics Inc. | Electronic device
US11580839B2* | 2020-02-24 / 2023-02-14 | Amirbahman Saatchi | Systems, methods, and apparatuses for implementing a GPS directional swimming watch for the eyesight impaired
CN112255801B* | 2020-11-13 / 2022-08-05 | Guangdong Oppo Mobile Telecommunications Corp., Ltd. | Optical mounts and headsets


Family Cites Families (3)

* Cited by examiner, † Cited by third party

Publication number | Priority date / Publication date | Assignee | Title
CN103946732B* | 2011-09-26 / 2019-06-14 | Microsoft Technology Licensing, LLC | Video display modification based on sensor input for a see-through, near-eye display
US20160234485A1* | 2015-02-09 / 2016-08-11 | Steven John Robbins | Display System
NZ742518A* | 2015-11-04 / 2019-08-30 | Magic Leap Inc | Dynamic display calibration based on eye-tracking

Patent Citations (14)

* Cited by examiner, † Cited by third party

Publication number | Priority date / Publication date | Assignee | Title
KR20120014597A | 2009-06-30 / 2012-02-17 | Nokia Corporation | Optical device and method
US8194325B2* | 2009-06-30 / 2012-06-05 | Nokia Corporation | Optical apparatus and method
US20100328794A1* | 2009-06-30 / 2010-12-30 | Nokia Corporation | Optical apparatus and method
KR20140059213A | 2011-08-30 / 2014-05-15 | Microsoft Corporation | Head mounted display with iris scan profiling
US20150138248A1* | 2012-05-03 / 2015-05-21 | Martin Schrader | Image Providing Apparatus, Method and Computer Program
US10192358B2* | 2012-12-20 / 2019-01-29 | Microsoft Technology Licensing, LLC | Auto-stereoscopic augmented reality display
US20140176528A1* | 2012-12-20 / 2014-06-26 | Microsoft Corporation | Auto-stereoscopic augmented reality display
US20150323792A1* | 2014-05-07 / 2015-11-12 | Seiko Epson Corporation | Image display apparatus
JP2015213271A | 2014-05-07 / 2015-11-26 | Seiko Epson Corporation | Image display device
KR20170128608A | 2015-05-29 / 2017-11-22 | Shenzhen Royole Technologies Co., Ltd. | Display adjustment techniques and head-mounted display devices
US20180047369A1* | 2015-05-29 / 2018-02-15 | Shenzhen Royole Technologies Co. Ltd. | Method for adaptive display adjustment and head-mounted display device
KR101635892B1 | 2015-10-08 / 2016-07-04 | LG Electronics Inc. | Head Mount Display Device
KR20170064389A | 2015-12-01 / 2017-06-09 | LG Electronics Inc. | Head mounted display and method for controlling the same
US20170299860A1* | 2016-04-13 / 2017-10-19 | Richard Andrew Wall | Waveguide-Based Displays With Exit Pupil Expander

Non-Patent Citations (1)

PCT International Application No. PCT/KR2019/010987, International Search Report dated May 27, 2020, 3 pages.

Also Published As

Publication number | Publication date
US20210333549A1 | 2021-10-28
KR20190110494A | 2019-09-30
KR102820786B1 | 2025-06-13
WO2021040083A1 | 2021-03-04

Similar Documents

Publication | Title
US12025808B2 (en) | Electronic device
US10908420B2 (en) | Electronic device for virtual reality (VR), augmented reality (AR), or mixed reality (MR)
US11493757B2 (en) | Electronic device
US11074754B2 (en) | Electronic device
US11885964B2 (en) | Electronic device
US20200004023A1 (en) | Electronic device
US20210364796A1 (en) | Wearable electronic device on head
US20210063742A1 (en) | Electronic device
US10859842B2 (en) | Electronic device
US11633665B2 (en) | Electronic device
US11221458B2 (en) | Electronic device
US11275247B2 (en) | Electronic device
US11307416B2 (en) | Wearable electronic device on head
US20200004022A1 (en) | Electronic device
US20210409675A1 (en) | Wearable electronic device on head
US20200004028A1 (en) | Electronic device
US11662577B2 (en) | Electronic device
EP3845952A1 (en) | Head mounted display system with an electronic device
US20210065450A1 (en) | Electronic device
US11668934B2 (en) | Electronic device
EP3845954B1 (en) | Head mounted display with an electronic device
US11467405B2 (en) | Wearable electronic device on head
US11480792B2 (en) | Electronic device
US11378807B2 (en) | Electronic device
US20200257124A1 (en) | Electronic device

Legal Events

AS | Assignment
Owner name: LG ELECTRONICS INC., KOREA, REPUBLIC OF
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST; ASSIGNORS: SHIN, SEUNGYONG; JO, SEONG; KIM, KYOUNGTAE; AND OTHERS; REEL/FRAME: 050427/0858
Effective date: 2019-08-26

FEPP | Fee payment procedure
Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

STPP | Information on status: patent application and granting procedure in general
Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

STPP | Information on status: patent application and granting procedure in general
Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS

STPP | Information on status: patent application and granting procedure in general
Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT VERIFIED

STCF | Information on status: patent grant
Free format text: PATENTED CASE

MAFP | Maintenance fee payment
Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY
Year of fee payment: 4

