US20230073750A1 - Augmented reality (AR) imprinting methods and systems

Info

Publication number
US20230073750A1
Authority
US
United States
Prior art keywords
virtual, content, real world, location, user
Prior art date
2018-11-15
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US18/055,622
Inventor
Roger Ray Skidmore
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
EDX Technologies Inc
Original Assignee
EDX Technologies Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
2018-11-15
Filing date
2022-11-15
Publication date
2023-03-09
Application filed by EDX Technologies Inc
Priority to US18/055,622
Publication of US20230073750A1
Status: Abandoned

Abstract

Augmented reality (AR) content may be created and stored as an imprint on a virtual model, where the virtual model is modeled after or mimics a real world environment. Intuitive mobile device interfaces may be used to link content with objects or surfaces near a mobile device. Subsequent users may access the same content depending on one or more access parameters.
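
The abstract's flow can be made concrete with a short data-model sketch. The Python below is illustrative only and not the patent's implementation: the names (Imprint, AccessPolicy, can_view) and the specific access parameters (allowed users, a time frame, proximity to the imprinted surface) are assumptions chosen to mirror the idea of visual content imprinted on a virtual surface and gated by access parameters before it is served to subsequent users.

# Illustrative sketch only; class and field names are hypothetical, not from the patent.
from dataclasses import dataclass, field
from datetime import datetime
from math import dist
from typing import Optional, Set, Tuple

@dataclass
class AccessPolicy:
    allowed_users: Optional[Set[str]] = None   # None: any user may view
    valid_from: Optional[datetime] = None      # start of the allowed time frame
    valid_until: Optional[datetime] = None     # end of the allowed time frame
    max_distance_m: Optional[float] = None     # required proximity to the imprinted surface

@dataclass
class Imprint:
    content_uri: str                           # image/video placed by the creating user
    surface_id: str                            # virtual surface in the virtual model
    anchor_xyz: Tuple[float, float, float]     # anchor point, virtual-model coordinates
    policy: AccessPolicy = field(default_factory=AccessPolicy)

def can_view(imprint: Imprint, user_id: str,
             user_xyz: Tuple[float, float, float], now: datetime) -> bool:
    """Evaluate the access parameters before serving the imprint to a later user."""
    p = imprint.policy
    if p.allowed_users is not None and user_id not in p.allowed_users:
        return False
    if p.valid_from is not None and now < p.valid_from:
        return False
    if p.valid_until is not None and now > p.valid_until:
        return False
    if p.max_distance_m is not None and dist(user_xyz, imprint.anchor_xyz) > p.max_distance_m:
        return False
    return True

In this sketch a None field simply means that restriction is not applied, so user-based, time-based and proximity-based access rules stay optional and composable.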

Description

Claims (29)

I claim:
1. A method for augmented reality (AR) using a virtual model, comprising:
mapping a real world location and orientation within a geodetic datum to a pose within a virtual model, wherein virtual locations in the virtual model correspond with respective real world locations, wherein the virtual model contains virtual objects which are employable for storing and retrieving visual content, wherein a virtual object at a given virtual location is associated with a particular real world location that corresponds with the given virtual location;
selecting one or more virtual surfaces of one or more virtual objects from among the virtual objects within the virtual model using the pose within the virtual model; and
placing visual content on the selected one or more virtual surfaces such that the visual content is incorporated into and stored with the virtual model.
2. The method of claim 1, wherein the selecting step allows selection of surfaces which are only partially within view based on the pose.
3. The method of claim 1, wherein the one or more virtual surfaces onto which the visual content is placed include one or more surfaces which are only partially within view based on the pose.
4. The method of claim 1, wherein the selecting step allows selection of surfaces belonging to virtual objects a view of which is obstructed.
5. The method of claim 1, further comprising a step of refining boundaries of the selected one or more virtual surfaces.
6. The method of claim 1, further comprising a step of restricting access to the placed visual content.
7. The method of claim 6, wherein access is restricted to a particular time frame.
8. The method of claim 6, wherein access is restricted to users within a predetermined proximity with respect to certain objects or locations.
9. The method of claim 6, wherein access is restricted to users within a predetermined distance of a virtual surface onto which the visual content was placed.
10. The method of claim 1, wherein the virtual model corresponds to a real world.
11. The method of claim 1, wherein the one or more virtual objects correspond with real world buildings.
12. The method of claim 1, wherein the one or more virtual objects correspond with real world objects which include landmarks and structures.
13. The method of claim 1, further comprising:
determining what AR content from the virtual model to serve to a user based on one or more considerations; and
serving the determined AR content to a display device of the user.
14. The method of claim 13, wherein the one or more considerations comprise a location and orientation of the display device.
15. The method of claim 14, wherein the one or more considerations further comprise the location and orientation of the display device being within a predetermined viewing distance of a virtual location of the AR content.
16. The method of claim 13, further comprising:
storing multiple visual contents including the placed visual content with a single virtual object and/or a single surface of the single virtual object,
wherein the determined and served AR content includes at least some of the multiple visual contents.
17. The method of claim 16, wherein the multiple visual contents are timestamped, and wherein the one or more considerations comprise the timestamps.
18. The method of claim 1, further comprising:
generating a new virtual object within the virtual model; and
storing further AR content with the new virtual object, wherein the AR content is stored so as to be in semantic context with respect to a two- or three-dimensional configuration and arrangement of other content of the virtual model including the placed visual content,
wherein the placed visual content comprises an image or video.
19. The method of claim 18, further comprising retrieving and serving to a display device the placed visual content and the further AR content.
20. The method of claim 1, further comprising updating the one or more virtual objects, and augmentations based on those one or more virtual objects, when new virtual object data becomes available or existing data becomes outdated or expired.
21. The method of claim 1, further comprising generating a new virtual object within the virtual model, wherein the one or more virtual surfaces selected by the selecting step are from the new virtual object, and wherein the visual content placed by the placing step is from user-drawn content.
22. The method of claim 21, further comprising retrieving and serving to a display device the visual content for display as augmented reality.
23. The method of claim 1, further comprising:
storing multiple visual contents including the placed visual content with a single virtual object and/or a single surface of the single virtual object; and
granting access to the visual contents on a user-by-user basis.
24. The method of claim 1, further comprising:
storing multiple visual contents including the placed visual content with a single virtual object and/or a single surface of the single virtual object; and
granting access to the visual contents on a user-by-user basis based on one or more of user identification, user class, lifetime or expiration date, and proximity at time of access request.
25. A non-transitory computer readable medium comprising computer readable instructions which, when executed by one or more processors, cause the one or more processors to perform:
mapping a real world location and orientation within a geodetic datum to a pose within a virtual model, wherein virtual locations in the virtual model correspond with respective real world locations, wherein the virtual model contains virtual objects which are employable for storing and retrieving visual content, wherein a virtual object at a given virtual location is associated with a particular real world location that corresponds with the given virtual location;
selecting one or more virtual surfaces of one or more virtual objects from among the virtual objects within the virtual model using the pose within the virtual model; and
placing visual content on the selected one or more virtual surfaces such that the visual content is incorporated into and stored with the virtual model.
26. A non-transitory computer readable medium comprising computer readable instructions which, when executed by one or more processors, cause the one or more processors to perform:
capturing an image or video of a real world environment with a camera;
recording the real world location and orientation of the camera within a geodetic datum;
mapping the real world location and orientation to a pose within a virtual model, wherein virtual locations in the virtual model correspond with respective real world locations, wherein the virtual model contains virtual objects which are employable for storing and retrieving visual content, wherein a virtual object at a given virtual location is associated with a particular real world location that corresponds with the given virtual location;
generating a new virtual object within the virtual model;
selecting the new virtual object from among the virtual objects within the virtual model using the pose within the virtual model;
placing visual content on one or more virtual surfaces of the selected new virtual object such that the visual content is incorporated into and stored with the virtual model; and
displaying the visual content to a user as an AR augmentation to the image or video of the real world environment.
27. The non-transitory computer readable medium of claim 26, wherein the computer readable instructions execute partly on a user's computer and partly on a remote computer.
28. The non-transitory computer readable medium of claim 26, wherein at least the recording step is performed by a first mobile device, and wherein the selecting step is performed by the first mobile device, a second mobile device different from the first mobile device, or a cloud computing device.
29. The non-transitory computer readable medium of claim 26, wherein the capturing step is performed by a first device which comprises the camera, and wherein the displaying step is performed by an output device other than the first device.
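
The core pipeline of claims 1, 13 and 26 (map a geodetic location and orientation to a pose in the virtual model, select virtual surfaces using that pose, place visual content on a selected surface) can be sketched as follows. This is a hedged illustration, not the patent's implementation: the flat-earth mapping from a WGS84-style latitude/longitude to local model coordinates, the simple field-of-view test, and all names (VirtualModel, VirtualSurface, map_to_model_pose, select_surfaces, place_content) are assumptions.

# Hypothetical sketch of the mapping / selecting / placing steps; all names and
# the equirectangular approximation are assumptions for illustration.
from dataclasses import dataclass, field
from math import atan2, cos, degrees, hypot, radians
from typing import Dict, List, Tuple

EARTH_RADIUS_M = 6_371_000.0

@dataclass
class VirtualSurface:
    surface_id: str
    center_xy: Tuple[float, float]                  # virtual-model coordinates, metres
    imprints: List[str] = field(default_factory=list)

@dataclass
class VirtualModel:
    origin_lat: float                               # geodetic origin of the model (e.g. WGS84)
    origin_lon: float
    surfaces: Dict[str, VirtualSurface] = field(default_factory=dict)

def map_to_model_pose(model: VirtualModel, lat: float, lon: float,
                      heading_deg: float) -> Tuple[float, float, float]:
    """Map a real-world location and orientation to an (x, y, heading) pose in the model.
    Uses a flat-earth (equirectangular) approximation, adequate near the model origin."""
    x = radians(lon - model.origin_lon) * EARTH_RADIUS_M * cos(radians(model.origin_lat))
    y = radians(lat - model.origin_lat) * EARTH_RADIUS_M
    return x, y, heading_deg

def select_surfaces(model: VirtualModel, pose: Tuple[float, float, float],
                    max_range_m: float = 50.0,
                    half_fov_deg: float = 45.0) -> List[VirtualSurface]:
    """Return surfaces roughly in front of the pose; a fuller implementation could also
    admit partially visible or obstructed surfaces, in the spirit of claims 2-4."""
    x, y, heading = pose
    hits: List[VirtualSurface] = []
    for s in model.surfaces.values():
        dx, dy = s.center_xy[0] - x, s.center_xy[1] - y
        if hypot(dx, dy) > max_range_m:
            continue
        bearing = degrees(atan2(dx, dy)) % 360.0                 # bearing from +y (north), clockwise
        off_axis = (bearing - heading + 180.0) % 360.0 - 180.0   # signed angle to the heading
        if abs(off_axis) <= half_fov_deg:
            hits.append(s)
    return hits

def place_content(surface: VirtualSurface, content_uri: str) -> None:
    """Store the visual content with the virtual model (the placing step of claim 1)."""
    surface.imprints.append(content_uri)

A caller would map the device's reported latitude, longitude and heading into the model, pick one of the returned surfaces, and call place_content with the captured image or video; under claims 13-15, the same content would later be served to any display device whose mapped pose lies within a predetermined viewing distance of that surface.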
US18/055,622 | 2018-11-15 | 2022-11-15 | Augmented reality (AR) imprinting methods and systems | Abandoned | US20230073750A1 (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
US18/055,622 (US20230073750A1) | 2018-11-15 | 2022-11-15 | Augmented reality (AR) imprinting methods and systems

Applications Claiming Priority (4)

Application Number | Priority Date | Filing Date | Title
US201862767676P | 2018-11-15 | 2018-11-15
PCT/US2019/061751 (WO2020102687A1) | 2018-11-15 | 2019-11-15 | Augmented reality (AR) imprinting methods and systems
US202117294188A | 2021-05-14 | 2021-05-14
US18/055,622 (US20230073750A1) | 2018-11-15 | 2022-11-15 | Augmented reality (AR) imprinting methods and systems

Related Parent Applications (2)

Application Number | Relation | Priority Date | Filing Date | Title
US17/294,188 (US11532138B2) | Continuation | 2018-11-15 | 2019-11-15 | Augmented reality (AR) imprinting methods and systems
PCT/US2019/061751 (WO2020102687A1) | Continuation | 2018-11-15 | 2019-11-15 | Augmented reality (AR) imprinting methods and systems

Publications (1)

Publication Number | Publication Date
US20230073750A1 (en) | 2023-03-09

Family

ID=70731714

Family Applications (2)

Application Number | Status | Publication | Priority Date | Filing Date | Title
US17/294,188 | Active | US11532138B2 (en) | 2018-11-15 | 2019-11-15 | Augmented reality (AR) imprinting methods and systems
US18/055,622 | Abandoned | US20230073750A1 (en) | 2018-11-15 | 2022-11-15 | Augmented reality (AR) imprinting methods and systems

Family Applications Before (1)

Application Number | Status | Publication | Priority Date | Filing Date | Title
US17/294,188 | Active | US11532138B2 (en) | 2018-11-15 | 2019-11-15 | Augmented reality (AR) imprinting methods and systems

Country Status (5)

Country | Link
US (2) | US11532138B2 (en)
EP (1) | EP3881294A4 (en)
JP (1) | JP2022507502A (en)
CA (1) | CA3119609A1 (en)
WO (1) | WO2020102687A1 (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
US11172248B2 (en) | 2019-01-22 | 2021-11-09 | Tempus Ex Machina, Inc. | Systems and methods for customizing and compositing a video feed at a client device
US11087595B2 (en)* | 2019-01-24 | 2021-08-10 | Igt | System and method for wagering on virtual elements overlaying a sports betting field
EP3712781A1 (en)* | 2019-03-20 | 2020-09-23 | Nokia Technologies Oy | An apparatus and associated methods for presentation of presentation data
US11558711B2 (en)* | 2021-03-02 | 2023-01-17 | Google Llc | Precision 6-DoF tracking for wearable devices
US11587266B2 (en)* | 2021-07-21 | 2023-02-21 | Tempus Ex Machina, Inc. | Adding augmented reality to a sub-view of a high resolution central video feed
EP4164255A1 (en) | 2021-10-08 | 2023-04-12 | Nokia Technologies Oy | 6dof rendering of microphone-array captured audio for locations outside the microphone-arrays
JP7441579B1 (en)* | 2023-06-07 | 2024-03-01 | 株式会社センシンロボティクス | Information processing system and information processing method
TWI874102B (en)* | 2024-01-16 | 2025-02-21 | 開曼群島商沛嘻科技股份有限公司 | Method and system for triggering an intelligent dialogue through an augmented-reality image

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
EP1661310A4 (en)* | 2003-08-29 | 2010-03-31 | Rgb Networks Inc | ADVANCED ADAPTIVE VIDEO MULTIPLEXER SYSTEM
US10528705B2 (en) | 2006-05-09 | 2020-01-07 | Apple Inc. | Determining validity of subscription to use digital content
US8350871B2 (en) | 2009-02-04 | 2013-01-08 | Motorola Mobility Llc | Method and apparatus for creating virtual graffiti in a mobile virtual and augmented reality system
US20100257252A1 (en)* | 2009-04-01 | 2010-10-07 | Microsoft Corporation | Augmented Reality Cloud Computing
US20110279445A1 (en) | 2010-05-16 | 2011-11-17 | Nokia Corporation | Method and apparatus for presenting location-based content
EP2572336B1 (en) | 2010-05-18 | 2021-09-15 | Teknologian Tutkimuskeskus VTT | Mobile device, server arrangement and method for augmented reality applications
US9117483B2 (en)* | 2011-06-03 | 2015-08-25 | Michael Edward Zaletel | Method and apparatus for dynamically recording, editing and combining multiple live video clips and still photographs into a finished composition
US10139623B2 (en)* | 2013-06-18 | 2018-11-27 | Microsoft Technology Licensing, Llc | Virtual object orientation and visualization
CN106937531B (en) | 2014-06-14 | 2020-11-06 | 奇跃公司 | Method and system for generating virtual and augmented reality
US10471353B2 (en) | 2016-06-30 | 2019-11-12 | Sony Interactive Entertainment America Llc | Using HMD camera touch button to render images of a user captured during game play

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
US20120249741A1 (en)* | 2011-03-29 | 2012-10-04 | Giuliano Maciocci | Anchoring virtual images to real world surfaces in augmented reality systems
US10453259B2 (en)* | 2013-02-01 | 2019-10-22 | Sony Corporation | Information processing device, client device, information processing method, and program
US9530426B1 (en)* | 2015-06-24 | 2016-12-27 | Microsoft Technology Licensing, Llc | Filtering sounds for conferencing applications
US10726597B1 (en)* | 2018-02-22 | 2020-07-28 | A9.Com, Inc. | Optically challenging surface detection for augmented reality
US20190362555A1 (en)* | 2018-05-25 | 2019-11-28 | Tiff's Treats Holdings Inc. | Apparatus, method, and system for presentation of multimedia content including augmented reality content

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
TWI877952B (en)* | 2023-12-14 | 2025-03-21 | 中華電信股份有限公司 | System, method and computer program product for synchronizing virtual and real world scene content

Also Published As

Publication number | Publication date
WO2020102687A1 (en) | 2020-05-22
EP3881294A1 (en) | 2021-09-22
US20220005281A1 (en) | 2022-01-06
CA3119609A1 (en) | 2020-05-22
EP3881294A4 (en) | 2022-08-24
JP2022507502A (en) | 2022-01-18
US11532138B2 (en) | 2022-12-20

Similar Documents

Publication | Title
US20230073750A1 (en) | Augmented reality (AR) imprinting methods and systems
CN110954083B (en) | Positioning of mobile devices
US20250139915A1 (en) | Techniques for capturing and displaying partial motion in virtual or augmented reality scenes
US12020667B2 (en) | Systems, methods, and media for displaying interactive augmented reality presentations
US10607320B2 (en) | Filtering of real-time visual data transmitted to a remote recipient
US11151791B2 (en) | R-snap for production of augmented realities
US20200312029A1 (en) | Augmented and virtual reality
WO2022022028A1 (en) | Virtual object control method and apparatus, and device and computer-readable storage medium
US11410330B2 (en) | Methods, devices, and systems for determining field of view and producing augmented reality
US11302067B2 (en) | Systems and method for realistic augmented reality (AR) lighting effects
JP2014525089A5 (en)
JP2014525089A (en) | 3D feature simulation
US20190244431A1 (en) | Methods, devices, and systems for producing augmented reality
CN106157359A | A kind of method for designing of virtual scene experiencing system
US20200211295A1 (en) | Methods and devices for transitioning among realities mediated by augmented and/or virtual reality devices
CN104486586B (en) | A kind of disaster lifesaving simulated training method and system based on video map
US11568579B2 (en) | Augmented reality content generation with update suspension
JP2019509540A (en) | Method and apparatus for processing multimedia information
CN113678173A (en) | Method and apparatus for drawing-based placement of virtual objects
CN111918114A (en) | Image display method, image display device, display equipment and computer readable storage medium
US11816759B1 (en) | Split applications in a multi-user communication session
CN117897677A (en) | Artificial reality device acquisition, control and sharing
JP6680886B2 (en) | Method and apparatus for displaying multimedia information
CN112639889A (en) | Content event mapping
CN116600045B (en) | A schedule display method, a schedule display device and a storage medium

Legal Events

Code | Title | Description
STPP | Information on status: patent application and granting procedure in general | DOCKETED NEW CASE - READY FOR EXAMINATION
STPP | Information on status: patent application and granting procedure in general | NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS
STCB | Information on status: application discontinuation | ABANDONED -- FAILURE TO PAY ISSUE FEE

