
US20230291882A1 - Cloud-based Rendering of Interactive Augmented/Virtual Reality Experiences - Google Patents

Cloud-based Rendering of Interactive Augmented/Virtual Reality Experiences

Info

Publication number
US20230291882A1
Authority
US
United States
Prior art keywords
client device
server
display
user
tracking
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
US18/197,999
Other versions
US12192434B2 (en)
Inventor
Clifford S. Champion
Jonathan J. Hosenpud
Baifang Lu
Alex Shorey
Robert D. Kalnins
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
zSpace Inc
Original Assignee
zSpace Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by zSpace Inc
Priority to US18/197,999 (US12192434B2)
Assigned to ZSPACE, INC. (assignment of assignors interest; see document for details). Assignors: HOSENPUD, Jonathan J.; LU, BAIFANG; CHAMPION, CLIFFORD S.; KALNINS, ROBERT D.; SHOREY, ALEX
Publication of US20230291882A1
Application granted
Publication of US12192434B2
Assigned to 3I, L.P. (security interest; see document for details). Assignor: ZSPACE, INC.
Legal status: Active (current)
Anticipated expiration


Abstract

Systems and methods for cloud-based rendering of interactive augmented reality (AR) and/or virtual reality (VR) experiences. A client device may initiate execution of a content application on a server and provide information associated with the content application to the server. The client device may initialize, while awaiting a notification from the server, local systems associated with the content application and, upon receipt of the notification, provide, to the server, information associated with the local systems. Further, the client device may receive, from the server, data associated with the content application and render an AR/VR scene based on the received data. The data may be based, at least in part, on the information associated with the local systems. The providing and receiving may be performed periodically, e.g., at a rate sufficient to sustain a comfortable viewing environment of the AR/VR scene for a user of the client device.
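The client/server flow described in the abstract can be sketched as a short loop. This is purely an illustrative sketch; the patent does not define an API, and all class, method, and field names below (StubServer, start_content_app, render_frame, head_pose, etc.) are hypothetical stand-ins.

```python
# Illustrative sketch of the abstract's client/server flow. All names are
# hypothetical; the patent does not prescribe an API or wire format.

class StubServer:
    """Stands in for the cloud rendering server."""

    def start_content_app(self, client_info):
        # Server launches the content application and notifies the client.
        self.client_info = client_info
        return {"status": "ready"}

    def render_frame(self, tracking_state):
        # Server renders the AR/VR scene using the latest tracking state
        # and returns encoded frame data for the client to display.
        return {"frame": f"scene@{tracking_state['head_pose']}"}


def client_session(server, num_frames=3):
    # 1. Initiate the content application and provide client information.
    notification = server.start_content_app(
        {"display_resolution": (1920, 1080), "display_model": "example-3d"})
    assert notification["status"] == "ready"

    # 2. Periodically provide local tracking state and receive frame data.
    frames = []
    for i in range(num_frames):
        tracking_state = {"head_pose": (0.0, 0.0, float(i))}
        data = server.render_frame(tracking_state)
        frames.append(data["frame"])  # stand-in for rendering the 3D scene
    return frames
```

In practice the per-frame exchange would run at a rate high enough to sustain comfortable viewing, as the abstract notes.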

Description

Claims (20)

We claim:
1. A non-transitory computer readable memory medium storing programming instructions executable by processing circuitry of a client device comprising a three-dimensional (3D) display to:
receive, from a server executing a content application, data based on the content application, wherein the data is received over a network, and wherein the data is based on information the client device provided to the server;
render a 3D scene based on the received data; and
incorporate attributes with semantic hints as input options into an artificial intelligence (AI) image enhancement algorithm, wherein the AI image enhancement algorithm is optimized for different object types within the 3D scene, and wherein the different object types within the 3D scene include one or more of text, two-dimensional (2D) images, or 3D images.
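The semantic-hint idea in claim 1 can be sketched as routing each object type to an enhancement path tuned for it. The "enhancers" below are trivial string stand-ins, not a real AI model; all names are illustrative assumptions.

```python
# Minimal sketch of claim 1's semantic-hint routing: text, 2D images, and
# 3D images each get an enhancement path tuned for that object type.
# The lambdas are placeholders, not an actual AI enhancement algorithm.

def enhance_objects(objects):
    enhancers = {
        "text": lambda img: f"sharpened({img})",  # text favors edge clarity
        "2d":   lambda img: f"denoised({img})",
        "3d":   lambda img: f"upscaled({img})",
    }
    return [enhancers[obj["type"]](obj["image"]) for obj in objects]
```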
2. The non-transitory computer readable memory medium of claim 1,
wherein the information includes first information that is associated with the client device, and wherein the first information includes one or more of a display size, a display resolution, a display model, system specifications, a system language, or a system locale.
3. The non-transitory computer readable memory medium of claim 1,
wherein the programming instructions are further executable by the processing circuitry of the client device to:
create a local stereoscopic-enabled window or full-screen surface; and
initialize tracking sub-systems.
4. The non-transitory computer readable memory medium of claim 1,
wherein the client device includes one or more of a head tracking sub-system, an eye tracking sub-system, or a user tracking sub-system, and wherein user tracking includes at least one of user hand tracking or stylus tracking.
5. The non-transitory computer readable memory medium of claim 1,
wherein the information includes information associated with local systems, and wherein to provide the information associated with the local systems to the server, the programming instructions are further executable by the processing circuitry of the client device to:
receive local tracking system state information;
encode the local tracking system state information;
transmit the local tracking system state information to the server via the network;
encode user input events and user input state information; and
transmit the user input events and user input state information to the server via the network.
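The encode-and-transmit steps of claim 5 can be sketched as follows. The claim does not specify a wire format, so JSON is used purely for illustration; the function names and payload fields are hypothetical.

```python
import json

# Hypothetical sketch of claim 5's encode steps: serialize local tracking
# state and user input for upload to the server. JSON is an illustrative
# choice only; the patent does not specify an encoding.

def encode_tracking_state(state):
    """Serialize local tracking-system state (head/eye/user tracking)."""
    return json.dumps(state, sort_keys=True).encode("utf-8")

def encode_input(events, input_state):
    """Serialize user input events (keyboard/mouse) plus input state."""
    return json.dumps({"events": events, "state": input_state},
                      sort_keys=True).encode("utf-8")

tracking_payload = encode_tracking_state({
    "head": {"state": "tracking", "quality": 0.98, "pose": [0.0, 1.6, 0.3]},
    "eye":  {"state": "tracking", "quality": 0.91, "pose": [0.0, 1.6, 0.0]},
})
input_payload = encode_input(
    events=[{"type": "key_down", "key": "w"}],
    input_state={"mouse": {"x": 100, "y": 200}},
)
```

The resulting byte payloads would then be transmitted to the server over the network, per the claim.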
6. The non-transitory computer readable memory medium of claim 5,
wherein the local tracking system state information includes one or more of:
head tracking state, quality, and poses;
eye tracking state, quality, and poses; or
user tracking state, quality, and poses.
7. The non-transitory computer readable memory medium of claim 5,
wherein the user input events include one or more of keyboard events or mouse events.
8. The non-transitory computer readable memory medium of claim 1,
wherein the AI image enhancement algorithm receives a recent history of images and enhances image clarity based on the recent history of images.
9. A three-dimensional (3D) display system comprising:
at least one radio, wherein the at least one radio is configured to perform wireless communication using at least one radio access technology (RAT);
at least one processor coupled to the at least one radio, wherein the at least one processor and the at least one radio are configured to perform data communications;
one or more displays, coupled to the at least one processor;
a tracking system comprising two or more cameras and in communication with the at least one processor; and
a memory in communication with the tracking system and the at least one processor, wherein the at least one processor is configured to:
receive, from a server executing a content application, data based on the content application, wherein the data is received over a network, and wherein the data is based on information the 3D display system provided to the server;
render a 3D scene based on the received data; and
incorporate attributes with semantic hints as input options into an artificial intelligence (AI) image enhancement algorithm, wherein the AI image enhancement algorithm is optimized for different object types within the 3D scene, and wherein the different object types within the 3D scene include one or more of text, two-dimensional (2D) images, or 3D images.
10. The 3D display system of claim 9,
wherein the at least one processor is further configured to:
predict a movement of a user based on physiological assumptions regarding a target audience of users and one or more network factors, and wherein the one or more network factors include at least one of:
measured network reliability according to a trailing window;
a geographic location of an edge of a cell relative to a current location of the 3D display system; or
a geographic location of a content delivery network cloud facility hosting the server relative to the current location of the 3D display system.
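Claim 10's network-aware prediction can be sketched as scaling a pose-prediction horizon by network conditions measured over a trailing window. The constant-velocity model and all names below are illustrative assumptions, not the patent's method.

```python
from collections import deque

# Sketch of claim 10's idea: predict user movement further ahead when the
# measured network (trailing RTT window) is slower. The linear model and
# the use of mean RTT as the horizon are illustrative assumptions only.

class MotionPredictor:
    def __init__(self, window=30):
        self.rtt_window = deque(maxlen=window)  # trailing RTT samples (ms)

    def record_rtt(self, rtt_ms):
        self.rtt_window.append(rtt_ms)

    def prediction_horizon_ms(self):
        # Slower / less reliable networks call for predicting further ahead.
        if not self.rtt_window:
            return 0.0
        return sum(self.rtt_window) / len(self.rtt_window)

    def predict_pose(self, pose, velocity):
        # Simple constant-velocity extrapolation over the horizon.
        dt = self.prediction_horizon_ms() / 1000.0
        return tuple(p + v * dt for p, v in zip(pose, velocity))
```

A fuller implementation would also fold in the geographic factors the claim lists (cell-edge location, distance to the CDN facility hosting the server).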
11. The 3D display system of claim 9,
wherein the 3D scene includes one or more elements, and wherein, to receive data associated with the content application, the at least one processor is further configured to receive one or more streams of data, wherein each stream of data is associated with a respective element of the one or more elements.
12. The 3D display system of claim 11,
wherein the one or more elements include at least one of a menu user interface (UI) element, an interactive 3D model, a static 3D model, a background 3D model, or a background image.
13. The 3D display system of claim 11,
wherein each element of the one or more elements is encoded separately according to at least one of a respective frame rate for the element or resolution properties associated with the element.
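The per-element streaming of claims 11-13 can be sketched as a table of per-element encoder settings: each scene element travels in its own stream, encoded at its own frame rate and resolution. The element names and values below are illustrative, not taken from the patent.

```python
# Sketch of claims 11-13: each scene element (menu UI, interactive model,
# static/background content) gets its own stream with its own frame rate
# and resolution. All names and numbers here are illustrative assumptions.

ELEMENT_ENCODINGS = {
    "menu_ui":           {"fps": 15, "resolution": (1280, 720)},
    "interactive_model": {"fps": 60, "resolution": (1920, 1080)},
    "static_model":      {"fps": 5,  "resolution": (1920, 1080)},
    "background_image":  {"fps": 1,  "resolution": (960, 540)},
}

def encoder_settings(element_type):
    """Settings used when separately encoding one scene element's stream."""
    return ELEMENT_ENCODINGS[element_type]
```

The payoff of this split is that a fast-changing interactive model can stream at a high frame rate while a static background costs almost no bandwidth.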
14. A method for rendering a three-dimensional (3D) scene, comprising:
a client device,
receiving, from a server executing a content application, data based on the content application, wherein the data is received over a network, and wherein the data is based on information the client device provided to the server;
rendering a 3D scene based on the received data; and
incorporating attributes with semantic hints as input options into an artificial intelligence (AI) image enhancement algorithm, wherein the AI image enhancement algorithm is optimized for different object types within the 3D scene, and wherein the different object types within the 3D scene include one or more of text, two-dimensional (2D) images, or 3D images.
15. The method of claim 14,
wherein the information includes one or more of a display size of a display of the client device, a display resolution of a display of the client device, a display model of a display of the client device, system specifications of the client device, a system language of the client device, or a system locale of the client device.
16. The method of claim 14,
wherein the information includes local tracking system state information, and wherein the local tracking system state information includes one or more of:
head tracking state, quality, and poses;
eye tracking state, quality, and poses; or
user tracking state, quality, and poses.
17. The method of claim 14,
wherein quality of service (QoS) attributes are assigned for each channel of communication between the client device and server.
18. The method of claim 17,
wherein the QoS attributes are assigned based, at least in part, on user preferences, directives provided by the server, or directives provided by the content application.
19. The method of claim 17,
wherein the channels of communication between the client device and server include one or more of data channels for head and/or eye tracking state upload, stylus and/or hand tracking state upload, mouse and/or keyboard state upload, final frame video download, or download of individual layers.
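The per-channel QoS assignment of claims 17-19 can be sketched as a defaults table merged with overrides. The channel names and attribute values below are illustrative assumptions; the claims name the channels but not the attribute vocabulary.

```python
# Sketch of claims 17-19: QoS attributes assigned per communication channel,
# with defaults overridable by user preferences or server/content-application
# directives. Channel names and attribute values are illustrative only.

DEFAULT_QOS = {
    "head_eye_tracking_upload":    {"priority": "high",   "reliable": False},
    "stylus_hand_tracking_upload": {"priority": "high",   "reliable": False},
    "mouse_keyboard_upload":       {"priority": "medium", "reliable": True},
    "final_frame_video_download":  {"priority": "high",   "reliable": False},
    "layer_download":              {"priority": "medium", "reliable": True},
}

def assign_qos(channel, user_prefs=None, server_directives=None):
    """Merge default QoS with user preferences and server directives."""
    qos = dict(DEFAULT_QOS[channel])
    for override in (user_prefs or {}, server_directives or {}):
        qos.update(override.get(channel, {}))
    return qos
```

Tracking uploads and video downloads here trade reliability for latency (stale poses are useless), while input events default to reliable delivery.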
20. The method of claim 14,
wherein the network operates according to Third Generation Partnership Project (3GPP) Fifth Generation (5G) New Radio (NR); and
wherein the client device comprises a user equipment device (UE).
US18/197,999 | 2021-06-07 | 2023-05-16 | Cloud-based rendering of interactive augmented/virtual reality experiences | Active | US12192434B2 (en)

Priority Applications (1)

Application Number | Publication | Priority Date | Filing Date | Title
US18/197,999 | US12192434B2 (en) | 2021-06-07 | 2023-05-16 | Cloud-based rendering of interactive augmented/virtual reality experiences

Applications Claiming Priority (2)

Application Number | Publication | Priority Date | Filing Date | Title
US17/340,901 | US11843755B2 (en) | 2021-06-07 | 2021-06-07 | Cloud-based rendering of interactive augmented/virtual reality experiences
US18/197,999 | US12192434B2 (en) | 2021-06-07 | 2023-05-16 | Cloud-based rendering of interactive augmented/virtual reality experiences

Related Parent Applications (1)

Application Number | Relation | Publication | Priority Date | Filing Date | Title
US17/340,901 | Continuation | US11843755B2 (en) | 2021-06-07 | 2021-06-07 | Cloud-based rendering of interactive augmented/virtual reality experiences

Publications (2)

Publication Number | Publication Date
US20230291882A1 (en) | 2023-09-14
US12192434B2 (en) | 2025-01-07

Family

ID=82218352

Family Applications (6)

Application Number | Status | Publication | Priority Date | Filing Date | Title
US17/340,901 | Active (2042-03-03) | US11843755B2 (en) | 2021-06-07 | 2021-06-07 | Cloud-based rendering of interactive augmented/virtual reality experiences
US18/197,983 | Active (2041-07-08) | US12439013B2 (en) | 2021-06-07 | 2023-05-16 | Cloud-based rendering of interactive augmented/virtual reality experiences
US18/197,999 | Active | US12192434B2 (en) | 2021-06-07 | 2023-05-16 | Cloud-based rendering of interactive augmented/virtual reality experiences
US18/207,998 | Active (2041-06-18) | US12256052B2 (en) | 2021-06-07 | 2023-06-09 | Cloud-based rendering of interactive augmented/virtual reality experiences
US18/208,014 | Active (2041-07-16) | US12256054B2 (en) | 2021-06-07 | 2023-06-09 | Cloud-based rendering of interactive augmented/virtual reality experiences
US18/208,007 | Active (2041-07-07) | US12256053B2 (en) | 2021-06-07 | 2023-06-09 | Cloud-based rendering of interactive augmented/virtual reality experiences

Family Applications Before (2)

Application Number | Status | Publication | Priority Date | Filing Date | Title
US17/340,901 | Active (2042-03-03) | US11843755B2 (en) | 2021-06-07 | 2021-06-07 | Cloud-based rendering of interactive augmented/virtual reality experiences
US18/197,983 | Active (2041-07-08) | US12439013B2 (en) | 2021-06-07 | 2023-05-16 | Cloud-based rendering of interactive augmented/virtual reality experiences

Family Applications After (3)

Application Number | Status | Publication | Priority Date | Filing Date | Title
US18/207,998 | Active (2041-06-18) | US12256052B2 (en) | 2021-06-07 | 2023-06-09 | Cloud-based rendering of interactive augmented/virtual reality experiences
US18/208,014 | Active (2041-07-16) | US12256054B2 (en) | 2021-06-07 | 2023-06-09 | Cloud-based rendering of interactive augmented/virtual reality experiences
US18/208,007 | Active (2041-07-07) | US12256053B2 (en) | 2021-06-07 | 2023-06-09 | Cloud-based rendering of interactive augmented/virtual reality experiences

Country Status (2)

Country | Link
US (6) | US11843755B2 (en)
EP (1) | EP4102851A3 (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
US11638040B2 (en) | 2020-08-24 | 2023-04-25 | Schmied Enterprises LLC | Eco-friendly codec-based system for low latency transmission
US11843755B2 (en) | 2021-06-07 | 2023-12-12 | Zspace, Inc. | Cloud-based rendering of interactive augmented/virtual reality experiences
CN118214911A (en)* | 2022-12-15 | 2024-06-18 | 北京字跳网络技术有限公司 | Information flow processing method and related equipment
US20240242443A1 (en)* | 2023-01-12 | 2024-07-18 | Qualcomm Incorporated | Proximity-based protocol for enabling multi-user extended reality (XR) experience
CN116055670B (en)* | 2023-01-17 | 2023-08-29 | 深圳图为技术有限公司 | Method for collaborative checking three-dimensional model based on network conference and network conference system
US11954248B1 (en)* | 2023-03-17 | 2024-04-09 | Microsoft Technology Licensing, LLC | Pose prediction for remote rendering
JP2024136868A (en)* | 2023-03-24 | 2024-10-04 | パナソニックオートモーティブシステムズ株式会社 | Display device and display method
US12436617B2 (en)* | 2023-06-26 | 2025-10-07 | Adeia Guides Inc. | Systems and methods for balancing haptics and graphics rendering processing with content adaptation

Citations (14)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
US20030156188A1 (en)* | 2002-01-28 | 2003-08-21 | Abrams Thomas Algie | Stereoscopic video
US20070183650A1 (en)* | 2002-07-02 | 2007-08-09 | Lenny Lipton | Stereoscopic format converter
US20100194857A1 (en)* | 2009-02-03 | 2010-08-05 | Bit Cauldron Corporation | Method of stereoscopic 3D viewing using wireless or multiple protocol capable shutter glasses
US20170115488A1 (en)* | 2015-10-26 | 2017-04-27 | Microsoft Technology Licensing, LLC | Remote rendering for virtual images
US20170178272A1 (en)* | 2015-12-16 | 2017-06-22 | WorldViz LLC | Multi-user virtual reality processing
US20180205963A1 (en)* | 2017-01-17 | 2018-07-19 | Seiko Epson Corporation | Encoding free view point data in movie data container
US20190037011A1 (en)* | 2017-07-27 | 2019-01-31 | Citrix Systems, Inc. | Heuristics for selecting nearest zone based on ICA RTT and network latency
US20190362151A1 (en)* | 2016-09-14 | 2019-11-28 | Koninklijke KPN N.V. | Streaming virtual reality video
US20210262800A1 (en)* | 2020-02-21 | 2021-08-26 | Microsoft Technology Licensing, LLC | Systems and methods for deep learning-based pedestrian dead reckoning for exteroceptive sensor-enabled devices
US20210297715A1 (en)* | 2018-07-18 | 2021-09-23 | Pixellot Ltd. | System and method for content-layer based video compression
US20210360196A1 (en)* | 2020-05-12 | 2021-11-18 | True Meeting Inc. | Method and system for virtual 3D communications
US20220159261A1 (en)* | 2019-03-21 | 2022-05-19 | LG Electronics Inc. | Point cloud data transmission device, point cloud data transmission method, point cloud data reception device, and point cloud data reception method
US20220191431A1 (en)* | 2020-05-12 | 2022-06-16 | True Meeting Inc. | Generating an alpha channel
US20220277484A1 (en)* | 2021-02-26 | 2022-09-01 | Subversus Interactive LLC | Software engine enabling users to interact directly with a screen using a camera

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
WO2007040472A1 (en) | 2005-09-16 | 2007-04-12 | Stereographics Corporation | Stereoscopic format converter
US10734116B2 (en) | 2011-10-04 | 2020-08-04 | Quantant Technology, Inc. | Remote cloud based medical image sharing and rendering semi-automated or fully automated network and/or web-based, 3D and/or 4D imaging of anatomy for training, rehearsing and/or conducting medical procedures, using multiple standard X-ray and/or other imaging projections, without a need for special hardware and/or systems and/or pre-processing/analysis of a captured image data
WO2013055802A1 (en) | 2011-10-10 | 2013-04-18 | Genarts, Inc. | Network-based rendering and steering of visual effects
US9756375B2 (en) | 2015-01-22 | 2017-09-05 | Microsoft Technology Licensing, LLC | Predictive server-side rendering of scenes
US11843755B2 (en) | 2021-06-07 | 2023-12-12 | Zspace, Inc. | Cloud-based rendering of interactive augmented/virtual reality experiences


Also Published As

Publication number | Publication date
EP4102851A2 (en) | 2022-12-14
US20230328213A1 (en) | 2023-10-12
US20220394225A1 (en) | 2022-12-08
US20230319247A1 (en) | 2023-10-05
US20230336701A1 (en) | 2023-10-19
US12192434B2 (en) | 2025-01-07
US12256052B2 (en) | 2025-03-18
US20230308623A1 (en) | 2023-09-28
EP4102851A3 (en) | 2023-02-22
US12439013B2 (en) | 2025-10-07
US11843755B2 (en) | 2023-12-12
US12256053B2 (en) | 2025-03-18
US12256054B2 (en) | 2025-03-18

Similar Documents

Publication | Title
US12192434B2 (en) | Cloud-based rendering of interactive augmented/virtual reality experiences
US11868675B2 (en) | Methods and systems of automatic calibration for dynamic display configurations
US10019831B2 (en) | Integrating real world conditions into virtual imagery
US10701346B2 (en) | Replacing 2D images with 3D images
US10019849B2 (en) | Personal electronic device with a display system
US11024083B2 (en) | Server, user terminal device, and control method therefor
US9848184B2 (en) | Stereoscopic display system using light field type data
US10866820B2 (en) | Transitioning between 2D and stereoscopic 3D webpage presentation
US10613405B2 (en) | Pi-cell polarization switch for a three dimensional display system
JP2012085301A (en) | Three-dimensional video signal processing method and portable three-dimensional display device embodying the method
CN114080582A (en) | System and method for sparse distributed rendering
US10257500B2 (en) | Stereoscopic 3D webpage overlay
US10701347B2 (en) | Identifying replacement 3D images for 2D images via ranking criteria
US12099264B2 (en) | Pi cell drive waveform
CN113784105A (en) | Information processing method and system for immersive VR terminal
US20230362342A1 (en) | Data processing method, apparatus and electronic device

Legal Events

Date | Code | Title | Description
AS | Assignment

Owner name:ZSPACE, INC., CALIFORNIA

Free format text:ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:CHAMPION, CLIFFORD S.;HOSENPUD, JONATHAN J.;LU, BAIFANG;AND OTHERS;SIGNING DATES FROM 20210611 TO 20210628;REEL/FRAME:063657/0047

FEPP | Fee payment procedure

Free format text:ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY

FEPP | Fee payment procedure

Free format text:ENTITY STATUS SET TO SMALL (ORIGINAL EVENT CODE: SMAL); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY

STPP | Information on status: patent application and granting procedure in general

Free format text:DOCKETED NEW CASE - READY FOR EXAMINATION

STPP | Information on status: patent application and granting procedure in general

Free format text:NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS

STPP | Information on status: patent application and granting procedure in general

Free format text:AWAITING TC RESP., ISSUE FEE NOT PAID

STPP | Information on status: patent application and granting procedure in general

Free format text:PUBLICATIONS -- ISSUE FEE PAYMENT VERIFIED

STCF | Information on status: patent grant

Free format text:PATENTED CASE

AS | Assignment

Owner name:3I, L.P., NEW YORK

Free format text:SECURITY INTEREST;ASSIGNOR:ZSPACE, INC.;REEL/FRAME:070826/0887

Effective date:20250411

