US20230360315A1 - Systems and Methods for Teleconferencing Virtual Environments - Google Patents

Systems and Methods for Teleconferencing Virtual Environments

Info

Publication number
US20230360315A1
US20230360315A1 (application US18/245,136)
Authority
US
United States
Prior art keywords
client device
location
audio
virtual environment
client
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
US18/245,136
Inventor
Jonathan Leonard Morris
Maxwell Berkowitz
Ana Luiza de Araujo Lima Constantino
Evan Frohlich
Dacre Denny
Gregory Liburd
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nwr Corp
Original Assignee
Nwr Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nwr Corp
Priority to US18/245,136
Assigned to NWR Corporation. Assignment of assignors interest (see document for details). Assignors: CONSTANTINO, ANA LUIZA DE ARAUJO LIMA; FROHLICH, EVAN; LIBURD, Gregory; DENNY, Dacre; MORRIS, JONATHAN LEONARD; BERKOWITZ, MAXWELL JARED
Publication of US20230360315A1
Legal status: Pending

Abstract

In some aspects, the disclosure is directed to methods and systems for providing a three-dimensional virtual environment with teleconferencing audio and video feeds placed within the environment via three-dimensional virtual avatars, including indications of directional orientation or facing, and with mixing of spatial audio providing directionality and distance cues. By utilizing a three-dimensional environment for display of video streams, video streams corresponding to or displayed on avatars that are farther from the viewer appear smaller within the three-dimensional view, and thus can be easily downscaled or reduced in resolution or bit rate without adversely affecting the user experience.

Description

Claims (42)

What is claimed:
1. A method for spatially-aware virtual teleconferencing, comprising:
receiving, by a first computing device, one or more media streams generated by a corresponding one or more additional computing devices, and a location within a virtual environment associated with each of the one or more additional computing devices, the first computing device associated with a first location within the virtual environment;
adjusting, by the first computing device, audio characteristics of each of the one or more media streams according to a difference between the first location and the location within the virtual environment associated with the corresponding additional computing device; and
rendering, by the first computing device via one or more output devices, a viewport into the virtual environment from the first location, each of the one or more media streams at the location within the virtual environment associated with the corresponding additional computing device, and the adjusted audio of the one or more media streams.
2. The method of claim 1, wherein adjusting the audio characteristics of each of the one or more media streams further comprises determining a vector between the first location and the location within the virtual environment associated with the corresponding additional computing device, and applying stereo attenuation according to the determined vector.
3. The method of claim 2, wherein adjusting the audio characteristics of each of the one or more media streams further comprises determining that a vector between the first location and a second location associated with a first additional computing device passes through a virtual object, and responsive to the determination, increasing an amount of attenuation for the audio characteristics of the corresponding media stream.
4. The method of claim 1, wherein adjusting the audio characteristics of each of the one or more media streams further comprises determining a direction and distance between the first location and the location associated with the corresponding additional computing device, and applying spatial processing to the corresponding audio stream based on the determined direction and distance.
5. The method of claim 4, wherein applying spatial processing further comprises applying one or more of stereo attenuation, equalization, and reverb according to the determined direction and distance.
6. The method of claim 1, wherein receiving the one or more media streams further comprises:
receiving, by the first computing device from a server computing device, an identification of each of the one or more additional computing devices, and an aggregated video stream generated by the first computing device from media streams of each of the one or more additional computing devices; and
retrieving, by the first computing device directly from each of the one or more additional computing devices, audio of the corresponding media stream, responsive to receipt of the identification of the additional computing device from the server computing device.
7. The method of claim 6, wherein the aggregated video stream comprises a series of tapestry images of frames from video streams of the one or more additional computing devices, with each frame at a resolution corresponding to the difference between the first location and the location within the virtual environment associated with the corresponding additional computing device.
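Claims 1-5 describe client-side spatial mixing: derive a vector from the listener's virtual location to each speaker's location, attenuate by distance and direction, and attenuate further when the vector passes through a virtual object. A minimal sketch of such a per-stream mixer, assuming a 2-D environment; the inverse-distance gain, the 0.5 occlusion factor, and the sine panning law are illustrative choices, not details taken from the claims:

```python
import math

def stereo_gains(listener_pos, listener_facing, source_pos,
                 ref_dist=1.0, occluded=False):
    """Compute (left, right) gains for one remote participant's audio.

    Gain falls off with distance (inverse law, clamped at ref_dist),
    is halved when a virtual object occludes the listener-source vector
    (cf. claim 3), and is panned by the source's bearing relative to the
    listener's facing direction.
    """
    dx = source_pos[0] - listener_pos[0]
    dy = source_pos[1] - listener_pos[1]
    dist = math.hypot(dx, dy)
    gain = ref_dist / max(dist, ref_dist)   # distance attenuation
    if occluded:
        gain *= 0.5                         # extra attenuation behind objects
    bearing = math.atan2(dy, dx) - listener_facing
    pan = math.sin(bearing)                 # -1 = hard left, +1 = hard right
    return gain * (1.0 - pan) / 2.0, gain * (1.0 + pan) / 2.0
```

The spatial processing of claims 4-5 (equalization, reverb) would slot in after the gain computation, keyed on the same direction and distance.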
8. A method for server-side dynamic video aggregation for virtual teleconferencing, comprising:
receiving, by a server device, a media stream from each of a plurality of client devices, each client device associated with a location within a virtual environment;
for each client device of the plurality of client devices:
for each other client device of the plurality of client devices:
calculating a distance between a location of the client device within the virtual environment and a location of the other client device within the virtual environment,
assigning a resolution to the media stream of the other client device corresponding to the calculated distance, and
adding a video frame of the media stream of the other client device to a tapestry image at the assigned resolution; and
transmitting the tapestry image to the client device, receipt of the tapestry image causing the client device to extract each video frame of the media stream of the other client devices and render the video frame at a location corresponding to the location of the other client device within the virtual environment.
9. The method of claim 8, wherein adding the frame of the media stream of the other client device to the tapestry image further comprises encoding metadata of the frame in the tapestry image.
10. The method of claim 9, wherein encoding metadata of the frame in the tapestry image further comprises adding pixels encoding geometry of the frame to a predetermined region of the tapestry image.
11. The method of claim 8, further comprising, for each client device of the plurality of client devices, transmitting, to the client device, audio of the media streams from each other client device and an identification of the location within the virtual environment corresponding to each other client device.
12. The method of claim 11, wherein receipt of the audio of the media streams from each other client device and the identification of the location within the virtual environment corresponding to each other client device causes each client device to render audio of the media streams with stereo attenuation based on a distance between the location associated with each corresponding other client device and a location associated with the client device.
13. The method of claim 8, further comprising, for each client device of the plurality of client devices, directing the client device to retrieve audio of the media streams of each other client device directly from each other client device.
14. The method of claim 13, wherein receipt of the audio of the media streams from each other client device and the identification of the location within the virtual environment corresponding to each other client device causes each client device to render audio of the media streams with stereo attenuation based on a distance between the location associated with each corresponding other client device and a location associated with the client device.
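Claims 8-14 describe the server-side aggregation: for each viewer, every other participant's frame is assigned a resolution corresponding to virtual distance and composited into a single "tapestry" image. A sketch of the per-viewer layout step; the tier thresholds, square frame sizes, and single-row packing are illustrative assumptions, not details from the claims:

```python
import math

def build_tapestry_layout(positions,
                          tiers=((5.0, 256), (15.0, 128), (float("inf"), 64))):
    """For each viewer, assign every other client's frame a resolution tier
    by distance in the virtual environment and pack the frames
    left-to-right into one tapestry row.

    positions maps client id -> (x, y) in the virtual environment.
    tiers is a sequence of (max_distance, frame_size) pairs.
    """
    layouts = {}
    for viewer, vpos in positions.items():
        x_offset = 0
        entries = []
        for other, opos in positions.items():
            if other == viewer:
                continue
            dist = math.hypot(opos[0] - vpos[0], opos[1] - vpos[1])
            size = next(s for limit, s in tiers if dist <= limit)
            entries.append({"client": other, "x": x_offset,
                            "size": size, "distance": dist})
            x_offset += size   # next frame starts where this one ends
        layouts[viewer] = entries
    return layouts
```

A real server would then scale each client's current video frame to its assigned size and blit it at the computed offset before encoding and transmitting the tapestry.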
15. A method for server-side dynamic video aggregation for virtual teleconferencing, comprising:
receiving, by a client device from a server device, a tapestry image comprising a video frame from each of one or more additional client devices with a resolution corresponding to a distance between a location associated with the client device within a virtual environment and a location associated with the additional client device;
loading, by the client device, the tapestry image into a graphics buffer; and
iteratively for each of the video frames in the tapestry image:
identifying the location associated with the corresponding additional client device within the virtual environment, and
rendering, from the graphics buffer, a portion of the tapestry image comprising the video frame at the identified location within the virtual environment.
16. The method of claim 15, wherein the tapestry image comprises one or more sets of pixels encoding a geometry of the corresponding video frame from each of the one or more additional client devices.
17. The method of claim 16, further comprising, for each of the video frames in the tapestry image, decoding the geometry of the video frame from the corresponding set of pixels; and wherein rendering the portion of the tapestry image comprising the video frame at the identified location within the virtual environment comprises rendering the tapestry image with boundaries according to the decoded geometry.
18. The method of claim 16, further comprising, for each of the video frames in the tapestry image:
receiving an identifier of the corresponding additional client device; and
determining a location of the set of pixels encoding the geometry of the video frame based on the identifier of the corresponding additional client device.
19. The method of claim 15, further comprising:
receiving, by the client device from each of the one or more additional client devices, an audio stream;
adjusting an audio characteristic of each of the received audio streams based on the location associated with the corresponding additional client device within the virtual environment and the location associated with the client device; and
outputting, by the client device, the adjusted audio streams.
20. The method of claim 19, wherein adjusting the audio characteristic of each of the received audio streams further comprises determining a direction and distance between the location associated with the client device and the location associated with the corresponding additional client device, and applying spatial processing to the corresponding audio stream based on the determined direction and distance.
21. The method of claim 20, wherein applying spatial processing further comprises applying one or more of stereo attenuation, equalization, and reverb according to the determined direction and distance.
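Claims 15-18 describe the client-side counterpart: frame geometry is decoded from a predetermined pixel region of the tapestry, and each frame is cropped out for rendering at its avatar's location. A sketch of the decode step, assuming (purely for illustration) that row 0 holds four grayscale values (x, y, w, h) per frame, starting at column 4*i for frame i; the actual encoding and region are not specified here:

```python
def extract_frames(tapestry, frame_count):
    """Crop each participant's video frame out of a tapestry image.

    tapestry is a row-major 2-D list of pixel values. The first row is
    treated as the predetermined metadata region: four values per frame
    giving that frame's (x, y, width, height) within the tapestry.
    """
    frames = []
    for i in range(frame_count):
        x, y, w, h = tapestry[0][4 * i : 4 * i + 4]
        # Crop rows y..y+h, columns x..x+w for this frame.
        frames.append([row[x : x + w] for row in tapestry[y : y + h]])
    return frames
```

In a GPU renderer the "crop" would instead be a texture-coordinate window into the tapestry loaded in the graphics buffer, so no pixels are copied.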
22. A method for spatially-aware virtual teleconferencing, comprising:
receiving, by a first computing device, one or more media streams generated by a corresponding one or more additional computing devices, and a location within a virtual environment associated with each of the one or more additional computing devices, the first computing device associated with a first location within the virtual environment;
adjusting, by the first computing device, audio characteristics of each of the one or more media streams according to a difference between the first location and the location within the virtual environment associated with the corresponding additional computing device; and
rendering, by the first computing device via one or more output devices, a viewport into the virtual environment from the first location, each of the one or more media streams at the location within the virtual environment associated with the corresponding additional computing device, and the adjusted audio of the one or more media streams.
23. The method of claim 22, wherein adjusting the audio characteristics of each of the one or more media streams further comprises determining a vector between the first location and the location within the virtual environment associated with the corresponding additional computing device, and applying stereo attenuation according to the determined vector.
24. The method of claims 22 or 23, wherein adjusting the audio characteristics of each of the one or more media streams further comprises determining that a vector between the first location and a second location associated with a first additional computing device passes through a virtual object, and responsive to the determination, increasing an amount of attenuation for the audio characteristics of the corresponding media stream.
25. The method of any of claims 22-24, wherein adjusting the audio characteristics of each of the one or more media streams further comprises determining a direction and distance between the first location and the location associated with the corresponding additional computing device, and applying spatial processing to the corresponding audio stream based on the determined direction and distance.
26. The method of claim 25, wherein applying spatial processing further comprises applying one or more of stereo attenuation, equalization, and reverb according to the determined direction and distance.
27. The method of any of claims 22-26, wherein receiving the one or more media streams further comprises:
receiving, by the first computing device from a server computing device, an identification of each of the one or more additional computing devices, and an aggregated video stream generated by the first computing device from media streams of each of the one or more additional computing devices; and
retrieving, by the first computing device directly from each of the one or more additional computing devices, audio of the corresponding media stream, responsive to receipt of the identification of the additional computing device from the server computing device.
28. The method of claim 27, wherein the aggregated video stream comprises a series of tapestry images of frames from video streams of the one or more additional computing devices, with each frame at a resolution corresponding to the difference between the first location and the location within the virtual environment associated with the corresponding additional computing device.
29. A method for server-side dynamic video aggregation for virtual teleconferencing, comprising:
receiving, by a server device, a media stream from each of a plurality of client devices, each client device associated with a location within a virtual environment;
for each client device of the plurality of client devices:
for each other client device of the plurality of client devices:
calculating a distance between a location of the client device within the virtual environment and a location of the other client device within the virtual environment,
assigning a resolution to the media stream of the other client device corresponding to the calculated distance, and
adding a video frame of the media stream of the other client device to a tapestry image at the assigned resolution; and
transmitting the tapestry image to the client device, receipt of the tapestry image causing the client device to extract each video frame of the media stream of the other client devices and render the video frame at a location corresponding to the location of the other client device within the virtual environment.
30. The method of claim 29, wherein adding the frame of the media stream of the other client device to the tapestry image further comprises encoding metadata of the frame in the tapestry image.
31. The method of claim 30, wherein encoding metadata of the frame in the tapestry image further comprises adding pixels encoding geometry of the frame to a predetermined region of the tapestry image.
32. The method of any of claims 29-31, further comprising, for each client device of the plurality of client devices, transmitting, to the client device, audio of the media streams from each other client device and an identification of the location within the virtual environment corresponding to each other client device.
33. The method of claim 32, wherein receipt of the audio of the media streams from each other client device and the identification of the location within the virtual environment corresponding to each other client device causes each client device to render audio of the media streams with stereo attenuation based on a distance between the location associated with each corresponding other client device and a location associated with the client device.
34. The method of any of claims 29-33, further comprising, for each client device of the plurality of client devices, directing the client device to retrieve audio of the media streams of each other client device directly from each other client device.
35. The method of claim 34, wherein receipt of the audio of the media streams from each other client device and the identification of the location within the virtual environment corresponding to each other client device causes each client device to render audio of the media streams with stereo attenuation based on a distance between the location associated with each corresponding other client device and a location associated with the client device.
36. A method for server-side dynamic video aggregation for virtual teleconferencing, comprising:
receiving, by a client device from a server device, a tapestry image comprising a video frame from each of one or more additional client devices with a resolution corresponding to a distance between a location associated with the client device within a virtual environment and a location associated with the additional client device;
loading, by the client device, the tapestry image into a graphics buffer; and
iteratively for each of the video frames in the tapestry image:
identifying the location associated with the corresponding additional client device within the virtual environment, and
rendering, from the graphics buffer, a portion of the tapestry image comprising the video frame at the identified location within the virtual environment.
37. The method of claim 36, wherein the tapestry image comprises one or more sets of pixels encoding a geometry of the corresponding video frame from each of the one or more additional client devices.
38. The method of claim 37, further comprising, for each of the video frames in the tapestry image, decoding the geometry of the video frame from the corresponding set of pixels; and wherein rendering the portion of the tapestry image comprising the video frame at the identified location within the virtual environment comprises rendering the tapestry image with boundaries according to the decoded geometry.
39. The method of claims 37 or 38, further comprising, for each of the video frames in the tapestry image:
receiving an identifier of the corresponding additional client device; and
determining a location of the set of pixels encoding the geometry of the video frame based on the identifier of the corresponding additional client device.
40. The method of any of claims 36-39, further comprising:
receiving, by the client device from each of the one or more additional client devices, an audio stream;
adjusting an audio characteristic of each of the received audio streams based on the location associated with the corresponding additional client device within the virtual environment and the location associated with the client device; and
outputting, by the client device, the adjusted audio streams.
41. The method of claim 40, wherein adjusting the audio characteristic of each of the received audio streams further comprises determining a direction and distance between the location associated with the client device and the location associated with the corresponding additional client device, and applying spatial processing to the corresponding audio stream based on the determined direction and distance.
42. The method of claim 41, wherein applying spatial processing further comprises applying one or more of stereo attenuation, equalization, and reverb according to the determined direction and distance.
US18/245,136 | 2020-09-14 | 2021-09-14 | Systems and Methods for Teleconferencing Virtual Environments | Pending | US20230360315A1 (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
US18/245,136 (US20230360315A1) | 2020-09-14 | 2021-09-14 | Systems and Methods for Teleconferencing Virtual Environments

Applications Claiming Priority (3)

Application Number | Priority Date | Filing Date | Title
US202063078201P | 2020-09-14 | 2020-09-14
PCT/US2021/050333 (WO2022056492A2) | 2020-09-14 | 2021-09-14 | Systems and methods for teleconferencing virtual environments
US18/245,136 (US20230360315A1) | 2020-09-14 | 2021-09-14 | Systems and Methods for Teleconferencing Virtual Environments

Publications (1)

Publication Number | Publication Date
US20230360315A1 (en) | 2023-11-09

Family

ID=78080566

Family Applications (2)

Application Number | Status/Publication | Priority Date | Filing Date | Title
US18/245,136 | US20230360315A1 (en), Pending | 2020-09-14 | 2021-09-14 | Systems and Methods for Teleconferencing Virtual Environments
US17/475,260 | US11522925B2 (en), Active | 2020-09-14 | 2021-09-14 | Systems and methods for teleconferencing virtual environments

Family Applications After (1)

Application Number | Status/Publication | Priority Date | Filing Date | Title
US17/475,260 | US11522925B2 (en), Active | 2020-09-14 | 2021-09-14 | Systems and methods for teleconferencing virtual environments

Country Status (2)

Country | Link
US (2) | US20230360315A1 (en)
WO (1) | WO2022056492A2 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
US20240121280A1 (en)* | 2022-10-07 | 2024-04-11 | Microsoft Technology Licensing, Llc | Simulated choral audio chatter

Families Citing this family (47)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
EP4218010A4 (en)* | 2020-09-22 | 2024-10-30 | Qsc, Llc | Transparent data encryption
US11457178B2 (en) | 2020-10-20 | 2022-09-27 | Katmai Tech Inc. | Three-dimensional modeling inside a virtual video conferencing environment with a navigable avatar, and applications thereof
US11651749B2 (en)* | 2020-11-02 | 2023-05-16 | Panduit Corp. | Display layout optimization of multiple media streams
US11621979B1 (en)* | 2020-12-31 | 2023-04-04 | Benjamin Slotznick | Method and apparatus for repositioning meeting participants within a virtual space view in an online meeting user interface based on gestures made by the meeting participants
US11647123B2 (en)* | 2021-01-15 | 2023-05-09 | Mycelium, Inc. | Virtual conferencing system with layered conversations
KR20250103813A | 2021-02-08 | 2025-07-07 | Sightful Computers Ltd | Extended reality for productivity
EP4288950A4 (en) | 2021-02-08 | 2024-12-25 | Sightful Computers Ltd | User interactions in extended reality
JP7713189B2 | 2021-02-08 | 2025-07-25 | Sightful Computers Ltd | Content Sharing in Extended Reality
GB2606131A (en)* | 2021-03-12 | 2022-11-02 | Palringo Ltd | Communication platform
US11792031B2 (en)* | 2021-03-31 | 2023-10-17 | Snap Inc. | Mixing participant audio from multiple rooms within a virtual conferencing system
CN115309482A (en)* | 2021-04-20 | 2022-11-08 | Ford Global Technologies | A vehicle interaction system and corresponding vehicle and method
US11706264B2 (en)* | 2021-07-26 | 2023-07-18 | Cisco Technology, Inc. | Virtual position based management of collaboration sessions
WO2023009580A2 (en) | 2021-07-28 | 2023-02-02 | Multinarity Ltd | Using an extended reality appliance for productivity
EP4156692A1 (en)* | 2021-09-22 | 2023-03-29 | Koninklijke Philips N.V. | Presentation of multi-view video data
US12368946B2 (en) | 2021-09-24 | 2025-07-22 | Apple Inc. | Wide angle video conference
US20240096033A1 (en)* | 2021-10-11 | 2024-03-21 | Meta Platforms Technologies, Llc | Technology for creating, replicating and/or controlling avatars in extended reality
US12184708B2 (en) | 2021-10-31 | 2024-12-31 | Zoom Video Communications, Inc. | Extraction of user representation from video stream to a virtual environment
US11733826B2 (en) | 2021-10-31 | 2023-08-22 | Zoom Video Communications, Inc. | Virtual environment interactivity for video communications participants
US12114099B2 (en)* | 2021-10-31 | 2024-10-08 | Zoom Video Communications, Inc. | Dynamic camera views in a virtual environment
US11948263B1 (en) | 2023-03-14 | 2024-04-02 | Sightful Computers Ltd | Recording the complete physical and extended reality environments of a user
US12175614B2 (en) | 2022-01-25 | 2024-12-24 | Sightful Computers Ltd | Recording the complete physical and extended reality environments of a user
US12380238B2 (en)* | 2022-01-25 | 2025-08-05 | Sightful Computers Ltd | Dual mode presentation of user interface elements
US11949564B2 (en)* | 2022-03-04 | 2024-04-02 | Lan Party Technologies, Inc. | Virtual gaming environment
US12192257B2 (en) | 2022-05-25 | 2025-01-07 | Microsoft Technology Licensing, Llc | 2D and 3D transitions for renderings of users participating in communication sessions
US12374054B2 (en) | 2022-05-27 | 2025-07-29 | Microsoft Technology Licensing, Llc | Automation of audio and viewing perspectives for bringing focus to relevant activity of a communication session
US20230388355A1 (en)* | 2022-05-27 | 2023-11-30 | Microsoft Technology Licensing, Llc | Automation of visual indicators for distinguishing active speakers of users displayed as three-dimensional representations
US20240007593A1 (en)* | 2022-06-30 | 2024-01-04 | Katmai Tech Inc. | Session transfer in a virtual videoconferencing environment
US20240031531A1 (en)* | 2022-07-20 | 2024-01-25 | Katmai Tech Inc. | Two-dimensional view of a presentation in a three-dimensional videoconferencing environment
US12022235B2 (en) | 2022-07-20 | 2024-06-25 | Katmai Tech Inc. | Using zones in a three-dimensional virtual environment for limiting audio and video
US11928774B2 (en) | 2022-07-20 | 2024-03-12 | Katmai Tech Inc. | Multi-screen presentation in a virtual videoconferencing environment
US11700354B1 (en)* | 2022-07-21 | 2023-07-11 | Katmai Tech Inc. | Resituating avatars in a virtual environment
US11741664B1 (en) | 2022-07-21 | 2023-08-29 | Katmai Tech Inc. | Resituating virtual cameras and avatars in a virtual environment
US11956571B2 (en)* | 2022-07-28 | 2024-04-09 | Katmai Tech Inc. | Scene freezing and unfreezing
US20240037837A1 (en)* | 2022-07-28 | 2024-02-01 | Katmai Tech Inc. | Automatic graphics quality downgrading in a three-dimensional virtual environment
US12368821B2 (en)* | 2022-07-28 | 2025-07-22 | Katmai Tech Inc. | Optimizing physics for static objects in a three-dimensional virtual environment
US11741674B1 (en)* | 2022-09-13 | 2023-08-29 | Katmai Tech Inc. | Navigating a virtual camera to a video avatar in a three-dimensional virtual environment, and applications thereof
US11776227B1 (en) | 2022-09-13 | 2023-10-03 | Katmai Tech Inc. | Avatar background alteration
JP7643424B2 (en)* | 2022-09-21 | 2025-03-11 | Toyota Motor Corporation | Method, program, and terminal device
US20240104819A1 (en)* | 2022-09-23 | 2024-03-28 | Apple Inc. | Representations of participants in real-time communication sessions
EP4595015A1 | 2022-09-30 | 2025-08-06 | Sightful Computers Ltd | Adaptive extended reality content presentation in multiple physical environments
US20240143779A1 (en)* | 2022-10-27 | 2024-05-02 | Vmware, Inc. | Secure peer-to-peer file distribution in an enterprise environment
US20240177435A1 (en)* | 2022-11-30 | 2024-05-30 | Beijing Zitiao Network Technology Co., Ltd. | Virtual interaction methods, devices, and storage media
US20240196050A1 (en)* | 2022-12-07 | 2024-06-13 | Vimeo.Com, Inc. | System, Method and Computer Program For Delivering Video Reactions to a Livestream Display Interface
US12400391B2 (en)* | 2022-12-29 | 2025-08-26 | Microsoft Technology Licensing, Llc | Promotion of meeting engagement by transitioning viewing perspectives to a temporary viewing perspective showing group activity
US20240221273A1 (en)* | 2022-12-29 | 2024-07-04 | Apple Inc. | Presenting animated spatial effects in computer-generated environments
US12149570B2 (en)* | 2022-12-30 | 2024-11-19 | Microsoft Technology Licensing, Llc | Access control of audio and video streams and control of representations for communication sessions
US12217365B1 (en) | 2023-07-31 | 2025-02-04 | Katmai Tech Inc. | Multiplexing video streams in an aggregate stream for a three-dimensional virtual environment

Family Cites Families (12)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
US8223186B2 (en)* | 2006-05-31 | 2012-07-17 | Hewlett-Packard Development Company, L.P. | User interface for a video teleconference
US8773494B2 (en)* | 2006-08-29 | 2014-07-08 | Microsoft Corporation | Techniques for managing visual compositions for a multimedia conference call
CN101690150A | 2007-04-14 | 2010-03-31 | 缪斯科姆有限公司 | Virtual reality-based teleconferencing
US9853922B2 (en)* | 2012-02-24 | 2017-12-26 | Sococo, Inc. | Virtual area communications
US20110271192A1 (en)* | 2010-04-30 | 2011-11-03 | American Teleconferencing Services Ltd. | Managing conference sessions via a conference user interface
US9743044B2 (en)* | 2013-08-29 | 2017-08-22 | Isee Vc Pty Ltd | Quality controller for video image
US10511807B2 (en)* | 2015-12-11 | 2019-12-17 | Sony Corporation | Information processing apparatus, information processing method, and program
US10074012B2 (en)* | 2016-06-17 | 2018-09-11 | Dolby Laboratories Licensing Corporation | Sound and video object tracking
CN110999281B | 2017-06-09 | 2021-11-26 | PCMS Holdings | Method and device for allowing exploration in virtual landscape
EP3702008A1 (en) | 2019-02-27 | 2020-09-02 | Nokia Technologies Oy | Displaying a viewport of a virtual space
US11228622B2 (en)* | 2019-04-08 | 2022-01-18 | Imeve, Inc. | Multiuser asymmetric immersive teleconferencing
US11082467B1 (en)* | 2020-09-03 | 2021-08-03 | Facebook, Inc. | Live group video streaming


Also Published As

Publication number | Publication date
WO2022056492A3 (en) | 2022-04-21
US11522925B2 (en) | 2022-12-06
WO2022056492A2 (en) | 2022-03-17
US20220086203A1 (en) | 2022-03-17

Similar Documents

Publication | Title
US11522925B2 (en) | Systems and methods for teleconferencing virtual environments
US11563779B2 (en) | Multiuser asymmetric immersive teleconferencing
US12107907B2 (en) | System and method enabling interactions in virtual environments with virtual presence
US20250124637A1 (en) | Integrated input/output (I/O) for a three-dimensional (3D) environment
Apostolopoulos et al. | The road to immersive communication
US8743173B2 (en) | Video phone system
EP3962076B1 (en) | System and method for virtually broadcasting from within a virtual environment
US12273402B2 (en) | Ad hoc virtual communication between approaching user graphical representations
US12114099B2 (en) | Dynamic camera views in a virtual environment
JP7508586B2 (en) | Multi-grouping method, apparatus, and computer program for immersive teleconferencing and telepresence
WO2021257868A1 (en) | Video chat with spatial interaction and eye contact recognition
US20240022688A1 (en) | Multiuser teleconferencing with spotlight feature
US11985181B2 (en) | Orchestrating a multidevice video session
CN118696535A | A movable virtual camera for improved meeting views in 3D virtual environments
Nassani et al. | Implementation of Attention-Based Spatial Audio for 360° Environments
US20250298465A1 (en) | Bilateral exchange of user attention stream for foveated 3D communication streaming
CN118101976A | System and method for implementing live broadcast session in virtual environment
WO2023243059A1 (en) | Information presentation device, information presentation method, and information presentation program
WO2025029871A1 (en) | Multiplexing video streams in an aggregate stream for a three-dimensional virtual environment
JP2023177285A (en) | System and method for controlling user interactions in virtual meeting to enable selective pausing
Cheok | Interactive Theater Experience with 3D Live Captured Actors and Spatial Sound

Legal Events

Date | Code | Title | Description
STPP | Information on status: patent application and granting procedure in general

Free format text: APPLICATION UNDERGOING PREEXAM PROCESSING

AS | Assignment

Owner name: NWR CORPORATION, NEW YORK

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST; ASSIGNORS: MORRIS, JONATHAN LEONARD; BERKOWITZ, MAXWELL JARED; CONSTANTINO, ANA LUIZA DE ARAUJO LIMA; AND OTHERS; SIGNING DATES FROM 20230317 TO 20230418; REEL/FRAME: 063363/0838

STPP | Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

