WO2022150125A1 - Embedding digital content in a virtual space - Google Patents

Embedding digital content in a virtual space

Info

Publication number
WO2022150125A1
WO2022150125A1 (PCT/US2021/061685)
Authority
WO
WIPO (PCT)
Prior art keywords
digital content
behavior
virtual space
user
candidate
Prior art date
Legal status
Ceased
Application number
PCT/US2021/061685
Other languages
French (fr)
Inventor
Tao Xu
Jie Tang
Yaobo LIANG
Yanan WEI
Xiao Zhang
Zheng Zhang
Current Assignee
Microsoft Technology Licensing LLC
Original Assignee
Microsoft Technology Licensing LLC
Application filed by Microsoft Technology Licensing LLC
Publication of WO2022150125A1

Abstract

The present disclosure proposes a method for embedding digital content in a virtual space. A behavior of a user in the virtual space may be detected. Behavior information associated with the behavior may be obtained. Digital content may be determined based at least on the behavior information. The digital content may be added in the virtual space. The present disclosure also proposes a virtual space application device. The virtual space application device may comprise: a presenting unit, for presenting a virtual space; a behavior identifying unit, for detecting a behavior of a user in the virtual space, and obtaining behavior information associated with the behavior; and a processing unit, for determining digital content based at least on the behavior information, and adding the digital content in the virtual space.

Description

EMBEDDING DIGITAL CONTENT IN A VIRTUAL SPACE
BACKGROUND
[0001] Virtual Reality (VR) technology integrates various technologies such as computer graphics technology, human-computer interaction technology, sensor technology, etc., which may use a computer to generate a three-dimensional virtual space that can provide multi-sensory experiences related to vision, hearing, touch, etc. People may obtain such experiences through specific devices such as VR devices, as if they had entered the virtual space. The VR technology may be widely used in fields such as entertainment, education, military, and medicine. Taking the application of the VR technology in the entertainment field as an example, a virtual space corresponding to, e.g., a concert scene may be reconstructed through the VR technology, so that people may obtain a performance experience as if they were on the scene with VR devices, without having to go to the performance venue.
SUMMARY
[0002] This Summary is provided to introduce a selection of concepts that are further described below in the Detailed Description. It is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
[0003] Embodiments of the present disclosure propose a method for embedding digital content in a virtual space. A behavior of a user in the virtual space may be detected. Behavior information associated with the behavior may be obtained. Digital content may be determined based at least on the behavior information. The digital content may be added in the virtual space. The embodiments of the present disclosure also propose a virtual space application device. The virtual space application device may comprise: a presenting unit, for presenting a virtual space; a behavior identifying unit, for detecting a behavior of a user in virtual space, and obtaining behavior information associated with the behavior; and a processing unit, for determining digital content based at least on the behavior information, and adding the digital content in the virtual space.
[0004] It should be noted that the above one or more aspects comprise the features hereinafter fully described and particularly pointed out in the claims. The following description and the drawings set forth in detail certain illustrative features of the one or more aspects. These features are only indicative of the various ways in which the principles of various aspects may be employed, and this disclosure is intended to include all such aspects and their equivalents.
BRIEF DESCRIPTION OF THE DRAWINGS
[0005] The disclosed aspects will hereinafter be described in conjunction with the appended drawings that are provided to illustrate and not to limit the disclosed aspects.
[0006] FIG. 1 illustrates an exemplary virtual space service network architecture according to an embodiment of the present disclosure.
[0007] FIG. 2 illustrates an exemplary process for embedding digital content in a virtual space according to an embodiment of the present disclosure.
[0008] FIG. 3 illustrates an exemplary scoring model according to an embodiment of the present disclosure.
[0009] FIG. 4 illustrates an exemplary process for training a scoring model according to an embodiment of the present disclosure.
[0010] FIG. 5 illustrates an exemplary terminal device according to an embodiment of the present disclosure.
[0011] FIG. 6 is a flowchart of an exemplary method for embedding digital content in a virtual space according to an embodiment of the present disclosure.
[0012] FIG. 7 illustrates an exemplary virtual space application device according to an embodiment of the present disclosure.
[0013] FIG. 8 illustrates an exemplary apparatus for embedding digital content in a virtual space according to an embodiment of the present disclosure.
DETAILED DESCRIPTION
[0014] The present disclosure will now be discussed with reference to several exemplary implementations. It should be understood that these implementations are discussed only for enabling those skilled in the art to better understand and thus implement the embodiments of the present disclosure, rather than suggesting any limitations on the scope of the present disclosure.
[0015] It is desirable to embed specific digital content in a virtual space so that a user entering the virtual space can watch the digital content and even interact with it. Currently, research and development have involved embedding digital content in videos. For example, a region suitable for adding digital content may be detected in a video or created in a video, appropriate digital content may be determined, and the determined digital content may be added in the detected or created region in the video. This process requires a large amount of computing resources and bandwidth resources.
[0016] Embodiments of the present disclosure propose a method for embedding digital content in a virtual space. Herein, a virtual space may refer to a three-dimensional space generated by a computer that is different from a real environment where a user is located, and it may also be referred to as a virtual environment, a virtual scene, a virtual world, etc. For example, the VR technology may be adopted to construct a virtual space. Herein, digital content may broadly refer to content intended to be presented with a virtual space for various purposes, e.g., commercial information, public welfare announcements, etc. Digital content may have flexible and rich forms, which may include, e.g., pictures, videos, animations, etc., and may be two-dimensional or three-dimensional. A user may interact with digital content in many forms. For example, an instance of two-dimensional digital content may be a travel video, and a user may watch the travel video in a virtual space. For example, an instance of three-dimensional digital content may be an exhibition hall, and a user may enter the exhibition hall in a virtual space and browse and visit in the exhibition hall. For example, an instance of three-dimensional digital content may be a car, and a user may touch the car in a virtual space or even conduct a test drive. Digital content embedding may be implemented through determining digital content based at least on a detected behavior of a user in a virtual space and adding the determined digital content in the virtual space. The digital content embedding method according to the embodiments of the present disclosure may be performed at a terminal device. Herein, a terminal device may refer to a device that is local to a user and can provide the user with a virtual space experience, which may also be referred to as a virtual space application device. The terminal device according to the embodiments of the present disclosure may include multiple units, which may be co-located, such as in a single machine, or one or more of the units may be separated from the others in location.
[0017] In an aspect, the embodiments of the present disclosure propose that digital content to be added in a virtual space may be determined based at least on a behavior of a user in the virtual space. The behavior of the user in the virtual space may include, e.g., a behavior relevant to existing digital content performed by the user in the virtual space, e.g., watching, walking toward, or touching the digital content. Such behavior may indicate, e.g., whether the user is currently interested in the existing digital content. The behavior of the user in the virtual space may also include, e.g., a behavior relevant to an environment object performed by the user in the virtual space, e.g., walking toward a stage, waving to a singer, or leaving a seat in a virtual space about a live concert. Herein, an environment object may refer to an original or inherent object in a virtual space, e.g., a stage, a singer, an auditorium, a seat, etc. in a virtual space about a live concert. Such behavior may indicate current characteristics of a user, e.g., personality, hobbies, habits, etc. Considering the behavior of the user in the virtual space when determining the digital content to be added may facilitate determining digital content that the user is currently interested in, so that the quality of the added digital content may be improved.
[0018] In another aspect, the embodiments of the present disclosure propose that a behavior of a user in a virtual space may be detected through a behavior identifying unit, and the behavior identifying unit may be integrated into a terminal device together with other units. In this way, the user may switch from a real environment to the virtual space with only a single device and interact with the virtual space, which facilitates obtaining an immersive experience of the virtual space.
[0019] In another aspect, the embodiments of the present disclosure propose that when determining digital content, each candidate digital content in a set of candidate digital content may be scored based on multiple indexes, and the candidate digital content with a highest score may be selected. These indexes may include, e.g., a predicted type and/or duration of the user's behavior of interacting with the candidate digital content. The type of behavior may include, e.g., watching, walking, touching, etc. Values of these indexes may be predicted for each candidate digital content through a deep learning model according to the embodiments of the present disclosure, and the candidate digital content may be scored based on the predicted values. In this way, digital content that a user is interested in may be selected more accurately and comprehensively. In addition, according to actual application requirements, different weights may be set for each index. For example, a larger weight may be set for an index of greater concern, so that digital content that performs better on that index may be selected.
[0020] In another aspect, the embodiments of the present disclosure propose that a deep learning model for scoring candidate digital content is obtained through training an initial deep learning model at a set of terminal devices of a set of users. Herein, a deep learning model for scoring candidate digital content may be referred to as a scoring model. For example, an initial scoring model may be distributed to a set of terminal devices of a set of users. Each user may enter a virtual space embedded with a set of digital content using his or her terminal device. The user's behaviors for each digital content may be used to train the corresponding initial scoring model. The scoring models trained at the terminal devices may be aggregated into an updated scoring model. The updated scoring model may be deployed for scoring candidate digital content. In this way, user behaviors are only used locally at the user's device and are not uploaded to a server, thus leakage of user privacy may be avoided. In addition, the updated scoring model obtained through aggregating the trained models of various users does not carry information that is strongly relevant to a specific user, thus user privacy may be further protected.
[0021] In another aspect, the embodiments of the present disclosure propose that a set of candidate digital content from which digital content is to be selected may be previously received from a server and stored in a terminal device. For example, a predetermined number of digital content may be preliminarily selected from a large amount of digital content at the server as a set of candidate digital content. The terminal device may previously receive the set of candidate digital content from the server, e.g., at regular intervals, before starting to present the virtual space, or when a user logs in. In addition, the embodiments of the present disclosure also propose that a location in a virtual space suitable for adding digital content may be previously defined. For example, a set of locations suitable for adding digital content may be previously detected and/or created. Reconstruction of a virtual space requires collecting on-site data in real time and re-rendering the virtual space based on the collected data. Taking a virtual space about a live concert as an example, on-site data may include, e.g., lighting effects of the venue, the voice/posture/actions of a singer, the location/behavior of a user, etc. This process occupies a large amount of computing resources and/or bandwidth resources. The embodiments of the present disclosure propose that previously receiving candidate digital content and/or previously defining a location where digital content is to be added may avoid excessive occupation of bandwidth and/or computing resources when presenting the virtual space in real time, which allows digital content to be added in a near real-time manner based on a behavior of a user in the virtual space. This is especially beneficial in the case of a virtual space based on a live video.
[0022] It should be appreciated that although the foregoing discussion and the following discussion may involve examples of embedding digital content in a virtual space constructed through adopting the VR technology, the embodiments of the present disclosure are not limited to this, but digital content may be embedded in a virtual space constructed through adopting other technologies such as augmented reality technology, mixed reality technology, etc., in a similar manner.
[0023] FIG. 1 illustrates an exemplary virtual space service network architecture 100 according to an embodiment of the present disclosure. The architecture 100 may present a user 122 with a virtual space embedded with digital content.
[0024] The architecture 100 may include various network entities that may be interconnected directly or through a network, e.g., a two-dimensional content source 102, a server 104, a digital content provider 110, a terminal device 112, etc.
[0025] The two-dimensional content source 102 may represent various network entities that can provide the server 104 with two-dimensional content resources suitable for reconstructing a virtual space. Two-dimensional content may include, e.g., images, videos, etc. An image may include, e.g., a game screen, a building picture, etc. A video may include, e.g., a movie, a performance video, a sports event video, etc., and it may be a pre-recorded video or a live video. It should be appreciated that although the above part and other parts of the present disclosure may involve adding digital content in a virtual space reconstructed based on two-dimensional content such as images, videos, etc., the embodiments of the present disclosure are not limited to this, but digital content may be added in a virtual space obtained through other methods, such as a directly created virtual space.
[0026] The digital content provider 110 may refer to various network entities that can provide digital content to the server 104, e.g., terminal devices, network platforms, etc. operated by a creator, owner, or operator of digital content.
[0027] The server 104 and the terminal device 112 may cooperate to reconstruct a corresponding virtual space based on the two-dimensional content provided by the two-dimensional content source 102, and embed digital content selected from the digital content provided by the digital content provider 110 in the virtual space. The server 104 may be a computer or computer system remote to the user 122. The terminal device 112 may be any local device that can provide a user with a virtual space experience, which may also be referred to as a virtual space application device. Preferably, in order to provide a user with an immersive experience, the terminal device 112 may be a head-mounted display device capable of guiding the user to produce multi-sensory experiences in a virtual space related to vision, hearing, touch, etc. Taking a head-mounted display device that adopts the VR technology to provide a virtual space as an example, it may also be referred to as VR glasses, VR goggles, a VR helmet, etc. in terms of its appearance. The server 104 may include, e.g., a processing unit 106, a storage unit 108, etc. The terminal device 112 may include, e.g., a presenting unit 114, a behavior identifying unit 116, a processing unit 118, a storage unit 120, etc.
[0028] The storage unit 108 in the server 104 may, e.g., store two-dimensional content obtained from the two-dimensional content source 102. The processing unit 106 in the server 104 may, e.g., implement processes relevant to virtual space reconstruction, digital content selection, etc. For example, the processing unit 106 may adopt, e.g., a three-dimensional reconstruction technology to reconstruct the two-dimensional content into an initial virtual space, and provide the initial virtual space to the terminal device 112. The processing unit 118 in the terminal device 112 may further reconstruct the initial virtual space into a virtual space corresponding to the user 122 based on data collected on site, e.g., information of the user 122. For example, the initial virtual space may be transformed into a virtual space suitable for the user 122 based on the current location, perspective, etc. of the user 122. An illustrative view transformation is sketched below.
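As an illustration of this perspective transformation (the patent does not specify one), a standard look-at view matrix built from the user's current position and gaze direction could re-express the initial virtual space for the user's viewpoint. The coordinates below are assumed values, not from the disclosure.

# Illustrative sketch: re-expressing an initial virtual space for a user's
# current viewpoint with a standard look-at view matrix.
import numpy as np

def look_at(eye, target, up=np.array([0.0, 1.0, 0.0])):
    """Build a 4x4 view matrix from the user's position and gaze target."""
    f = target - eye
    f = f / np.linalg.norm(f)            # forward direction
    s = np.cross(f, up)
    s = s / np.linalg.norm(s)            # right direction
    u = np.cross(s, f)                   # corrected up direction
    view = np.eye(4)
    view[0, :3], view[1, :3], view[2, :3] = s, u, -f
    view[:3, 3] = -view[:3, :3] @ eye    # translate world into eye space
    return view

# Assumed example: the user stands 5 m from the stage, eyes at 1.7 m.
user_position = np.array([0.0, 1.7, 5.0])
stage_center = np.array([0.0, 2.0, 0.0])
V = look_at(user_position, stage_center)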
[0029] The storage unit 108 in the server 104 may also store digital content obtained from the digital content provider 110. The processing unit 106 may preliminarily select a predetermined number of candidate digital content from the digital content stored in the storage unit 108. The preliminary selection may be performed for a specific user and based on the user's basic information. The user's basic information may include, e.g., one or more of a user profile, operation history, social network information of the user, and any other user-relevant information. Herein, a user profile may refer to various information about a user, such as identifier, gender, age, preferences, etc. An operation history may include information associated with a user's past operations, such as clicked digital content, purchased goods, etc. Social network information may include information relevant to a user's friends, colleagues, and relatives. Taking the preliminary selection of digital content for the user 122 of the terminal device 112 as an example, the processing unit 106 may rank the digital content provided by the digital content provider 110 based on the basic information of the user 122, and select a predetermined number of top-ranked digital content from it, as shown in the sketch below. The selected set of digital content may be provided to the terminal device 112 as a set of candidate digital content for embedding in the virtual space.
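A hypothetical sketch of this server-side preliminary selection: rank the available digital content by a relevance score computed from the user's basic information and keep a predetermined number of top-ranked items. The relevance function here is a placeholder, not the patent's ranking method.

# Minimal sketch of the server-side preliminary selection, under the
# assumption that some relevance(item, basic_info) score is available.
from typing import Callable

def preselect_candidates(digital_content: list[dict],
                         basic_info: dict,
                         relevance: Callable[[dict, dict], float],
                         k: int = 20) -> list[dict]:
    """Rank digital content for one user and keep the top-k as candidates."""
    ranked = sorted(digital_content,
                    key=lambda item: relevance(item, basic_info),
                    reverse=True)
    return ranked[:k]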
[0030] The terminal device 112 may previously receive the set of candidate digital content from the server 104, e.g., at regular intervals, before presenting the virtual space, or when a user logs in. In this way, bandwidth and/or computing resource usage when presenting the virtual space may be reduced, which is especially beneficial in the case of virtual spaces based on live streaming. The terminal device 112 may store the set of candidate digital content in the storage unit 120 for further use.
[0031] The processing unit 118 in the terminal device 112 may embed digital content in the virtual space, e.g., may determine digital content to be added in the virtual space from the set of candidate digital content and add the determined digital content in the virtual space. For example, the digital content to be added in the virtual space may be determined based at least on information associated with a behavior of a user in the virtual space. An exemplary process for embedding digital content in a virtual space will be described later in conjunction with FIG. 2.
[0032] A behavior of a user in a virtual space may include, e.g., a behavior relevant to existing digital content performed by the user in the virtual space, which may also be referred to as a digital content interactive behavior. The digital content interactive behavior may include, e.g., watching, walking toward, or touching digital content. Such behavior may indicate whether the user is currently interested in the existing digital content. The behavior of the user in the virtual space may also include, e.g., a behavior relevant to an environment object performed by the user in the virtual space, which may also be referred to as an environment interactive behavior. The environment interactive behavior may include, e.g., walking toward a stage, waving to a singer, etc. Such behavior may indicate current characteristics of a user, e.g., personality, hobbies, habits, etc.
[0033] Behavior information associated with a behavior may be extracted. The behavior information may include, e.g., various types of information associated with a behavior, such as type, start time, duration, pause time, start position, direction, amplitude, etc. The type may indicate a category that an action associated with the behavior belongs to, including, e.g., watching, walking, waving, touching, etc.; the start time may indicate the time when the behavior begins to occur; the duration may indicate how long the behavior lasts; the pause time may indicate how long there is a pause during the behavior; the start position may indicate a position where the behavior begins; the direction may indicate a direction in which the behavior proceeds; the amplitude may indicate an extent of progress of the behavior; and the like. Take the behavior "walking toward a stage" as an example. The type of this behavior may be "walking", the start time may be when the user started walking, the duration may be how long the user has walked, the pause time may be how long the user has paused during walking, the start position may be the position where the user started to walk, the direction may be "toward the stage", the amplitude may be the distance walked, etc. In addition, for a digital content interactive behavior relevant to existing digital content, the associated behavior information may further include an identifier of the digital content. The information associated with a digital content interactive behavior may be referred to as digital content interactive behavior information, and the information associated with an environment interactive behavior may be referred to as environment interactive behavior information. A possible record layout for such information is sketched below.
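A minimal sketch of how one such behavior information record might be represented, assuming one record per detected behavior; the field names and example values are illustrative, not from the patent.

# One record per detected behavior, covering the fields listed above.
from dataclasses import dataclass
from typing import Optional

@dataclass
class BehaviorInfo:
    type: str                      # e.g., "watching", "walking", "waving", "touching"
    start_time: float              # when the behavior begins, in seconds
    duration: float                # how long the behavior lasts
    pause_time: float              # total pause during the behavior
    start_position: tuple[float, float, float]  # where the behavior begins
    direction: str                 # e.g., "toward the stage"
    amplitude: float               # extent of the behavior, e.g., distance walked
    digital_content_id: Optional[str] = None    # only for digital content interactive behaviors

# The "walking toward a stage" example from the paragraph above:
walk = BehaviorInfo(type="walking", start_time=12.0, duration=6.5, pause_time=1.0,
                    start_position=(3.0, 0.0, 8.0), direction="toward the stage",
                    amplitude=5.0)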
[0034] Through the behavior identifying unit 116 in the terminal device 112, the behavior of the user 122 in the virtual space may be detected, and behavior information associated with these behaviors may be obtained. The behavior identifying unit 116 may include, e.g., a sensor, a camera, etc. The sensor may include, e.g., a gravity sensor, a location sensor, an eye tracking sensor, etc. The gravity sensor may use the feature that its internal crystal deforms due to acceleration to detect actions relevant to acceleration, such as tilting, lying down, going uphill, going downhill, etc. The location sensor may detect a location of a user and assist in detecting behaviors relevant to location change, such as walking, running, etc. The eye tracking sensor may track the position of a user's sight line to detect the user's viewing behavior. The camera may be used to detect actions of the trunk and/or limbs of a user, such as raising a hand, waving a hand, raising a leg, etc.
[0035] The virtual space embedded with digital content may be presented to the user 122 through the presenting unit 114 in the terminal device 112. The presenting unit 114 may be a display screen, a display, etc. In addition, the presenting unit 114 may also present the user 122 with other information, such as a login interface, a usage prompt, etc.
[0036] It should be appreciated that all network entities included in the architecture 100 are exemplary, and according to actual application scenarios and requirements, the architecture 100 may include more or fewer network entities, and these network entities may be combined and divided in any manner. For example, multiple units in the terminal device 112 may be separated in location, e.g., the behavior identifying unit 116 may be separated in location from other units in the terminal device 112. In addition, although only one terminal device 112 is shown in the architecture 100, there may be a different number of terminal devices connected to the server 104 through the network. In addition, although the digital content provider 110 is shown as a single network entity, it may also represent multiple network entities capable of providing digital content.
[0037] FIG. 2 illustrates an exemplary process 200 for embedding digital content in a virtual space according to an embodiment of the present disclosure. The process 200 may be performed by a terminal device, e.g., the terminal device 112 in FIG. 1. Through the process 200, digital content 230 may be selected from a set of candidate digital content 202 based on basic features 214 corresponding to a user's basic information 206 and/or behavior information 224 associated with the user's behaviors in a virtual space 232, and the selected digital content 230 may be added in the virtual space 232 to obtain an updated virtual space 236. When selecting digital content, each candidate digital content in the set of candidate digital content 202 may be scored through a scoring model 228, and the candidate digital content with a highest score may be selected as the digital content 230.
[0038] The set of candidate digital content 202 and its corresponding set of candidate digital content features 204 may be obtained. The set of candidate digital content 202 may include n candidate digital content, such as candidate digital content 202-1, candidate digital content 202-2, ..., candidate digital content 202-n. The set of candidate digital content features 204 may include candidate digital content features corresponding to each candidate digital content in the set of candidate digital content 202, such as candidate digital content feature 204-1, candidate digital content feature 204-2, ..., candidate digital content feature 204-n. Herein, a feature may refer to an information set generated based on raw data in a form that is conducive to being processed by a deep learning model. Taking a candidate digital content feature 204-i (1 ≤ i ≤ n) as an example, it may be an information set generated based on a candidate digital content 202-i in a form that is conducive to being processed by the scoring model 228. Preferably, each candidate digital content feature may be generated at the server and provided to a terminal device together with the corresponding candidate digital content.
[0039] The basic features 214 may include, e.g., a user profile feature 216, an operation history feature 218, a social network feature 220, etc., each of which may be generated based on corresponding information in the user's basic information 206. For example, the user profile feature 216 may be generated based on the user profile 208, the operation history feature 218 may be generated based on the operation history 210, the social network feature 220 may be generated based on the social network information 212, etc. The various basic features may be generated at the terminal device. Alternatively, they may also be generated at the server and provided to the terminal device.
[0040] At 222, a behavior identifying operation may be performed, e.g., detecting a behavior of a user in the virtual space 232, and obtaining behavior information 224 associated with the behavior. For example, through a behavior identifying unit in the terminal device, the behavior may be detected, and information associated with the behavior may be obtained.
[0041] At 226, the digital content 230 to be added in the virtual space 232 may be selected from the set of candidate digital content 202 based on the behavior information 224 and/or the basic features 214. In an implementation, each candidate digital content in the set of candidate digital content 202 may be scored through the scoring model 228, and candidate digital content with a highest score in the set of candidate digital content 202 may be selected as the digital content 230. An exemplary structure of the scoring model 228 and the corresponding scoring process will be described later in conjunction with FIG. 3.
[0042] After the digital content 230 is selected, at 234, the digital content 230 may be added in the virtual space 232 to obtain an updated virtual space 236. In an implementation, the digital content may be added at a predefined location in the virtual space. For example, a set of locations suitable for adding digital content may be previously detected and/or created. Taking reconstruction of a virtual space based on a video and adding two-dimensional digital content in the virtual space as an example, some quadrangle regions, such as a billboard, a picture frame, a display screen, etc., may be previously detected from the video. Alternatively or additionally, a set of quadrangle regions may be previously created in the video, e.g., a quadrangle region may be created in an open area detected in the video, wherein an open area may refer to an area that does not contain a significant environment object or in which an environment object does not produce significant relative movement, such as a wall, a floor, a table, a stadium stand, etc. A previously detected or previously created quadrangle region may be used as a location for adding two-dimensional digital content. In addition, an open area detected in the video may also be used as a predefined location suitable for adding three-dimensional digital content. Previously defining a location suitable for adding digital content may reduce bandwidth and/or computing resource usage when presenting a virtual space. In another implementation, the location in the virtual space suitable for adding digital content may be determined in a real-time and dynamic manner. For example, for a virtual space based on a live video, the content appearing in the live video is not completely known before the live video starts. In this case, a quadrangle region suitable for adding two-dimensional digital content may be dynamically detected or created from the video in real time, and/or an open region suitable for adding three-dimensional digital content may be dynamically detected from the video in real time. A possible quadrangle detection step is sketched below.
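As one possibility (the patent does not prescribe a detector), quadrangle regions such as billboards or screens could be found in a video frame with standard contour approximation, e.g., using OpenCV. The edge-detection and area thresholds below are illustrative assumptions.

# Sketch: find quadrangle regions in a frame as candidate ad locations.
import cv2
import numpy as np

def detect_quadrangles(frame: np.ndarray, min_area: float = 5000.0) -> list[np.ndarray]:
    """Return the corner points of large four-sided contours in the frame."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    quads = []
    for c in contours:
        # Approximate each contour by a polygon; keep large quadrilaterals.
        approx = cv2.approxPolyDP(c, 0.02 * cv2.arcLength(c, True), True)
        if len(approx) == 4 and cv2.contourArea(approx) >= min_area:
            quads.append(approx.reshape(4, 2))  # four (x, y) corner points
    return quads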
[0043] It should be appreciated that the process 200 in FIG. 2 is only an example of the process for digital content embedding. According to actual application requirements, the process for realizing digital content embedding may include any other steps, and may include more or fewer steps. For example, when selecting digital content, it is feasible to consider only a user's behavior information without considering the user's basic features. In addition, the specific order or hierarchy of the steps in the process 200 is only exemplary, and the process for embedding digital content may be performed in an order different from the described one.
[0044] FIG. 3 illustrates an exemplary scoring model 300 according to an embodiment of the present disclosure. The scoring model 300 may be located in a terminal device, e.g., in the terminal device 112 in FIG. 1, and may correspond to, e.g., the scoring model 228 in FIG. 2. The scoring model 300 may score each candidate digital content in a set of candidate digital content based on behavior information 302 and/or basic features 308 of a user. For example, the user's behavior information 302, the basic features 308 corresponding to the user's basic information, and a specific candidate digital content feature corresponding to a specific candidate digital content may be provided to the scoring model 300 as input. The following takes a candidate digital content feature 316 corresponding to a candidate digital content as an example to illustrate the processing procedure of the scoring model 300. The scoring model 300 may process the behavior information 302, the basic features 308, the candidate digital content feature 316, etc., to generate a score 344 for the candidate digital content corresponding to the candidate digital content feature 316.
[0045] The behavior information 302 may include, e.g., digital content interactive behavior information 304, environment interactive behavior information 306, etc. The basic features 308 may include, e.g., a user profile feature 310, an operation history feature 312, a social network feature 314, etc.
[0046] The digital content interactive behavior information 304 may be an information set associated with one or more digital content interactive behaviors of a user in a virtual space. Each information subset in the information set corresponds to a digital content interactive behavior, and may include various types of behavior information associated with the digital content interactive behavior, such as digital content identifier, type, start time, duration, pause time, start position, direction, amplitude, etc. A digital content interactive behavior feature 324 corresponding to the digital content interactive behavior information 304 may be generated through an embedding layer 318, a fully-connected layer 320, and a Recurrent Neural Network (RNN) layer 322 in the scoring model 300. Similarly, the environment interactive behavior information 306 may be an information set associated with one or more environment interactive behaviors of a user in a virtual space. Each information subset in the information set corresponds to an environment interactive behavior, and may include various types of behavior information associated with the environment interactive behavior, such as type, start time, duration, pause time, start position, direction, amplitude, etc. An environment interactive behavior feature 332 corresponding to the environment interactive behavior information 306 may be generated through an embedding layer 326, a fully-connected layer 328, and an RNN layer 330 in the scoring model 300. Preferably, the RNN layer 322/330 may be an RNN layer with an attention mechanism introduced to improve the performance of feature generation for long sequence input.
[0047] Through a concatenation and flattening layer 334, the digital content interactive behavior feature 324, the environment interactive behavior feature 332, the basic features 308, and the candidate digital content feature 316 may be concatenated, and the dimensionality of the concatenated features may be reduced through a flattening operation, to obtain a joint representation 336.
[0048] At a multi-task layer 338, the joint representation 336 may be scored through constructing multiple tasks corresponding to multiple indexes, to obtain multiple sub-scores 340 for the candidate digital content corresponding to the candidate digital content feature 316. Herein, a score corresponding to each task may be referred to as a sub-score. The indexes may include, e.g., the type and/or duration of the user's behavior of interacting with the candidate digital content. Each task in the multi-task layer 338 may predict the value of one of these indexes for the candidate digital content, and score the candidate digital content based on the predicted value. As an example, for an index about the type of the user's behavior of interacting with candidate digital content, a corresponding task may be to predict the type of the user's behavior of interacting with the candidate digital content corresponding to the candidate digital content feature 316, and generate a sub-score corresponding to the predicted type. For example, a behavior type "watching" may have a higher sub-score, while a behavior type "waving a hand" may have a lower sub-score. As another example, for an index about the duration of the user's behavior of interacting with candidate digital content, a corresponding task may be to predict the duration of the user's behavior of interacting with the candidate digital content corresponding to the candidate digital content feature 316, and generate a sub-score corresponding to the predicted duration. For example, a sub-score corresponding to a longer duration may be higher, while a sub-score corresponding to a shorter duration may be lower.
[0049] At an aggregation layer 342, the multiple sub-scores 340 output by the multi-task layer 338 may be aggregated, to obtain a score 344 for the candidate digital content corresponding to the candidate digital content feature 316. In an implementation, the score 344 may be obtained through directly summing the multiple sub-scores 340. In another implementation, the score 344 may be obtained through a weighted summation of the multiple sub-scores 340. For example, according to actual application requirements, different weights may be set for various indexes, e.g., a larger weight may be set for an index of greater concern, so that candidate digital content that performs better on that index may be selected. A condensed sketch of this model structure follows.
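To make the layered structure concrete, below is a condensed PyTorch sketch under stated assumptions: layer sizes, the vocabulary size, the behavior-token encoding, and the two task heads (behavior type and duration) are illustrative; a GRU stands in for the RNN and the attention mechanism is omitted for brevity. This is a sketch of the described structure, not the patent's implementation.

import torch
import torch.nn as nn

class BehaviorBranch(nn.Module):
    """Embedding -> fully-connected -> RNN over one behavior sequence."""
    def __init__(self, vocab=1000, emb=32, hidden=64):
        super().__init__()
        self.emb = nn.Embedding(vocab, emb)
        self.fc = nn.Linear(emb, hidden)
        self.rnn = nn.GRU(hidden, hidden, batch_first=True)

    def forward(self, seq):                        # seq: (batch, steps) of behavior tokens
        h = torch.relu(self.fc(self.emb(seq)))
        _, last = self.rnn(h)
        return last.squeeze(0)                     # (batch, hidden) behavior feature

class ScoringModel(nn.Module):
    def __init__(self, basic_dim=64, content_dim=64, hidden=64):
        super().__init__()
        self.content_branch = BehaviorBranch()     # digital content interactive behaviors
        self.env_branch = BehaviorBranch()         # environment interactive behaviors
        joint = 2 * hidden + basic_dim + content_dim
        self.type_head = nn.Linear(joint, 1)       # sub-score from predicted behavior type
        self.duration_head = nn.Linear(joint, 1)   # sub-score from predicted duration
        self.index_weights = torch.tensor([1.0, 1.0])  # per-index weights, tunable

    def forward(self, content_behaviors, env_behaviors, basic_feat, candidate_feat):
        # Concatenate the two behavior features, basic features, and the
        # candidate digital content feature into a joint representation.
        joint = torch.cat([self.content_branch(content_behaviors),
                           self.env_branch(env_behaviors),
                           basic_feat, candidate_feat], dim=-1)
        sub_scores = torch.cat([self.type_head(joint),
                                self.duration_head(joint)], dim=-1)
        # Weighted aggregation of the sub-scores into one candidate score.
        return (sub_scores * self.index_weights).sum(dim=-1)

At inference time, the terminal device would run this model once per candidate and pick the candidate with the highest returned score.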
[0050] It should be appreciated that the scoring model 300 shown in FIG. 3 is only an example of the scoring model. According to actual application requirements, the scoring model may have any other structure and may include more or fewer layers.
[0051] The embodiments of the present disclosure propose that a scoring model for scoring candidate digital content, e.g., the scoring model 300 in FIG. 3, is obtained through training an initial scoring model at a set of terminal devices of a set of users. FIG. 4 illustrates an exemplary process 400 for training a scoring model according to an embodiment of the present disclosure. In the process 400, an initial scoring model may be distributed to a set of terminal devices of a set of users. Each terminal device in the set of terminal devices may be any device suitable for providing a virtual space experience, such as VR glasses, VR helmets, etc. A user may use his or her terminal device to enter a virtual space embedded with a set of digital content. Each user's behaviors for the various digital content may be used to train a corresponding initial scoring model. The scoring models trained at the various terminal devices may be aggregated into an updated scoring model.
[0052] First, the initial scoring model 402 may be distributed to a set of terminal devices 404 of a set of users, including, e.g., a terminal device 404-1, a terminal device 404-2, ..., a terminal device 404-n. Behaviors of each user may be used to train the initial scoring model in the user’s terminal device. The following takes any terminal device, e.g., a terminal device 404-i, as an example to illustrate an exemplary process for training the scoring model.
[0053] A user of the terminal device 404-i may use the terminal device to enter and interact with a virtual space. A set of digital content may be embedded in the virtual space. The user's behaviors for each digital content may be detected, and corresponding behavior information may be obtained. Subsequently, training samples may be generated based on this behavior information. A set of training samples corresponding to the set of digital content may be combined into a training set, and the initial scoring model in the terminal device 404-i may be trained with the training set, to obtain a trained scoring model 406-i.
[0054] After obtaining a set of trained scoring models 406, such as a trained scoring model 406-1, a trained scoring model 406-2, ..., a trained scoring model 406-n, the set of trained scoring models 406 may be aggregated into an updated scoring model 408, e.g., as in the sketch below. Preferably, fully trained scoring models may be selected from the set of trained scoring models 406 for aggregation. For example, a trained scoring model that has a high user usage rate and is trained with more training samples may be selected.
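The patent leaves the aggregation algorithm open; a minimal sketch under a federated-averaging assumption, where the updated model's parameters are the element-wise mean of the locally trained models' parameters, could look as follows. The function name and averaging choice are illustrative.

# Sketch: aggregate locally trained scoring models by parameter averaging.
import copy
import torch

def aggregate_scoring_models(trained_models: list[torch.nn.Module]) -> torch.nn.Module:
    """Average the parameters of the locally trained scoring models."""
    updated = copy.deepcopy(trained_models[0])
    with torch.no_grad():
        for name, param in updated.named_parameters():
            stacked = torch.stack([dict(m.named_parameters())[name].data
                                   for m in trained_models])
            param.copy_(stacked.mean(dim=0))  # element-wise mean across devices
    return updated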
[0055] The updated scoring model 408 may be directly deployed for scoring candidate digital content. Preferably, the process 400 may be performed iteratively. For example, after the updated scoring model 408 is obtained, it may be distributed to a set of terminal devices 404, and the above process may be performed again until the performance of the model converges.
[0056] In the process 400, user behaviors are only used locally at a user's device for training and are not uploaded to a server, thus leakage of user privacy may be avoided. In addition, the updated scoring model obtained through aggregating the models of various users does not carry information that is strongly relevant to a specific user, thus user privacy may be further protected.
[0057] It should be appreciated that the process 400 in FIG. 4 is only an example of the process for training a scoring model. According to actual application requirements, the process for training a scoring model may include any other steps, and may include more or fewer steps. In addition, the specific order or hierarchy of the steps in the process 400 is only exemplary, and the process for training a scoring model may be performed in an order different from the described one.
[0058] FIG. 5 illustrates an exemplary terminal device 500 according to an embodiment of the present disclosure. The terminal device 500 may have the appearance of, e.g., glasses. The terminal device 500 is able to present a virtual space and other information, such as a login interface and a usage prompt, to a user through a presenting unit (not shown). The presenting unit may be a display screen, a display, etc., which may be placed in an area facing the user's eyes in the terminal device 500. In addition, through a behavior identifying unit, behaviors of the user in the virtual space may be detected, and behavior information associated with these behaviors may be obtained. The behavior identifying unit may include, e.g., a gravity sensor 502, a location sensor 504, and cameras 506, 508, 510, and 512, etc.
[0059] The gravity sensor 502 may use the feature that its internal crystal deforms due to acceleration to detect actions relevant to acceleration, such as tilting, lying down, going uphill, going downhill, etc. The location sensor 504 may detect a location of a user and assist in detecting behaviors relevant to location change, such as walking, running, etc. The cameras may be used to detect actions of the trunk and/or limbs of a user. For example, the cameras 506 and 508 may be placed at the front of the terminal device 500, and their orientation may be suitable for detecting actions that occur in front of the user, e.g., hand actions such as raising a hand, waving a hand, etc. In addition, the cameras 506 and 508 may also detect environment objects, digital content, etc. in the virtual space. Preferably, the cameras 506 and 508 may be placed on the left and right sides of the front of the terminal device 500, respectively, so as to detect actions of the left hand and the right hand, respectively. The cameras 510 and 512 may be placed at the bottom of the terminal device 500, and their orientation may be suitable for detecting actions occurring in front of or below the user, e.g., actions of the trunk and/or limbs of the user such as walking, kicking a leg, turning around, etc.
[0060] These behavior identifying units may determine a behavior of a user based on the detected actions. In an implementation, the behavior may be jointly determined based on the actions detected by multiple behavior identifying units, e.g., with a fusion rule like the one sketched below. For example, when the camera 510 and/or 512 detects that a user is walking, and the location sensor 504 detects that the user's location is changing and approaching a stage, it may be determined that the behavior of the user is "walking toward the stage". After the user behavior is determined, behavior information associated with the behavior may be extracted accordingly.
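A minimal sketch of such a joint determination, assuming each behavior identifying unit reports simple observations; the observation names, the threshold, and the rule itself are hypothetical, chosen to mirror the "walking toward the stage" example above.

# Sketch: fuse camera and location sensor observations into one behavior.
from typing import Optional

def determine_behavior(camera_sees_walking: bool,
                       distance_to_stage_delta: float) -> Optional[str]:
    """Jointly decide the behavior from camera and location sensor readings."""
    if camera_sees_walking and distance_to_stage_delta < 0:
        return "walking toward the stage"   # distance to the stage is shrinking
    if camera_sees_walking:
        return "walking"
    return None                             # no recognized behavior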
[0061] It should be appreciated that the terminal device 500 in FIG. 5 is only an example of the terminal device. According to actual application requirements, the terminal device may include any other units and may include more or fewer units. For example, the terminal device may further comprise a processing unit, for determining digital content based at least on the behavior information and adding the digital content in a virtual space; and a storage unit, for storing a set of candidate digital content. In addition, the location and number of each behavior identifying unit shown in FIG. 5 are exemplary. According to actual application requirements, the behavior identifying units may be placed at other locations and in other numbers. In addition, the terminal device 500 may integrate the behavior identifying unit with other units, so that a user may switch between a real environment and a virtual space with only a single device, thereby obtaining a more realistic immersive experience. It should be appreciated that one or more of these behavior identifying units may also be separated in location from other units in the terminal device 500, e.g., some sensors or cameras may be located outside the terminal device 500.
[0062] FIG. 6 illustrates a flowchart of an exemplary method 600 for embedding digital content in a virtual space according to an embodiment of the present disclosure.
[0063] At 610, a behavior of a user in the virtual space may be detected.
[0064] At 620, behavior information associated with the behavior may be obtained.
[0065] At 630, digital content may be determined based at least on the behavior information.
[0066] At 640, the digital content may be added in the virtual space.
[0067] In an implementation, the virtual space may be constructed by adopting VR technology.
[0068] In an implementation, the behavior may include at least one of: a digital content interactive behavior relevant to existing digital content performed by the user in the virtual space; and an environment interactive behavior relevant to an environment object performed by the user in the virtual space.
[0069] In an implementation, the behavior information may include at least one of the following information associated with the behavior: type, start time, duration, pause time, start position, direction, and amplitude. In the case that the behavior includes a digital content interactive behavior relevant to existing digital content performed by the user in the virtual space, the behavior information may further include an identifier of the existing digital content.
[0070] In an implementation, the determining digital content may be further based at least on one of a user profile, operation history, and social network information of the user.
[0071] In an implementation, the virtual space may be presented through a terminal device. The behavior may be detected through a behavior identifying unit in the terminal device.
[0072] The behavior identifying unit may comprise a gravity sensor and/or a location sensor.
[0073] The behavior identifying unit may comprise at least one camera, an orientation of which being suitable for detecting actions of trunk and/or limbs of the user.
[0074] In an implementation, the determining digital content may comprise: selecting the digital content from a set of candidate digital content based at least on the behavior information.
[0075] The selecting the digital content comprises: for each candidate digital content in the set of candidate digital content, scoring the candidate digital content based at least on the behavior information; and selecting candidate digital content with a highest score in the set of candidate digital content as the digital content.
[0076] The scoring the candidate digital content may comprise: predicting a type and/or duration of the user’s behavior of interacting with the candidate digital content based at least on the behavior information and the candidate digital content; and calculating a score of the candidate digital content based on the predicted type and/or duration.
[0077] The scoring the candidate digital content may be performed through a deep learning model in a terminal device. The deep learning model may be obtained through training an initial deep learning model at a set of terminal devices of a set of users.
[0078] The set of candidate digital content may be previously received from a server and stored in a terminal device.
[0079] In an implementation, the adding the digital content may comprise: adding the digital content at a predefined location in the virtual space.
[0080] In an implementation, the method 600 may be performed at a terminal device.
[0081] It should be appreciated that the method 600 may further comprise any step/process for embedding digital content in a virtual space according to the embodiments of the present disclosure as described above.
[0082] FIG. 7 illustrates an exemplary virtual space application device 700 according to an embodiment of the present disclosure.
[0083] The virtual space application device 700 may comprise: a presenting unit 710, for presenting a virtual space; a behavior identifying unit 720, for detecting a behavior of a user in the virtual space, and obtaining behavior information associated with the behavior; and a processing unit 730, for determining digital content based at least on the behavior information, and adding the digital content in the virtual space.
[0084] In an implementation, the behavior identifying unit 720 may comprise at least one of: a gravity sensor; a location sensor; and at least one camera, an orientation of which being suitable for detecting actions of trunk and/or limbs of the user.
[0085] In an implementation, the virtual space application device 700 may further comprise: a storage unit 740, for storing a set of candidate digital content. The processing unit 730 may be further for determining the digital content from the set of candidate digital content based at least on the behavior information.
[0086] It should be appreciated that the virtual space application device 700 may further comprise any other units configured for embedding digital content in a virtual space according to the embodiments of the present disclosure as described above. In addition, each unit included in the virtual space application device 700 may also perform any related operations for embedding digital content in a virtual space according to the embodiments of the present disclosure as described above.
[0087] FIG. 8 illustrates an exemplary apparatus 800 for embedding digital content in a virtual space according to an embodiment of the present disclosure.
[0088] The apparatus 800 may comprise at least one processor 810. The apparatus 800 may further comprise a memory 820 coupled with the processor 810. The memory 820 may store computer-executable instructions that, when executed, cause the processor 810 to: receive behavior information associated with a behavior of a user in the virtual space; determine digital content based at least on the behavior information; and add the digital content in the virtual space.
[0089] It should be appreciated that the processor 810 may further perform any operations for embedding digital content in a virtual space according to the embodiments of the present disclosure as described above.
[0090] The embodiments of the present disclosure propose a computer program product for embedding digital content in a virtual space, comprising a computer program that is executed by at least one processor for: receiving behavior information associated with a behavior of a user in the virtual space; determining digital content based at least on the behavior information; and adding the digital content in the virtual space.
[0091] It should be appreciated that the computer program may also be executed for implementing any operations for embedding digital content in a virtual space according to the embodiments of the present disclosure as described above.
[0092] The embodiments of the present disclosure propose an apparatus for embedding digital content in a virtual space. The apparatus may comprise: a behavior information receiving module, for receiving behavior information associated with a behavior of a user in the virtual space; a digital content determining module, for determining digital content based at least on the behavior information; and a digital content adding module, for adding the digital content in the virtual space.
[0093] It should be appreciated that the apparatus may further comprise any other modules configured for embedding digital content in a virtual space according to the embodiments of the present disclosure as described above.
[0094] The embodiments of the present disclosure may be embodied in non-transitory computer-readable medium. The non-transitory computer readable medium may comprise instructions that, when executed, cause one or more processors to perform any operation of a method for embedding digital content in a virtual space according to embodiments of the present disclosure as described above.
[0095] It should be appreciated that all the operations in the methods described above are merely exemplary, and the present disclosure is not limited to any operations in the methods or sequence orders of these operations, and should cover all other equivalents under the same or similar concepts.
[0096] It should also be appreciated that all the modules in the apparatuses described above may be implemented in various approaches. These modules may be implemented as hardware, software, or a combination thereof. Moreover, any of these modules may be further functionally divided into sub-modules or combined together.
[0097] Processors have been described in connection with various apparatuses and methods. These processors may be implemented using electronic hardware, computer software, or any combination thereof. Whether such processors are implemented as hardware or software will depend upon the particular application and overall design constraints imposed on the system. By way of example, a processor, any portion of a processor, or any combination of processors presented in the present disclosure may be implemented with a microprocessor, microcontroller, digital signal processor (DSP), a field-programmable gate array (FPGA), a programmable logic device (PLD), a state machine, gated logic, discrete hardware circuits, and other suitable processing components configured to perform the various functions described throughout the present disclosure. The functions of a processor, any portion of a processor, or any combination of processors presented in this disclosure may be implemented with software executed by a microprocessor, a microcontroller, a DSP, or other suitable platforms.
[0098] Software shall be construed broadly to mean instructions, instruction sets, code, code segments, program code, programs, subprograms, software modules, applications, software applications, software packages, routines, subroutines, objects, threads of execution, procedures, functions, etc. The software may reside on a computer-readable medium. A computer-readable medium may include, e.g., a memory, which may be, e.g., a magnetic storage device (e.g., hard disk, floppy disk, magnetic strip), an optical disk, a smart card, a flash memory device, random access memory (RAM), read-only memory (ROM), programmable ROM (PROM), erasable PROM (EPROM), electrically erasable PROM (EEPROM), a register, or a removable disk. Although a memory is shown separate from a processor in the various aspects presented throughout the present disclosure, the memory may be internal to the processor, e.g., a cache or register.
[0099] The previous description is provided to enable any person skilled in the art to practice the various aspects described herein. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects. Thus, the claims are not intended to be limited to the aspects shown herein. All structural and functional equivalents to the elements of the various aspects described throughout the present disclosure that are known or later come to be known to those of ordinary skill in the art are expressly incorporated herein and encompassed by the claims.

Claims

1. A method for embedding digital content in a virtual space, comprising: detecting a behavior of a user in the virtual space; obtaining behavior information associated with the behavior; determining digital content based at least on the behavior information; and adding the digital content in the virtual space.
2. The method of claim 1, wherein the virtual space is constructed by adopting Virtual Reality (VR) technology.
3. The method of claim 1, wherein the behavior includes at least one of: a digital content interactive behavior relevant to existing digital content performed by the user in the virtual space; and an environment interactive behavior relevant to an environment object performed by the user in the virtual space.
4. The method of claim 1, wherein the behavior information includes at least one of the following information associated with the behavior: type, start time, duration, pause time, start position, direction, and amplitude.
5. The method of claim 4, wherein the behavior includes a digital content interactive behavior relevant to existing digital content performed by the user in the virtual space, and the behavior information further includes an identifier of the existing digital content.
6. The method of claim 1, wherein the determining digital content is further based at least on one of user profile, operation history, and social network information of the user.
7. The method of claim 1, wherein the virtual space is presented through a terminal device, and the behavior is detected through a behavior identifying unit in the terminal device.
8. The method of claim 7, wherein the behavior identifying unit comprises a gravity sensor and/or a location sensor.
9. The method of claim 7, wherein the behavior identifying unit comprises at least one camera, an orientation of which is suitable for detecting actions of the trunk and/or limbs of the user.
10. The method of claim 1, wherein the determining digital content comprises: selecting the digital content from a set of candidate digital content based at least on the behavior information.
11. The method of claim 10, wherein the selecting the digital content comprises: for each candidate digital content in the set of candidate digital content, scoring the candidate digital content based at least on the behavior information; and selecting candidate digital content with a highest score in the set of candidate digital content as the digital content.
12. The method of claim 11, wherein the scoring the candidate digital content comprises: predicting a type and/or duration of the user’s behavior of interacting with the candidate digital content based at least on the behavior information and the candidate digital content; and calculating a score of the candidate digital content based on the predicted type and/or duration.
13. The method of claim 11, wherein the scoring the candidate digital content is performed through a deep learning model in a terminal device, and the deep learning model is obtained through training an initial deep learning model at a set of terminal devices of a set of users.
14. A virtual space application device, comprising: a presenting unit, for presenting a virtual space; a behavior identifying unit, for detecting a behavior of a user in the virtual space, and obtaining behavior information associated with the behavior; and a processing unit, for determining digital content based at least on the behavior information, and adding the digital content in the virtual space.
15. A computer program product for embedding digital content in a virtual space, comprising a computer program that is executed by at least one processor for: receiving behavior information associated with a behavior of a user in the virtual space; determining digital content based at least on the behavior information; and adding the digital content in the virtual space.
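By way of a worked, non-authoritative example of the selection approach recited in claims 10 to 12, the sketch below scores each candidate with a stand-in predictor of interaction probability and expected interaction duration, then selects the highest-scoring candidate. The predictor, weights, and all names are assumptions chosen for illustration; claim 13 contemplates realizing such a predictor as an on-device deep learning model obtained by training an initial model at a set of users' terminal devices.

```python
from typing import Callable, Sequence, Tuple

# Hypothetical predictor mapping (behavior_info, candidate) to a predicted
# interaction probability and an expected interaction duration, per claim 12.
Predictor = Callable[[dict, str], Tuple[float, float]]

def score_candidate(info: dict, candidate: str, predict: Predictor,
                    w_type: float = 1.0, w_duration: float = 0.1) -> float:
    p_interact, expected_duration = predict(info, candidate)
    # Assumed scoring rule: weight predicted engagement probability against
    # expected interaction time; the claims do not fix a particular formula.
    return w_type * p_interact + w_duration * expected_duration

def select_content(info: dict, candidates: Sequence[str],
                   predict: Predictor) -> str:
    # Claim 11: score every candidate and return the one with the highest score.
    return max(candidates, key=lambda c: score_candidate(info, c, predict))

# Toy usage with a dummy predictor (an assumption, not the disclosed model):
if __name__ == "__main__":
    dummy: Predictor = lambda info, c: (len(c) / 10.0, 2.0)
    print(select_content({"behavior_type": "gesture"},
                         ["clip_a", "banner_long_b"], dummy))
```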
