CN120010674B - Method and system for evaluating generated graph matching of XR shed - Google Patents

Method and system for evaluating generated graph matching of XR shed

Info

Publication number
CN120010674B
Authority
CN
China
Prior art keywords
module
user
virtual
shed
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202510476407.XA
Other languages
Chinese (zh)
Other versions
CN120010674A (en)
Inventor
唐剑锋
郭超
康黎
廖展
马康宁
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sichuan Guangxin World Media Co ltd
Original Assignee
Sichuan Guangxin World Media Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sichuan Guangxin World Media Co ltd
Priority to CN202510476407.XA
Publication of CN120010674A
Application granted
Publication of CN120010674B
Legal status: Active

Abstract


The present invention provides a generated-image matching evaluation method and system for an XR studio, in the technical field of image processing. The system comprises an XR studio, a user information capturing unit, a virtual scene simulation unit, a display content generating unit, a playback module dividing unit, a perspective image processing unit, an image matching evaluation unit, and a playback module management unit. In conjunction with the method steps, the system locates each user's information according to the user's position and viewing angle and applies targeted perspective image processing, obtaining a soft-module processed image that satisfies the user's viewing experience. The image matching evaluation unit then judges whether the original soft-module generated image or the soft-module processed image gives the better effect, and the soft-module processed image is played and displayed accordingly, meeting the user's viewing needs at a specific viewing angle.

Description

Method and system for evaluating generated graph matching of XR shed
Technical Field
The invention relates to the technical field of image processing, in particular to a generated image matching evaluation method and system for an XR shed.
Background
The XR shed is a novel type of content display equipment. It resembles a box with one open side; a display wall/screen for presenting content is mounted inside the box, and a viewer standing at the opening obtains an immersive viewing experience.
Due to the XR shed's special design, its display content differs from a traditional single-plane image: at certain transition positions (such as corners and edges), image processing according to the perspective relation is needed so that the image transition between different planes looks natural, smooth and reasonable.
Existing image processing methods generally either process frames manually or batch-process them with an image processing model. Manual frame-by-frame processing can meet the visual requirements of a specific scene, so its adaptability and display effect are better, but it suffers from low processing speed and high cost.
Using an image processing model allows rapid, low-cost processing, and its output can keep improving as algorithms advance, so it is the trend for future development. Its defect is that a fixed processing model adapts poorly: users often watch the XR shed's display content from different angles, positions and heights, so content optimized for a fixed viewpoint cannot adapt to them.
Therefore, a method and system for evaluating the matching of generated images for an XR shed are needed, to solve the technical problem that content optimized for a fixed viewpoint cannot adapt when users view the XR shed's display content from different angles, positions and heights.
Disclosure of Invention
The invention aims to overcome the defects of the prior art by providing a generated-image matching evaluation method and system for an XR shed, so as to solve the technical problem that content optimized for a fixed viewpoint cannot adapt to users' different viewing demands.
To achieve the above object, the present application provides a system for evaluating a generated map matching for an XR shed, comprising:
the XR shed is used for playing the display content through a playing module, wherein the playing module comprises a hard module and a soft module;
The user information capturing unit is used for capturing user information at a viewing area in front of the XR shed;
The virtual scene simulation unit is used for carrying out virtual scene simulation on the XR shed and the viewing area;
the display content generation unit is used for generating a plane generation graph which can be played by the XR shed according to the prepared display content;
the playing module dividing unit is used for carrying out playing module division on the plane generating graph to obtain a hard module generating graph and a soft module generating graph;
the perspective image processing unit is used for selecting and performing perspective image processing on the soft module generating image according to the target visual angle to obtain a corresponding soft module processing image;
The image matching evaluation unit is used for carrying out image matching evaluation on the soft module processing diagram to obtain matching evaluation scores;
and the playing module management unit is used for judging whether the XR shed plays and displays the soft module processing diagram according to the matching evaluation score.
The user information capturing unit comprises a personnel identification module, a behavior identification module and a space positioning module, wherein the personnel identification module is used for identifying personnel in a viewing area, the behavior identification module is used for identifying the behaviors of the personnel and marking the personnel with the viewing behaviors as users, and the space positioning module is used for performing space positioning on the head positions of the users to obtain user positioning information.
As a further solution, the user information capturing unit further comprises an eye movement capturing module, wherein the eye movement capturing module is used for capturing the eye viewing angle of the user to obtain the user viewing angle information.
The virtual scene simulation unit comprises a virtual space simulation module, a virtual XR shed simulation module and a virtual view angle simulation module, wherein the virtual space simulation module is used for providing a virtual simulation space comprising a virtual viewing area and a virtual XR shed area, the virtual XR shed simulation module is used for performing XR shed simulation playing in the virtual simulation space, and the virtual view angle simulation module is used for performing virtual view angle simulation according to user information.
As a further solution, the virtual XR shed simulation module comprises a virtual XR shed model and a virtual playing module, wherein the virtual XR shed model is used for simulating the physical structure of the XR shed, and the virtual playing module comprises a virtual hard module and a virtual soft module and is arranged in the virtual XR shed model according to the real installation position.
On the other hand, the invention also provides a generated graph matching evaluation method for the XR shed, which is applied to the generated graph matching evaluation system for the XR shed, and comprises the following steps:
step 1, obtaining a prepared plane generation diagram capable of being played by an XR shed for display content generation, and generating the plane generation diagram capable of being played by the XR shed through a display content generation unit;
step 2, a play module dividing unit divides the play module for the plane generation diagram to obtain a corresponding hard module generation diagram and soft module generation diagram;
Step 3, capturing user information at a viewing area in front of the XR shed through a user information capturing unit;
Step 4, the perspective image processing unit selects a target viewing angle according to preset selection logic in combination with the user information;
Step 5, performing perspective image processing on the soft module generating diagram according to the target visual angle by a perspective image processing unit to obtain a corresponding soft module processing diagram;
step 6, performing virtual scene simulation on the XR shed and the viewing area through a virtual scene simulation unit;
step 7, the image matching evaluation unit performs image matching evaluation on the soft module processing diagram in the virtual scene simulation to obtain matching evaluation scores;
step 8, the play module management unit judges whether the matching evaluation score is larger than a replacement threshold value;
if yes, controlling the XR shed to play and display the hard module generating diagram and the soft module processing diagram;
if not, controlling the XR shed to play and display the hard module generating diagram and the soft module generating diagram;
and 9, circularly executing the steps 1 to 8 until the XR shed finishes playing and displaying.
As a further solution, the image matching evaluation unit performs image matching evaluation by:
acquiring a target visual angle and a soft module generation diagram and a soft module processing diagram which need to be evaluated;
Controlling the virtual view angle simulation module to simulate the virtual view angle according to the target view angle;
simulating and playing the soft module generating image through a virtual XR shed simulation module, and obtaining a virtual view angle image at the moment through a virtual view angle simulation module to obtain an original view angle image;
simulating and playing the soft module processing image through a virtual XR shed simulation module, and obtaining a virtual view image at the moment through a virtual view simulation module to obtain a processing view image;
evaluating the deformation degree of the original view image relative to the soft module generation image to obtain original image variable parameters;
Evaluating the deformation degree of the processed view angle graph relative to the soft module generation graph to obtain a processed graph variable parameter;
and obtaining the matching evaluation score of the corresponding soft module processing diagram under the target viewing angle as the original-image deformation parameter minus the processed-image deformation parameter.
As a still further solution, when the user is a single user, the selection logic is preset:
if the specified annotation view exists, taking the specified annotation view as a target view;
otherwise, the viewing angle position is determined by the user positioning information of the individual user,
If the eye movement capturing module is arranged, the user visual angle information of the single user is used as a target visual angle;
if the eye movement capturing module is not arranged, the default head-up view angle is used as the target view angle.
As a still further solution, when the user is a plurality of users, the selection logic is preset:
Acquiring user positioning information and user visual angle information of each user;
carrying out spatial position averaging on the user positioning information to obtain common positioning information;
carrying out spatial angle averaging on the user viewing-angle information to obtain common viewing-angle information;
and determining the viewing position from the common positioning information, and taking the common viewing-angle information as the target viewing angle.
As a still further solution, when the user is a plurality of users, gaze selection logic is also provided:
Acquiring user positioning information and user visual angle information of each user;
determining a playing module watched by each user through the user positioning information and the user visual angle information;
dividing users watching the same soft module into the same group;
carrying out space position average value calculation through the user positioning information of the same group to obtain the public positioning information of the same group;
carrying out space angle average value calculation through the same group of user visual angle information to obtain the same group of public visual angle information;
Determining a viewing angle position through the same group of public positioning information, and taking the same group of public viewing angle information as a same group of target viewing angle;
and each soft module independently replaces the target view angle according to the corresponding same group of target view angles, and the steps 1 to 8 are respectively and circularly executed until the XR shed finishes playing and displaying.
Compared with the related art, the method and the system for evaluating the generated graph matching of the XR shed have the following advantages:
According to the invention, the user's information is located according to the user's position and viewing angle, and perspective image processing is applied in a targeted manner, so that a soft-module processed image meeting the user's viewing experience is obtained; the image matching evaluation unit then judges whether the original soft-module generated image or the soft-module processed image has the better effect, and accordingly controls the soft-module processed image to be played and displayed, meeting the user's viewing needs at a specific viewing angle.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the application and together with the description, serve to explain the principles of the application.
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the related art, the drawings that are required to be used in the embodiments or the related technical descriptions will be briefly described, and it will be apparent to those skilled in the art that other drawings can be obtained according to these drawings without inventive effort.
FIG. 1 is a schematic diagram of a system for evaluating the matching of a generated graph for an XR shed;
FIG. 2 is a schematic diagram of a playback module according to the present invention;
FIG. 3 is a schematic view of a virtual scene simulation unit according to the present invention;
FIG. 4 is a schematic view of a view area in front of an XR booth according to the present invention;
Fig. 5 is a schematic diagram of steps of a method for evaluating matching of a generated graph for an XR shed.
The achievement of the objects, functional features and advantages of the present application will be further described with reference to the accompanying drawings, in conjunction with the embodiments.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is apparent that the described embodiments are some embodiments of the present invention, but not all embodiments of the present invention. The components of the embodiments of the present invention generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations.
Example 1
Referring to fig. 1, an embodiment of the present application provides a generating graph matching evaluation system for an XR shed, including:
the XR shed is used for playing the display content through a playing module, wherein the playing module comprises a hard module and a soft module;
The user information capturing unit is used for capturing user information at a viewing area in front of the XR shed;
The virtual scene simulation unit is used for carrying out virtual scene simulation on the XR shed and the viewing area;
the display content generation unit is used for generating a plane generation graph which can be played by the XR shed according to the prepared display content;
the playing module dividing unit is used for carrying out playing module division on the plane generating graph to obtain a hard module generating graph and a soft module generating graph;
the perspective image processing unit is used for selecting and performing perspective image processing on the soft module generating image according to the target visual angle to obtain a corresponding soft module processing image;
The image matching evaluation unit is used for carrying out image matching evaluation on the soft module processing diagram to obtain matching evaluation scores;
and the playing module management unit is used for judging whether the XR shed plays and displays the soft module processing diagram according to the matching evaluation score.
It should be noted that, as shown in fig. 2, a hard module is a playing module that does not need to bend or deform; such a module can directly play the desired material without considering deformation. A soft module is a playing module that is bent or deformed at an edge or corner position; because of this bending and deformation, directly displaying image content on it produces a certain distortion, which affects the user's viewing experience;
the existing XR shed does not distinguish between soft and hard modules; even when the bending of a soft module is handled, perspective transformation is performed for a fixed viewing angle to offset the effect of the deformation, so a good viewing experience is obtained only at that fixed viewing angle, and the viewing demands of users at other viewing angles cannot be met.
Therefore, this embodiment locates the user's information according to the user's position and viewing angle and applies targeted perspective image processing, obtaining a soft-module processed image meeting the user's viewing experience; the image matching evaluation unit then judges whether the original soft-module generated image or the soft-module processed image is better, and the better image is played and displayed, meeting the user's viewing requirement at the specific viewing angle.
Specifically, the user information capturing unit comprises a personnel identification module, a behavior identification module and a space positioning module, as shown in fig. 4, people in the viewing area are identified through the personnel identification module, then the behaviors of the people are identified through the behavior identification module, the people with the viewing behaviors are marked as users (because some people do not watch, such as passers-by and security personnel), and finally the head positions of the users are spatially positioned through the space positioning module, so that user positioning information is obtained.
In addition, in some scenes with higher requirements, an eye movement capturing module is further arranged, through the eye movement capturing module, the visual angle of a user can be accurately obtained, and a playing module (a red square block part in fig. 4 is a soft module positioned according to the visual angle of the user) watched by the user can be further positioned, so that more targeted perspective transformation processing is performed.
The prepared display content may be 3D scene material or 2D static material prepared in advance, or material generated by AI, such as text-to-image output; the display content is not limited here.
The virtual scene simulation unit is mainly used to simulate the real scene so that the desired information can be extracted from the simulated scene. It mainly comprises a virtual space simulation module, a virtual XR shed simulation module and a virtual viewing-angle simulation module: the virtual space simulation module provides a virtual simulation space comprising a virtual viewing area and a virtual XR shed area, the virtual XR shed simulation module performs XR shed simulated playback in the virtual simulation space, and the virtual viewing-angle simulation module performs virtual viewing-angle simulation according to the user information.
As shown in FIG. 3, the virtual XR booth simulation module comprises a virtual XR booth model and a virtual playing module, wherein the virtual XR booth model is used for simulating the physical structure of the XR booth, and the virtual playing module comprises a virtual hard module and a virtual soft module and is arranged in the virtual XR booth model according to the real installation position.
The virtual XR shed model and the virtual playing module are matched with each other, so that the display effect of the image can be accurately obtained in the virtual scene, and further, a data basis is provided for subsequent judgment.
Example 2
Referring to fig. 5, on the basis of embodiment 1, the present embodiment provides a method for evaluating matching of generated patterns for XR sheds, comprising the following steps:
step 1, obtaining a prepared plane generation diagram capable of being played by an XR shed for display content generation, and generating the plane generation diagram capable of being played by the XR shed through a display content generation unit;
step 2, a play module dividing unit divides the play module for the plane generation diagram to obtain a corresponding hard module generation diagram and soft module generation diagram;
Step 3, capturing user information at a viewing area in front of the XR shed through a user information capturing unit;
Step 4, the perspective image processing unit selects a target viewing angle according to preset selection logic in combination with the user information;
Step 5, performing perspective image processing on the soft module generating diagram according to the target visual angle by a perspective image processing unit to obtain a corresponding soft module processing diagram;
step 6, performing virtual scene simulation on the XR shed and the viewing area through a virtual scene simulation unit;
step 7, the image matching evaluation unit performs image matching evaluation on the soft module processing diagram in the virtual scene simulation to obtain matching evaluation scores;
step 8, the play module management unit judges whether the matching evaluation score is larger than a replacement threshold value;
if yes, controlling the XR shed to play and display the hard module generating diagram and the soft module processing diagram;
if not, controlling the XR shed to play and display the hard module generating diagram and the soft module generating diagram;
and 9, circularly executing the steps 1 to 8 until the XR shed finishes playing and displaying.
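The step 1 to 9 control flow above can be sketched as follows. All helper functions are hypothetical stand-ins for the units described in the text (module division, user capture, perspective processing, virtual-scene evaluation), not an implementation disclosed by the patent:

```python
# Hypothetical sketch of the step 1-9 loop; each helper is a dummy
# placeholder for the corresponding unit described in the text.

def divide_modules(plane_img):            # step 2: split into hard/soft parts
    return plane_img["hard"], plane_img["soft"]

def capture_users():                      # step 3: user information capture
    return [{"pos": (0.0, 1.6, 2.0), "gaze": (0.0, 0.0, -1.0)}]

def select_target_view(users):            # step 4: preset selection logic
    return users[0]["gaze"] if users else (0.0, 0.0, -1.0)

def perspective_process(soft_img, view):  # step 5: perspective transform
    return {"src": soft_img, "view": view}

def evaluate_match(orig, processed, view):  # steps 6-7: virtual-scene scoring
    return 1.0                            # dummy positive score

def run_playback_loop(frames, replace_threshold=0.0):
    """Steps 1-9: per frame, decide whether the processed soft-module
    image replaces the original one alongside the hard-module image."""
    decisions = []
    for plane_img in frames:                              # step 1
        hard_img, soft_img = divide_modules(plane_img)    # step 2
        users = capture_users()                           # step 3
        view = select_target_view(users)                  # step 4
        soft_proc = perspective_process(soft_img, view)   # step 5
        score = evaluate_match(soft_img, soft_proc, view)  # steps 6-7
        played = soft_proc if score > replace_threshold else soft_img
        decisions.append((hard_img, played))              # step 8
    return decisions                                      # step 9: loop ends
```

The per-frame loop mirrors step 9's requirement that steps 1 to 8 repeat until playback finishes.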
Furthermore, the image matching evaluation unit needs to evaluate the soft-module generated image and the soft-module processed image and determine which one better matches the user's viewing requirement, that is, which image looks more similar to the original content from the user's viewing angle. The evaluation proceeds as follows:
acquiring a target visual angle and a soft module generation diagram and a soft module processing diagram which need to be evaluated;
Controlling the virtual view angle simulation module to simulate the virtual view angle according to the target view angle;
The virtual XR shed simulation module is used for simulating and playing the soft module generation image, and the virtual view angle simulation module is used for obtaining a virtual view angle image at the moment to obtain an original view angle image, wherein the original view angle image is an image seen by a user view angle when the original view angle image is not processed;
Then, the soft module processing image is simulated and played through a virtual XR shed simulation module, and a virtual visual angle image at the moment is obtained through a virtual visual angle simulation module, so that a processing visual angle image is obtained, and the processing visual angle image is an image seen by a user visual angle when processing is carried out;
evaluating the deformation degree of the original view image relative to the soft module generation image to obtain original image variable parameters;
Evaluating the deformation degree of the processed view angle graph relative to the soft module generation graph to obtain a processed graph variable parameter;
The degree of deformation here can be measured in different ways according to the point of emphasis.
If similarity of content is the concern, the pixel-value differences between the images before and after deformation are compared directly using pixel-based metrics; the following parameters can be adopted:
Mean Squared Error (MSE)
The mean squared error of the pixel values of the two images; the smaller the value, the smaller the difference after deformation.
Mean Absolute Error (MAE)
The average of the absolute pixel-value differences; more robust to outliers than MSE.
Peak Signal-to-Noise Ratio (PSNR)
A numerical indicator derived from MSE, commonly used to evaluate image compression or reconstruction quality.
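The pixel-based metrics above can be sketched with NumPy as follows; a minimal illustration assuming grayscale images of equal shape (the function names are our own, not the patent's):

```python
import numpy as np

def mse(a, b):
    """Mean squared error between two equal-shaped images."""
    return float(np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2))

def mae(a, b):
    """Mean absolute error; more robust to outliers than MSE."""
    return float(np.mean(np.abs(a.astype(np.float64) - b.astype(np.float64))))

def psnr(a, b, max_val=255.0):
    """Peak signal-to-noise ratio in dB, derived from MSE."""
    m = mse(a, b)
    return float("inf") if m == 0 else 10.0 * np.log10(max_val ** 2 / m)
```

Identical images yield MSE = 0 and an infinite PSNR; larger deformation drives MSE/MAE up and PSNR down.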
In scenes that pay more attention to structural relevance (such as architectural displays), the degree to which the structural information of the image content is retained needs to be measured; the following parameters can be adopted:
Structural Similarity Index (SSIM)
Integrates luminance, contrast and structural information, matching human visual perception more closely.
Multi-Scale SSIM (MS-SSIM)
Computes SSIM at multiple scales, improving robustness to complex deformation.
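A single-window SSIM can be sketched as below. Note this is a simplification: the standard formulation computes SSIM over an 11x11 sliding window and averages, whereas this sketch computes one global statistic per image:

```python
import numpy as np

def global_ssim(x, y, data_range=255.0):
    """Single-window SSIM over the whole image (simplified: the usual
    form averages SSIM over local sliding windows)."""
    c1 = (0.01 * data_range) ** 2          # stabilizing constants
    c2 = (0.03 * data_range) ** 2
    x = x.astype(np.float64)
    y = y.astype(np.float64)
    mx, my = x.mean(), y.mean()            # luminance terms
    vx, vy = x.var(), y.var()              # contrast terms
    cxy = ((x - mx) * (y - my)).mean()     # structure (covariance) term
    return float(((2 * mx * my + c1) * (2 * cxy + c2))
                 / ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2)))
```

An image compared with itself scores 1; structurally inverted content scores much lower.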
In scenes focusing on geometric relationships (such as the display of vehicle mechanics simulation diagrams), metrics suited to analyzing the geometric deformation of the image (such as affine transformation and elastic deformation) are appropriate, for example:
Displacement Field
Describes the displacement vector of each pixel; commonly used for non-rigid registration (e.g., medical imaging).
Jacobian Determinant
Analyzes the volume change of a local area (determinant > 1 indicates expansion, < 1 contraction).
Strain Tensor
Describes the degree of stretching or shearing of a local deformation (e.g., the Green-Lagrange strain in engineering mechanics).
Curvature
Used to analyze changes in the curvature of a surface or contour.
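As one illustration of the geometric metrics, the Jacobian determinant of a 2-D displacement field can be computed with finite differences. This is our own sketch, not the patent's implementation; it assumes `disp` has shape (H, W, 2) in pixel units:

```python
import numpy as np

def jacobian_determinant(disp):
    """Determinant of the Jacobian of the mapping x -> x + u(x) for a
    2-D displacement field disp of shape (H, W, 2).
    det > 1 marks local expansion, det < 1 local contraction."""
    # np.gradient returns derivatives along axis 0 (y) then axis 1 (x)
    du_dy, du_dx = np.gradient(disp[..., 0])   # u = x-displacement
    dv_dy, dv_dx = np.gradient(disp[..., 1])   # v = y-displacement
    return (1.0 + du_dx) * (1.0 + dv_dy) - du_dy * dv_dx
```

For a uniform 10% stretch in both axes the determinant is 1.1 * 1.1 = 1.21 everywhere, i.e. local expansion.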
After suitable parameters are selected and measured, the matching evaluation score of the corresponding soft-module processed image under the target viewing angle is obtained from the original-image and processed-image deformation parameters; if the (positive) score exceeds the replacement threshold, the processed soft-module image better satisfies the viewing requirement of the current viewing angle, and it replaces the original.
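The score-and-threshold decision described above can be sketched as follows (variable names are ours; the score is the original-image deformation parameter minus the processed-image deformation parameter):

```python
def matching_score(orig_param, proc_param):
    """Matching evaluation score: original-image deformation parameter
    minus processed-image deformation parameter. A positive score means
    the processed image deviates less from the intended content."""
    return orig_param - proc_param

def choose_playback(orig_param, proc_param, replace_threshold=0.0):
    """Return which soft-module image the XR shed should play."""
    if matching_score(orig_param, proc_param) > replace_threshold:
        return "processed"
    return "original"
```

With a deformation parameter of 4.0 for the original view and 1.0 for the processed view, the score is 3.0 and the processed image is played.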
Further, when the user is a single user, the selection logic is preset to:
if the specified annotation view exists, taking the specified annotation view as a target view;
otherwise, the viewing angle position is determined by the user positioning information of the individual user,
If the eye movement capturing module is arranged, the user visual angle information of the single user is used as a target visual angle;
if the eye movement capturing module is not arranged, the default head-up view angle is used as the target view angle.
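The single-user selection logic above can be sketched as follows (the default head-up direction is an assumed placeholder vector):

```python
# Hypothetical straight-ahead gaze used when no eye tracking exists.
DEFAULT_HEAD_UP_VIEW = (0.0, 0.0, -1.0)

def select_single_user_view(annotated_view=None, eye_view=None):
    """Preset selection logic for a single user: a designated annotated
    view wins; otherwise the eye-tracked view if the eye-movement
    capturing module is present; otherwise the default head-up view."""
    if annotated_view is not None:
        return annotated_view
    if eye_view is not None:
        return eye_view
    return DEFAULT_HEAD_UP_VIEW
```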
Still further, when there are multiple users, the preset selection logic is:
acquiring the user positioning information and user viewing-angle information of each user;
carrying out spatial position averaging on the user positioning information to obtain common positioning information;
carrying out spatial angle averaging on the user viewing-angle information to obtain common viewing-angle information;
and determining the viewing position from the common positioning information, and taking the common viewing-angle information as the target viewing angle.
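The multi-user averaging can be sketched as below. Averaging gaze directions via a normalized vector sum is our assumption for the "spatial angle average" in the text, since naive averaging of raw angles can wrap incorrectly:

```python
import numpy as np

def common_view(positions, gaze_dirs):
    """Collapse multiple users into one common viewpoint: arithmetic
    mean of head positions, and a normalized vector mean of gaze
    directions (a stand-in for the spatial angle average)."""
    pos = np.mean(np.asarray(positions, dtype=float), axis=0)
    d = np.sum(np.asarray(gaze_dirs, dtype=float), axis=0)
    return pos, d / np.linalg.norm(d)
```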
When there are multiple users and an eye-movement capturing module is provided, the soft module watched by each user can be obtained, and different soft modules can adopt different target viewing angles so as to satisfy the viewing demands of different users simultaneously. As shown in fig. 4, the user wearing the hoodie is looking at the top (zenith) of the shed, so the soft module at the top is adapted to that user's viewing angle; similarly, another user watches the soft module on the right side, which is adapted to that other user's viewing angle. The gaze selection logic is as follows:
Acquiring user positioning information and user visual angle information of each user;
determining a playing module watched by each user through the user positioning information and the user visual angle information;
dividing users watching the same soft module into the same group;
carrying out space position average value calculation through the user positioning information of the same group to obtain the public positioning information of the same group;
carrying out space angle average value calculation through the same group of user visual angle information to obtain the same group of public visual angle information;
Determining a viewing angle position through the same group of public positioning information, and taking the same group of public viewing angle information as a same group of target viewing angle;
and each soft module independently replaces the target view angle according to the corresponding same group of target view angles, and the steps 1 to 8 are respectively and circularly executed until the XR shed finishes playing and displaying.
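The gaze selection logic above can be sketched as follows; the shape of each user record (`module`, `pos`, `gaze` fields) is assumed for illustration:

```python
from collections import defaultdict

import numpy as np

def group_views_by_module(users):
    """Group users by the soft module they watch, then compute one
    common position and gaze direction per group (vector mean,
    normalized). Each user is a dict {"module", "pos", "gaze"}."""
    groups = defaultdict(list)
    for u in users:
        groups[u["module"]].append(u)
    targets = {}
    for mod, members in groups.items():
        pos = np.mean([m["pos"] for m in members], axis=0)
        d = np.sum([np.asarray(m["gaze"], dtype=float) for m in members], axis=0)
        targets[mod] = (pos, d / np.linalg.norm(d))
    return targets
```

Each soft module then runs the step 1 to 8 loop with its own group's target viewing angle.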
This scheme further subdivides the different soft modules and thus satisfies the distinct viewing requirements of different users, different view angles, and different viewing areas.
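The per-group selection logic above can be sketched in a few lines. This is a minimal illustration only: the names (`group_target_views`, `mean_vec`) and the 3-vector representation of poses are assumptions, and averaging view-angle vectors component-wise is a simplification of the spatial-angle average, whose exact formula the description does not give.

```python
from collections import defaultdict
from typing import Dict, List, Tuple

Vec3 = Tuple[float, float, float]

def mean_vec(vecs: List[Vec3]) -> Vec3:
    """Component-wise arithmetic mean of a list of 3-vectors."""
    n = len(vecs)
    return tuple(sum(v[i] for v in vecs) / n for i in range(3))

def group_target_views(
    positions: Dict[str, Vec3],   # user id -> user positioning information (assumed 3-vector)
    view_dirs: Dict[str, Vec3],   # user id -> user view-angle information (assumed 3-vector)
    watched: Dict[str, str],      # user id -> id of the soft module being watched
) -> Dict[str, Tuple[Vec3, Vec3]]:
    """For each soft module, return (public positioning, public view angle):
    the spatial averages over all users watching that module."""
    groups: Dict[str, List[str]] = defaultdict(list)
    for user, module in watched.items():
        groups[module].append(user)  # users watching the same module form one group
    targets: Dict[str, Tuple[Vec3, Vec3]] = {}
    for module, users in groups.items():
        pub_pos = mean_vec([positions[u] for u in users])
        pub_view = mean_vec([view_dirs[u] for u in users])
        targets[module] = (pub_pos, pub_view)  # becomes that module's target view angle
    return targets
```

Each soft module would then render its perspective-processed image from its own group's public positioning and view angle, rather than from a single shed-wide target view.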
The foregoing describes only some embodiments of the present application and is not intended to limit its scope; all equivalent structural changes made using the description and drawings of the present application, whether applied directly or indirectly in other related technical fields, likewise fall within the scope of the present application.

Claims (9)

CN202510476407.XA, priority 2025-04-16, filed 2025-04-16: Method and system for evaluating generated graph matching of XR shed, Active, CN120010674B (en)

Priority Applications (1)

Application Number  Priority Date  Filing Date  Title
CN202510476407.XA  2025-04-16  2025-04-16  Method and system for evaluating generated graph matching of XR shed


Publications (2)

Publication Number  Publication Date
CN120010674A (en)  2025-05-16
CN120010674B (en)  2025-07-15

Family ID: 95676511

Citations (4)

* Cited by examiner, † Cited by third party
Publication number  Priority date  Publication date  Assignee  Title
CN111386698A *  2017-12-06  2020-07-07  Sony Corporation  Display device
CN114915699A *  2022-05-13  2022-08-16  Shanghai Aochi Advertising Culture Group Co., Ltd.  Virtual studio simulation method and system based on UE system
CN115223455A *  2022-06-30  2022-10-21  Shenzhen Liantronics Co., Ltd.  Display system for augmented reality and display control method thereof
KR102616646B1 *  2022-12-15  2023-12-21  Glim Systems Co., Ltd.  Realtime dynamic image warping system for screen-based glasses-free VR and its verification method

Family Cites Families (14)

* Cited by examiner, † Cited by third party
Publication numberPriority datePublication dateAssigneeTitle
KR102111407B1 (en)*2013-08-192020-05-15엘지전자 주식회사Display apparatus and method for operating the same
KR102223792B1 (en)*2014-11-102021-03-04엘지디스플레이 주식회사Apparatus and method for correcting image distortion, curved display device including the same
KR20160061794A (en)*2014-11-242016-06-01삼성전자주식회사Dispaly apparatus and controlling method thereof
JP6768197B2 (en)*2014-12-172020-10-14ソニー株式会社 Information processing equipment and methods
KR102364165B1 (en)*2017-06-302022-02-16엘지디스플레이 주식회사Display device and driving method of the same
US10504421B1 (en)*2018-10-122019-12-10Dell Products L.P.System and method of compensation o a curved display
KR102166106B1 (en)*2018-11-212020-10-15스크린엑스 주식회사Method and system for generating multifaceted images using virtual camera
CN109862339A (en)*2019-02-192019-06-07浙江舜宇光学有限公司Reproducting method, device, system, storage medium and the processor of augmented reality
KR20220060926A (en)*2020-11-052022-05-12삼성전자주식회사Electronic apparatus and displaying method thereof
US11693558B2 (en)*2021-06-082023-07-04Samsung Electronics Co., Ltd.Method and apparatus for displaying content on display
JP2023011262A (en)*2021-07-122023-01-24トヨタ自動車株式会社 Virtual reality simulator and virtual reality simulation program
CN119948434A (en)*2022-09-232025-05-06苹果公司 User interface including context representation
GB202217329D0 (en)*2022-11-182023-01-04Productions LtdA method and a process for displaying video files for immersive experience without-real time rendering
CN119722746B (en)*2025-02-282025-05-09安徽元视界科技有限公司Automatic tracking system based on XR space




Legal Events

Date  Code  Title/Description
PB01  Publication
SE01  Entry into force of request for substantive examination
GR01  Patent grant
