Disclosure of Invention
The invention aims to overcome the defects of the prior art by providing a generated-map matching evaluation method and system for an XR shed, so as to solve the technical problem that content optimized for a fixed viewing angle cannot adapt to the different viewing demands of users.
To achieve the above object, the present application provides a generated-map matching evaluation system for an XR shed, comprising:
an XR shed, used for playing display content through playing modules, wherein the playing modules comprise hard modules and soft modules;
a user information capturing unit, used for capturing user information in a viewing area in front of the XR shed;
a virtual scene simulation unit, used for performing virtual scene simulation of the XR shed and the viewing area;
a display content generation unit, used for generating, from prepared display content, a plane generated map that the XR shed can play;
a playing module division unit, used for dividing the plane generated map by playing module to obtain a hard-module generated map and a soft-module generated map;
a perspective image processing unit, used for selecting a target viewing angle and performing perspective image processing on the soft-module generated map accordingly to obtain a corresponding soft-module processed map;
an image matching evaluation unit, used for performing image matching evaluation on the soft-module processed map to obtain a matching evaluation score;
and a playing module management unit, used for judging, according to the matching evaluation score, whether the XR shed plays and displays the soft-module processed map.
As a further solution, the user information capturing unit comprises a personnel identification module, a behavior identification module and a spatial positioning module, wherein the personnel identification module is used for identifying persons in the viewing area, the behavior identification module is used for identifying the behaviors of those persons and marking persons exhibiting viewing behavior as users, and the spatial positioning module is used for spatially positioning the head position of each user to obtain user positioning information.
As a further solution, the user information capturing unit further comprises an eye movement capturing module, wherein the eye movement capturing module is used for capturing the eye viewing angle of the user to obtain user viewing angle information.
As a further solution, the virtual scene simulation unit comprises a virtual space simulation module, a virtual XR shed simulation module and a virtual viewing angle simulation module, wherein the virtual space simulation module is used for providing a virtual simulation space comprising a virtual viewing area and a virtual XR shed area, the virtual XR shed simulation module is used for performing simulated XR shed playback in the virtual simulation space, and the virtual viewing angle simulation module is used for performing virtual viewing angle simulation according to the user information.
As a further solution, the virtual XR shed simulation module comprises a virtual XR shed model and virtual playing modules, wherein the virtual XR shed model is used for simulating the physical structure of the XR shed, and the virtual playing modules comprise virtual hard modules and virtual soft modules arranged in the virtual XR shed model according to their real installation positions.
On the other hand, the invention also provides a generated-map matching evaluation method for an XR shed, applied to the above generated-map matching evaluation system and comprising the following steps:
step 1, the display content generation unit generates, from prepared display content, a plane generated map that the XR shed can play;
step 2, the playing module division unit divides the plane generated map by playing module to obtain a corresponding hard-module generated map and soft-module generated map;
step 3, the user information capturing unit captures user information in the viewing area in front of the XR shed;
step 4, the perspective image processing unit selects a target viewing angle according to preset selection logic in combination with the user information;
step 5, the perspective image processing unit performs perspective image processing on the soft-module generated map according to the target viewing angle to obtain a corresponding soft-module processed map;
step 6, the virtual scene simulation unit performs virtual scene simulation of the XR shed and the viewing area;
step 7, the image matching evaluation unit performs image matching evaluation on the soft-module processed map within the virtual scene simulation to obtain a matching evaluation score;
step 8, the playing module management unit judges whether the matching evaluation score is larger than a replacement threshold;
if yes, the XR shed is controlled to play and display the hard-module generated map and the soft-module processed map;
if not, the XR shed is controlled to play and display the hard-module generated map and the soft-module generated map;
and step 9, steps 1 to 8 are executed cyclically until the XR shed finishes playing and displaying.
As a further solution, the image matching evaluation unit performs image matching evaluation by:
acquiring the target viewing angle together with the soft-module generated map and soft-module processed map to be evaluated;
controlling the virtual viewing angle simulation module to simulate the virtual viewing angle according to the target viewing angle;
playing the soft-module generated map in simulation through the virtual XR shed simulation module, and obtaining the virtual viewing angle image at that moment through the virtual viewing angle simulation module, to obtain an original viewing angle map;
playing the soft-module processed map in simulation through the virtual XR shed simulation module, and obtaining the virtual viewing angle image at that moment through the virtual viewing angle simulation module, to obtain a processed viewing angle map;
evaluating the degree of deformation of the original viewing angle map relative to the soft-module generated map to obtain an original-map deformation parameter;
evaluating the degree of deformation of the processed viewing angle map relative to the soft-module generated map to obtain a processed-map deformation parameter;
and obtaining the matching evaluation score of the corresponding soft-module processed map under the target viewing angle as the original-map deformation parameter minus the processed-map deformation parameter.
As a still further solution, when there is a single user, the preset selection logic is:
if a specified annotation viewing angle exists, taking the specified annotation viewing angle as the target viewing angle;
otherwise, determining the viewing angle position from the user positioning information of the single user, and
if an eye movement capturing module is provided, taking the user viewing angle information of the single user as the target viewing angle;
if no eye movement capturing module is provided, taking the default head-up viewing angle as the target viewing angle.
As a still further solution, when there are a plurality of users, the preset selection logic is:
acquiring the user positioning information and user viewing angle information of each user;
calculating the spatial position average of the user positioning information to obtain common positioning information;
calculating the spatial angle average of the user viewing angle information to obtain common viewing angle information;
and determining the viewing angle position from the common positioning information, and taking the common viewing angle information as the target viewing angle.
As a still further solution, when there are a plurality of users, gaze selection logic is also provided:
acquiring the user positioning information and user viewing angle information of each user;
determining the playing module gazed at by each user from the user positioning information and user viewing angle information;
dividing users gazing at the same soft module into the same group;
calculating the spatial position average of the group's user positioning information to obtain the group's common positioning information;
calculating the spatial angle average of the group's user viewing angle information to obtain the group's common viewing angle information;
determining the viewing angle position from the group's common positioning information, and taking the group's common viewing angle information as the group's target viewing angle;
and each soft module independently substitutes its corresponding group target viewing angle for the target viewing angle, with steps 1 to 8 executed cyclically for each soft module until the XR shed finishes playing and displaying.
Compared with the related art, the generated-map matching evaluation method and system for an XR shed have the following advantages:
According to the invention, user information is captured in view of the differences in user position and viewing angle, and targeted perspective image processing is performed, so that a soft-module processed map matching the user's viewing experience is obtained; the image matching evaluation unit then judges whether the original soft-module generated map or the soft-module processed map gives the better effect, and playback of the soft-module processed map is controlled accordingly, thereby meeting the viewing demand of the user at a specific viewing angle.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the embodiments will be described clearly and completely below with reference to the accompanying drawings. The described embodiments are some, but not all, of the embodiments of the present invention. The components of the embodiments generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations.
Example 1
Referring to fig. 1, an embodiment of the present application provides a generated-map matching evaluation system for an XR shed, comprising:
an XR shed, used for playing display content through playing modules, wherein the playing modules comprise hard modules and soft modules;
a user information capturing unit, used for capturing user information in a viewing area in front of the XR shed;
a virtual scene simulation unit, used for performing virtual scene simulation of the XR shed and the viewing area;
a display content generation unit, used for generating, from prepared display content, a plane generated map that the XR shed can play;
a playing module division unit, used for dividing the plane generated map by playing module to obtain a hard-module generated map and a soft-module generated map;
a perspective image processing unit, used for selecting a target viewing angle and performing perspective image processing on the soft-module generated map accordingly to obtain a corresponding soft-module processed map;
an image matching evaluation unit, used for performing image matching evaluation on the soft-module processed map to obtain a matching evaluation score;
and a playing module management unit, used for judging, according to the matching evaluation score, whether the XR shed plays and displays the soft-module processed map.
It should be noted that, as shown in fig. 2, a hard module is a playing module that is not bent or deformed: it can directly play the desired material without any deformation concern. A soft module is a playing module that is bent or deformed at an edge or corner position; because of this bending, a certain deformation occurs when image content is displayed on it directly, which degrades the user's viewing experience.
The existing XR shed does not distinguish soft modules from hard modules, and even where the bending of soft modules is handled, perspective transformation is performed for a fixed viewing angle to offset the influence of the deformation; a good viewing experience is therefore obtained only at that fixed viewing angle, and the demands of users at different viewing angles cannot be met.
Therefore, in this embodiment, user information is captured in view of the differences in user position and viewing angle, and targeted perspective image processing is performed, so that a soft-module processed map matching the user's viewing experience is obtained; the image matching evaluation unit then judges whether the original soft-module generated map or the soft-module processed map gives the better effect, and playback of the soft-module processed map is controlled accordingly, thereby meeting the viewing demand of the user at a specific viewing angle.
Specifically, the user information capturing unit comprises a personnel identification module, a behavior identification module and a spatial positioning module. As shown in fig. 4, persons in the viewing area are identified by the personnel identification module; the behavior identification module then identifies their behaviors and marks persons exhibiting viewing behavior as users (since some persons, such as passers-by and security personnel, are not watching); finally, the spatial positioning module spatially positions each user's head to obtain user positioning information.
In addition, in scenes with higher requirements, an eye movement capturing module is further provided. Through the eye movement capturing module, the user's viewing angle can be obtained accurately, and the playing module the user is gazing at can be located (the red square in fig. 4 is a soft module located according to the user's viewing angle), so that more targeted perspective transformation can be performed.
The prepared display content may be 3D scene material or 2D static material prepared in advance, or may be AI-generated material such as text-to-image output; the display content is not limited here.
The virtual scene simulation unit is mainly used for simulating the real scene, so that the desired information can be extracted from the simulated scene. It mainly comprises a virtual space simulation module, a virtual XR shed simulation module and a virtual viewing angle simulation module, wherein the virtual space simulation module is used for providing a virtual simulation space comprising a virtual viewing area and a virtual XR shed area, the virtual XR shed simulation module is used for performing simulated XR shed playback in the virtual simulation space, and the virtual viewing angle simulation module is used for performing virtual viewing angle simulation according to the user information.
As shown in fig. 3, the virtual XR shed simulation module comprises a virtual XR shed model and virtual playing modules, wherein the virtual XR shed model is used for simulating the physical structure of the XR shed, and the virtual playing modules comprise virtual hard modules and virtual soft modules arranged in the virtual XR shed model according to their real installation positions.
Through the cooperation of the virtual XR shed model and the virtual playing modules, the display effect of an image can be obtained accurately in the virtual scene, providing a data basis for subsequent judgment.
Example 2
Referring to fig. 5, on the basis of embodiment 1, this embodiment provides a generated-map matching evaluation method for an XR shed, comprising the following steps:
step 1, the display content generation unit generates, from prepared display content, a plane generated map that the XR shed can play;
step 2, the playing module division unit divides the plane generated map by playing module to obtain a corresponding hard-module generated map and soft-module generated map;
step 3, the user information capturing unit captures user information in the viewing area in front of the XR shed;
step 4, the perspective image processing unit selects a target viewing angle according to preset selection logic in combination with the user information;
step 5, the perspective image processing unit performs perspective image processing on the soft-module generated map according to the target viewing angle to obtain a corresponding soft-module processed map;
step 6, the virtual scene simulation unit performs virtual scene simulation of the XR shed and the viewing area;
step 7, the image matching evaluation unit performs image matching evaluation on the soft-module processed map within the virtual scene simulation to obtain a matching evaluation score;
step 8, the playing module management unit judges whether the matching evaluation score is larger than a replacement threshold;
if yes, the XR shed is controlled to play and display the hard-module generated map and the soft-module processed map;
if not, the XR shed is controlled to play and display the hard-module generated map and the soft-module generated map;
and step 9, steps 1 to 8 are executed cyclically until the XR shed finishes playing and displaying.
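The perspective image processing in step 5 is, in essence, a projective (homography) transformation chosen for the target viewing angle. The source does not specify an implementation, so the following minimal Python sketch uses a hypothetical 3x3 matrix `H` purely to illustrate how pixel positions of a soft-module map are remapped:

```python
def apply_homography(H, x, y):
    # Map a pixel (x, y) through the 3x3 homography H using
    # homogeneous coordinates, then divide by the projective factor w.
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return ((H[0][0] * x + H[0][1] * y + H[0][2]) / w,
            (H[1][0] * x + H[1][1] * y + H[1][2]) / w)

# Hypothetical homography approximating a slightly tilted viewing angle;
# in practice it would be derived from the target viewing angle and the
# soft module's bending geometry.
H = [[1.0, 0.2, 0.0],
     [0.0, 1.0, 0.0],
     [0.0, 0.001, 1.0]]

corners = [(0, 0), (100, 0), (100, 100), (0, 100)]
warped = [apply_homography(H, x, y) for x, y in corners]
```

An identity matrix leaves every pixel in place; the `H` above shears and foreshortens the lower corners, which is the kind of pre-distortion that can cancel the apparent bending of a soft module from the chosen viewing angle.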
Furthermore, the image matching evaluation unit needs to evaluate the soft-module generated map and the soft-module processed map and determine which better matches the user's viewing demand, that is, which map, seen from the user's viewing angle, is closer to the original content. It specifically performs the following steps:
acquiring the target viewing angle together with the soft-module generated map and soft-module processed map to be evaluated;
controlling the virtual viewing angle simulation module to simulate the virtual viewing angle according to the target viewing angle;
playing the soft-module generated map in simulation through the virtual XR shed simulation module, and obtaining the virtual viewing angle image at that moment through the virtual viewing angle simulation module, to obtain an original viewing angle map, which is the image seen from the user's viewing angle without processing;
then playing the soft-module processed map in simulation through the virtual XR shed simulation module, and obtaining the virtual viewing angle image at that moment through the virtual viewing angle simulation module, to obtain a processed viewing angle map, which is the image seen from the user's viewing angle with processing applied;
evaluating the degree of deformation of the original viewing angle map relative to the soft-module generated map to obtain an original-map deformation parameter;
and evaluating the degree of deformation of the processed viewing angle map relative to the soft-module generated map to obtain a processed-map deformation parameter.
The measure of the degree of deformation can be chosen according to the point of emphasis.
If similarity of content is the concern, pixel-based metrics directly compare the pixel value differences of the images before and after deformation, and the following parameters can be adopted:
Mean squared error (MSE)
The mean of the squared pixel value differences between the two images is calculated; the smaller the value, the smaller the difference after deformation.
Mean absolute error (MAE)
The mean of the absolute pixel value differences is calculated; it is more robust to outliers than MSE.
Peak signal-to-noise ratio (PSNR)
A numerical indicator based on MSE, commonly used to evaluate image compression or reconstruction quality.
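For illustration, the three pixel-based metrics can be computed over flattened grayscale pixel lists as follows (a minimal sketch; real implementations operate on full image arrays):

```python
import math

def mse(a, b):
    # Mean squared error over corresponding pixel values.
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def mae(a, b):
    # Mean absolute error; less sensitive to outliers than MSE.
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def psnr(a, b, peak=255.0):
    # Peak signal-to-noise ratio in dB, derived from MSE;
    # identical images give infinite PSNR.
    m = mse(a, b)
    return float("inf") if m == 0 else 10.0 * math.log10(peak ** 2 / m)

original = [10, 20, 30, 40]   # hypothetical pixel values
deformed = [12, 18, 30, 44]
```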
In scenes that pay more attention to structural relevance (such as architectural displays), the degree to which the structural information of the image content is retained needs to be measured, and the following parameters can be adopted:
Structural similarity index (SSIM)
Luminance, contrast and structural information are combined, which better matches human visual perception.
Multi-scale SSIM (MS-SSIM)
SSIM is calculated at multiple scales, enhancing robustness to complex deformation.
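A single-window (global) variant of SSIM can be sketched as below; the full index averages this statistic over local sliding windows, so this is a simplification for illustration only:

```python
import statistics

def global_ssim(a, b, peak=255.0):
    # Global SSIM: one window covering the whole image.
    # c1 and c2 are the standard stabilizing constants.
    c1, c2 = (0.01 * peak) ** 2, (0.03 * peak) ** 2
    mu_a, mu_b = statistics.fmean(a), statistics.fmean(b)
    var_a, var_b = statistics.pvariance(a), statistics.pvariance(b)
    cov = statistics.fmean((x - mu_a) * (y - mu_b) for x, y in zip(a, b))
    return (((2 * mu_a * mu_b + c1) * (2 * cov + c2)) /
            ((mu_a ** 2 + mu_b ** 2 + c1) * (var_a + var_b + c2)))
```

An image compared with itself scores 1.0; any luminance or structural difference pulls the score below 1.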
In scenes focusing on geometric relationships (such as displays of vehicle mechanics simulation diagrams), metrics suited to analyzing the geometric deformation of the image (such as affine transformation and elastic deformation) are appropriate, for example:
Displacement field
The displacement vector of each pixel is described; commonly used in non-rigid registration (e.g., medical imaging).
Jacobian determinant
The local volume change of a region is analyzed (a determinant >1 indicates expansion, <1 indicates contraction).
Strain tensor
The degree of stretching or shearing of the local deformation is described (e.g., the Green-Lagrange strain in engineering mechanics).
Curvature
Used for analyzing the change in the degree of bending of a curved surface or contour.
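As an illustration of the geometric metrics, the Jacobian determinant of a displacement field can be estimated with central finite differences; this sketch assumes a deformation map phi(x) = x + u(x) sampled on a unit grid:

```python
def jacobian_det(ux, uy, i, j):
    # ux[i][j], uy[i][j]: x- and y-displacement at row i, column j.
    # The deformation is phi(x) = x + u(x), so J = I + grad(u); the
    # gradient is estimated with central differences on a unit grid.
    dux_dx = (ux[i][j + 1] - ux[i][j - 1]) / 2.0
    dux_dy = (ux[i + 1][j] - ux[i - 1][j]) / 2.0
    duy_dx = (uy[i][j + 1] - uy[i][j - 1]) / 2.0
    duy_dy = (uy[i + 1][j] - uy[i - 1][j]) / 2.0
    return (1.0 + dux_dx) * (1.0 + duy_dy) - dux_dy * duy_dx

# A uniform 10% expansion: u(x, y) = (0.1 * x, 0.1 * y).
n = 5
ux = [[0.1 * j for j in range(n)] for i in range(n)]
uy = [[0.1 * i for j in range(n)] for i in range(n)]
```

For this field the determinant is 1.1 * 1.1 = 1.21 everywhere in the grid interior, correctly reporting expansion (>1); a zero displacement field gives exactly 1.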
After suitable parameters are selected and measured, the matching evaluation score of the corresponding soft-module processed map under the target viewing angle is obtained from the original-map deformation parameter and the processed-map deformation parameter. If the (positive) score exceeds the replacement threshold, the soft-module processed map better meets the viewing demand at the current viewing angle and is substituted for the generated map.
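The score-and-threshold decision described above reduces to a subtraction and a comparison; a minimal sketch (function and parameter names are illustrative, not from the source):

```python
def matching_score(original_deform, processed_deform):
    # Deformation parameters are "smaller is better", so a positive
    # score means the processed map deforms less than the original
    # as seen from the target viewing angle.
    return original_deform - processed_deform

def use_processed_map(score, replacement_threshold):
    # The processed map replaces the generated map only when the
    # improvement exceeds the replacement threshold.
    return score > replacement_threshold
```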
Further, when there is a single user, the preset selection logic is:
if a specified annotation viewing angle exists, taking the specified annotation viewing angle as the target viewing angle;
otherwise, determining the viewing angle position from the user positioning information of the single user, and
if an eye movement capturing module is provided, taking the user viewing angle information of the single user as the target viewing angle;
if no eye movement capturing module is provided, taking the default head-up viewing angle as the target viewing angle.
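The single-user selection logic is a three-way priority; a minimal sketch with hypothetical argument names and placeholder string values:

```python
def select_target_view(annotated_view=None, has_eye_tracker=False,
                       eye_view=None, default_view="head-up"):
    # Priority: explicit annotated viewing angle, then the eye-tracked
    # viewing angle (if the module is installed), then the default
    # head-up viewing angle.
    if annotated_view is not None:
        return annotated_view
    if has_eye_tracker and eye_view is not None:
        return eye_view
    return default_view
```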
Still further, when there are a plurality of users, the preset selection logic is:
acquiring the user positioning information and user viewing angle information of each user;
calculating the spatial position average of the user positioning information to obtain common positioning information;
calculating the spatial angle average of the user viewing angle information to obtain common viewing angle information;
and determining the viewing angle position from the common positioning information, and taking the common viewing angle information as the target viewing angle.
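Averaging the positions and viewing angles can be sketched as a component-wise mean; the coordinate conventions below are assumptions for illustration, and note that a naive mean of angles is only safe away from the 360-degree wrap-around:

```python
def mean_vector(vectors):
    # Component-wise arithmetic mean of equal-length tuples.
    n = len(vectors)
    return tuple(sum(v[k] for v in vectors) / n
                 for k in range(len(vectors[0])))

# Hypothetical head positions (x, y, z) in metres and viewing
# angles (yaw, pitch) in degrees for two users.
positions = [(0.0, 1.6, 2.0), (2.0, 1.7, 2.0)]
angles = [(10.0, 0.0), (20.0, 4.0)]

common_position = mean_vector(positions)
common_angle = mean_vector(angles)
```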
When there are a plurality of users and an eye movement capturing module is provided, the soft module each user is gazing at can be acquired, and different soft modules can adopt different target viewing angles so as to meet the viewing demands of different users simultaneously. As shown in fig. 4, the user wearing the hooded top (user 1) is gazing at the ceiling of the shed, so the corresponding ceiling soft module is adapted to that user's viewing angle; similarly, another user is gazing at a soft module on the right side, which is adapted to that user's viewing angle. The gaze selection logic is as follows:
acquiring the user positioning information and user viewing angle information of each user;
determining the playing module gazed at by each user from the user positioning information and user viewing angle information;
dividing users gazing at the same soft module into the same group;
calculating the spatial position average of the group's user positioning information to obtain the group's common positioning information;
calculating the spatial angle average of the group's user viewing angle information to obtain the group's common viewing angle information;
determining the viewing angle position from the group's common positioning information, and taking the group's common viewing angle information as the group's target viewing angle;
and each soft module independently substitutes its corresponding group target viewing angle for the target viewing angle, with steps 1 to 8 executed cyclically for each soft module until the XR shed finishes playing and displaying.
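The grouping described above can be sketched as follows, assuming each user record already carries the identifier of the soft module being gazed at (all field names and values are illustrative):

```python
from collections import defaultdict

def group_target_views(users):
    # users: list of (module_id, position, viewing_angle) records.
    # Returns, per soft module, the group's mean position and angle.
    groups = defaultdict(list)
    for module_id, pos, ang in users:
        groups[module_id].append((pos, ang))
    targets = {}
    for module_id, members in groups.items():
        n = len(members)
        avg_pos = tuple(sum(p[k] for p, _ in members) / n for k in range(3))
        avg_ang = tuple(sum(a[k] for _, a in members) / n for k in range(2))
        targets[module_id] = (avg_pos, avg_ang)
    return targets

users = [
    ("ceiling", (0.0, 1.5, 2.0), (0.0, 60.0)),
    ("ceiling", (1.0, 2.0, 2.0), (0.0, 70.0)),
    ("right",   (3.0, 1.7, 1.0), (45.0, 0.0)),
]
targets = group_target_views(users)
```

Each key of `targets` then drives one independent pass of steps 1 to 8 for its soft module.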
This scheme further subdivides the soft modules, so that the different viewing demands of different users, different viewing angles and different viewing areas can all be met.
The foregoing is only a part of the embodiments of the present application and is not intended to limit its scope; all equivalent structural changes made using the description and drawings of the present application, whether applied directly or indirectly in other related technical fields, are likewise included in the scope of the present application.