Image rendering method and device, head-mounted display equipment and readable storage medium
Info

Publication number: CN115761089A
Application number: CN202211437447.6A
Authority: CN (China)
Prior art keywords: image, rendering, current, user, eyeball
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Other languages: Chinese (zh)
Inventor: 邱绪东
Current assignee: Goertek Technology Co., Ltd. (the listed assignee may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
Original assignee: Goertek Technology Co., Ltd.
Application filed by Goertek Technology Co., Ltd.
Priority to CN202211437447.6A
Publication of CN115761089A

Abstract

The application discloses an image rendering method and apparatus, a head-mounted display device, and a readable storage medium, wherein the image rendering method comprises the following steps: acquiring a collected current view field environment image; dynamically detecting head motion posture information of a user, and determining an intended observation angle of the user according to the head motion posture information; rendering the image within the intended observation angle in the current view field environment image at a first rendering resolution, performing rendering-quality reduction processing on the first rendering resolution to obtain a second rendering resolution, and rendering the image outside the intended observation angle in the current view field environment image at the second rendering resolution. The application can reduce the image-rendering pressure on the head-mounted display device.

Description

Image rendering method and device, head-mounted display equipment and readable storage medium
Technical Field
The application relates to the technical field of wearable equipment, in particular to an image rendering method and device, head-mounted display equipment and a readable storage medium.
Background
As an emerging technology, Extended Reality (XR) is gradually coming into public view and is being applied and popularized in various industries. Extended reality specifically includes Virtual Reality (VR), Augmented Reality (AR), Mixed Reality (MR), and the like.
With the development of extended reality technology, resolution and refresh rate continue to improve. This means that, when an image is transmitted, the amount of signal output per frame grows and the demand on transmission bandwidth rises, which greatly challenges both the rendering capability of the system and the transmission capability from the system side to the display side. At present, when facing ultra-high-resolution extended reality application images, the image-rendering pressure on an extended reality device is high, so the frame rate of the displayed picture is insufficient, the picture stutters, and the user's requirement for a smooth picture cannot be met.
Disclosure of Invention
The application mainly aims to provide an image rendering method and apparatus, a head-mounted display device, and a readable storage medium, so as to alleviate the high image-rendering pressure on extended reality devices.
In order to achieve the above object, the present application provides an image rendering method, where the image rendering method is applied to a head-mounted display device, and the method includes:
acquiring a collected current view field environment image;
dynamically detecting head movement posture information of a user, and determining an intention observation visual angle of the user according to the head movement posture information;
rendering the image within the intended observation angle in the current view field environment image at a first rendering resolution, performing rendering-quality reduction processing on the first rendering resolution to obtain a second rendering resolution, and rendering the image outside the intended observation angle in the current view field environment image at the second rendering resolution.
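For illustration, a minimal sketch of this two-resolution scheme in Python with OpenCV follows, assuming the current view field environment image is available as a NumPy array and the intended observation angle has already been mapped to a rectangular region of the frame; the name render_foveated and the quality_drop factor are illustrative assumptions, not part of the disclosure.

```python
# Minimal two-resolution rendering sketch (hypothetical helper).
# roi = (x, y, w, h) stands in for the intended observation angle.
import cv2
import numpy as np

def render_foveated(frame: np.ndarray, roi: tuple, quality_drop: int = 4) -> np.ndarray:
    """Keep the ROI at the first (full) rendering resolution and
    approximate the second, reduced resolution for the periphery
    by downscaling and re-upscaling the rest of the frame."""
    x, y, w, h = roi
    low = cv2.resize(frame, None, fx=1.0 / quality_drop, fy=1.0 / quality_drop)
    out = cv2.resize(low, (frame.shape[1], frame.shape[0]),
                     interpolation=cv2.INTER_LINEAR)   # blurred periphery
    out[y:y + h, x:x + w] = frame[y:y + h, x:x + w]    # crisp attended region
    return out
```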
Optionally, the step of determining the intended observation angle of view of the user according to the head motion pose information comprises:
determining the head steering trend of the user according to the head movement posture information;
and detecting an eyeball observation point of the user in the current view field environment image, and determining an intended observation view angle of the user according to the eyeball observation point and the head turning trend.
Optionally, the step of determining the intended viewing angle of the user from the eyeball observation point and the head turning tendency comprises:
determining the current gazing direction according to the eyeball observation point;
and inquiring to obtain the observation visual angle mapped by the current gazing direction and the head turning trend from a preset observation visual angle mapping table, and using the mapped observation visual angle as an intention observation visual angle of the user.
Optionally, the step of detecting an eyeball observation point of the user in the current view field environment image includes:
acquiring a current eyeball image of a user, determining an eyeball model with the highest matching degree with the current eyeball image, and taking the eyeball model with the highest matching degree as a current actual eyeball model;
and inquiring to obtain an eyeball observation point mapped by the current actual eyeball model from a preset eyeball model mapping database, and taking the mapped eyeball observation point as the eyeball observation point of the user in the current view field environment image.
Optionally, the step of detecting an eyeball observation point of the user in the current view field environment image further includes:
acquiring a current eyeball image of a user, and carrying out gray processing on the current eyeball image;
determining a pupil area image according to the current eyeball image after the graying processing, and carrying out binarization processing on the pupil area image;
performing edge detection on the pupil area image after binarization processing to obtain pupil edge points, and performing ellipse fitting on the pupil edge points to obtain the current pupil center;
and determining an eyeball observation point of the user in the current view field environment image according to the current pupil center.
Optionally, the step of determining an eyeball observation point of the user in the current field-of-view environment image according to the current pupil center includes:
inquiring to obtain an eyeball observation point mapped by the current pupil center from a pre-calibrated pupil center mapping data table;
and taking the mapped eyeball observation point as the eyeball observation point of the user in the current view field environment image.
Optionally, the step of determining the intended observation angle of view of the user according to the head movement posture information further comprises:
performing brightness display control on the image in the intended observation visual angle in the current visual field environment image by first region backlight brightness;
performing brightness reduction processing on the first area backlight brightness to obtain second area backlight brightness;
and performing brightness display control on the images outside the intended observation visual angle in the current visual field environment image by using the backlight brightness of a second area.
Further, to achieve the above object, the present application provides an image rendering apparatus applied to a head-mounted display device, the image rendering apparatus including:
the acquisition module is used for acquiring the acquired current view field environment image;
the detection module is used for dynamically detecting head movement posture information of a user and determining an intention observation visual angle of the user according to the head movement posture information;
and the rendering module is used for rendering the image within the intended observation angle in the current view field environment image at a first rendering resolution, performing rendering-quality reduction processing on the first rendering resolution to obtain a second rendering resolution, and rendering the image outside the intended observation angle in the current view field environment image at the second rendering resolution.
The present application further provides a head-mounted display device. The head-mounted display device is a physical device and includes: a memory, a processor, and a program of the image rendering method stored on the memory and executable on the processor; when executed by the processor, the program implements the steps of the image rendering method described above.
The present application also provides a readable storage medium, which is a computer readable storage medium having a program for implementing an image rendering method stored thereon, where the program for implementing the image rendering method is executed by a processor to implement the steps of the image rendering method as described above.
The present application also provides a computer program product comprising a computer program which, when executed by a processor, implements the steps of the image rendering method as described above.
In conventional image rendering technology, the image displayed on the entire target screen is generally rendered at a relatively high rendering quality to meet the user's viewing requirements. However, if the area the user's eyes mainly attend to covers only part of the screen, the prior art still renders the image displayed outside that attention area at the same high quality, which wastes rendering resources.
According to the technical scheme of the application, a collected current view field environment image is acquired; head motion posture information of a user is dynamically detected, and an intended observation angle of the user is determined according to the head motion posture information; the image within the intended observation angle in the current view field environment image is rendered at a first rendering resolution, and the image outside the intended observation angle is rendered at a second rendering resolution, where the first rendering resolution is greater than the second. In this way, the rendering resolution is high in the area the eyes attend to and low in the area they do not (the peripheral field of view). Without affecting, or while even improving, the user experience, computing resources for processing the image are saved, the image is rendered more reasonably, the waste of rendering resources caused by rendering images outside the gaze area at high quality is avoided, and redundant rendering in the image-rendering process is reduced as much as possible. This in turn reduces the image-rendering pressure on the extended reality device and lowers the demand that high-resolution, high-frame-rate content places on its rendering capability while still meeting the user's viewing requirements.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present application and together with the description, serve to explain the principles of the application.
To more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below; it is obvious that those skilled in the art can obtain other drawings from these drawings without inventive effort.
FIG. 1 is a schematic flowchart illustrating a first embodiment of an image rendering method according to the present application;
FIG. 2 is a flowchart illustrating a second embodiment of an image rendering method according to the present application;
FIG. 3 is a flowchart illustrating a third embodiment of an image rendering method according to the present application;
FIG. 4 is a flowchart illustrating a fourth embodiment of an image rendering method according to the present application;
FIG. 5 is a schematic diagram illustrating head pose information of a user wearing a head-mounted display device according to an embodiment of the present application;
FIG. 6 is a schematic view of a scene for identifying an intended viewing angle of a user according to an embodiment of the present application;
FIG. 7 is a diagram illustrating a current gaze direction of a user in an embodiment of the present application;
fig. 8 is a schematic device structure diagram of a hardware operating environment related to a head-mounted display device in an embodiment of the present application.
The objectives, features, and advantages of the present application will be further described with reference to the accompanying drawings.
Detailed Description
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in detail below. It is to be understood that the described embodiments are merely exemplary of the invention, and not restrictive of the full scope of the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In this embodiment, the head-mounted display device of the present application may be, for example, a Mixed Reality (MR) device (e.g., MR glasses or an MR helmet), an Augmented Reality (AR) device (e.g., AR glasses or an AR helmet), a Virtual Reality (VR) device (e.g., VR glasses or a VR helmet), an Extended Reality (XR) device (e.g., XR glasses or an XR helmet), or some combination thereof.
Example one
At present, when facing ultra-high-resolution extended reality application images, the image-rendering pressure on an extended reality device is high, so the frame rate of the displayed picture is insufficient, the picture stutters, and the user's requirement for a smooth picture cannot be met.
Based on this, referring to fig. 1, fig. 1 is a schematic flowchart illustrating a first embodiment of an image rendering method according to the present application, in which the image rendering method is applied to a head-mounted display device, and the method includes:
step S10, acquiring an acquired current view field environment image;
in this embodiment, the head-mounted display device is worn on the head of a user.
It should be noted that the current view field environment image refers to the largest-range XR content image that the user can see in the current head pose. The current head pose may include the spatial position and the angle of the head, where the angle may include a pitch angle (pitch) rotated about the X-axis, a yaw angle (yaw) rotated about the Y-axis, and a roll angle (roll) rotated about the Z-axis, as shown in fig. 5.
It should be noted that, in an embodiment, the head posture information (i.e. the current head posture) of the user may be dynamically detected through an inertial sensor and/or a camera mounted on the head-mounted display device itself, where the camera may be one or more of a Time of Flight (TOF) camera, an infrared camera, a millimeter wave camera and an ultrasonic camera. In another embodiment, the dynamic detection of the head posture information of the user can be completed by sending the head posture information of the user to the head-mounted display device in real time through other devices in communication connection with the head-mounted display device. For example, a camera installed in an activity place where the head-mounted display device is applied tracks and locates the head-mounted display device (or the head of the user), so as to obtain head posture information of the user, and sends the head posture information to the head-mounted display device in real time, so that the head-mounted display device obtains the dynamically detected head posture information in real time.
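As a rough illustration of the inertial-sensor route, the sketch below integrates gyroscope angular rates into the pitch, yaw, and roll angles of fig. 5. It is a simplification under the assumption of drift-free rates (a real implementation would fuse accelerometer or magnetometer readings to correct drift), and the class name HeadPoseTracker is hypothetical.

```python
class HeadPoseTracker:
    """Integrate gyroscope rates (rad/s) into pitch/yaw/roll (rad).
    Drift correction via sensor fusion is omitted in this sketch."""

    def __init__(self) -> None:
        self.pitch = 0.0  # rotation about the X axis
        self.yaw = 0.0    # rotation about the Y axis
        self.roll = 0.0   # rotation about the Z axis

    def update(self, gx: float, gy: float, gz: float, dt: float):
        self.pitch += gx * dt
        self.yaw += gy * dt
        self.roll += gz * dt
        return self.pitch, self.yaw, self.roll
```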
In the present embodiment, as known to those skilled in the art, in extended reality technology, in order to simulate the sensory changes of human eyes in the real world and improve the user's immersion in extended reality content, the field-of-view image that the user can see often differs under different head posture information. The current view field environment image is the field-of-view image that can be seen in the current head pose (different head posture information corresponds to different view field environment images). That is, in the current head pose (i.e., at a particular eye position), the XR content image of the maximum extent that the user can see is the user's current view field environment image. As those skilled in the art will readily understand, during content display on the head-mounted display device, the user's head posture information may change in real time, and the head-mounted display device may collect or acquire the head posture information in real time to update the current view field environment image.
Step S20, dynamically detecting head movement posture information of a user, and determining an intention observation visual angle of the user according to the head movement posture information;
in the present embodiment, the head movement posture information may include a displacement value and an angle change value of the head, wherein the angle change value may include a pitch angle (pitch) rotated based on the X-axis, a yaw angle (yaw) rotated based on the Y-axis, and an angle change value of a roll angle (roll) rotated based on the Z-axis, which may be referred to fig. 5.
In one embodiment, the head motion pose information of the user can be dynamically detected through an inertial sensor and/or a camera mounted on the head-mounted display device. In another embodiment, the dynamic detection of the head movement posture information of the user can be completed by transmitting the head movement posture information of the user to the head-mounted display device in real time through other devices in communication connection with the head-mounted display device. For example, a camera installed in an activity place where the head-mounted display device is applied tracks and positions the head-mounted display device (or the head of the user), so as to obtain head motion posture information of the user, and sends the head motion posture information to the head-mounted display device in real time, so that the head-mounted display device obtains the dynamically detected head motion posture information in real time.
It is known that users often make different head movements for different intended viewing angles, for example, turning the head to the right often represents that the user wants to see the right picture, and turning the head to the left often represents that the user wants to see the left picture.
It is readily understood that, in the XR content image of the greatest extent that the user can see in the current head pose (i.e., the current view field environment image), not all regions are of interest to the user: there are regions the user's eyes attend to and regions they do not. In general, the region matching the movement trend of the head motion posture often represents the region the user intends to observe (a user who wants to see the right-side picture often turns the head to the right), while a region that does not match that movement trend often represents a region the user does not intend to observe.
Therefore, in the present embodiment, by detecting the head movement posture information of the user, based on the head movement posture information, it is determined that the region in the current view field environment image that matches the movement trend corresponding to the head movement posture is a region where eyes are concerned (i.e., a region corresponding to an intended observation angle), and the region in the current view field environment image that does not match the movement trend corresponding to the head movement posture is a region where eyes are not concerned (i.e., a region corresponding to an unintended observation angle).
And step S30, rendering the image in the intended observation angle in the current view field environment image at a first rendering resolution, performing rendering quality reduction processing on the first rendering resolution to obtain a second rendering resolution, and rendering the image outside the intended observation angle in the current view field environment image at the second rendering resolution.
In this embodiment, the image within the intended observation angle is the main area viewed by the user and should be rendered at a higher rendering resolution. The adjustable rendering parameters include, but are not limited to, image color, resolution, pixels, lighting effects, and shadow effects.
In this embodiment, the image within the intended observation angle in the current view field environment image may be rendered first, and then the image outside the intended observation angle in the current view field environment image may be rendered.
When studying the user's viewing experience of images on a head-mounted display device, it is found that rendering an image of the current scene generally proceeds as follows: the CPU (Central Processing Unit) moves the image data materials required for rendering the current scene, such as triangles and material maps, to the GPU (Graphics Processing Unit); the GPU renders these materials through the rendering pipeline to obtain an initial image; the initial image is then further processed, by coloring and the like, through image-rendering post-processing techniques; and finally an image that can be displayed to the user in the current extended reality scene is obtained.
In conventional image rendering technology, the image displayed on the entire target screen is generally rendered at a relatively high rendering quality to meet the user's viewing requirements. However, if the area the user's eyes mainly attend to covers only part of the screen, the prior art still renders the image displayed outside that attention area at the same high quality, which wastes rendering resources.
With the rise of head-mounted display devices, people increasingly use VR/AR products. These extended reality mobile terminals must perform a large amount of image-rendering computation, so their power consumption is high and battery life suffers greatly. If it can be effectively identified that the user is not paying attention to part of the content, rendering of that part can be reduced. For example, if it is found that the user is not paying attention to region image A of the display screen, the quality of image-rendering operations on region image A, such as dead-pixel repair, noise elimination, and color interpolation, can be reduced, or these rendering effects can even be cancelled for region image A. This lowers the rendering resolution of region image A, reduces the image-processing work, and achieves the effect of reducing power consumption.
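One plausible encoding of such per-region quality reduction is a pair of parameter profiles, as sketched below; the profile keys and the quality_profile helper are illustrative names, since the patent does not prescribe a data structure.

```python
# Hypothetical per-region post-processing profiles: expensive passes
# are weakened or skipped for a region the user is not attending to.
HIGH_QUALITY = {"dead_pixel_repair": True, "denoise": True,
                "color_interpolation": "bicubic", "shadow_effects": True}
LOW_QUALITY = {"dead_pixel_repair": False, "denoise": False,
               "color_interpolation": "nearest", "shadow_effects": False}

def quality_profile(region_is_attended: bool) -> dict:
    """Pick the rendering-parameter profile for a screen region."""
    return HIGH_QUALITY if region_is_attended else LOW_QUALITY
```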
Therefore, according to the technical scheme of this embodiment, the collected current view field environment image is acquired; head motion posture information of the user is dynamically detected, and an intended observation angle of the user is determined according to it; the image within the intended observation angle in the current view field environment image is rendered at a first rendering resolution, and the image outside the intended observation angle at a second rendering resolution, where the first rendering resolution is greater than the second. The rendering resolution is thus high in the area the eyes attend to and low in the area they do not (the peripheral field of view). Without affecting, or while even improving, the user experience, computing resources for processing the image are saved, the image is rendered more reasonably, the waste of rendering resources caused by rendering images outside the gaze area at high quality is avoided, and redundant rendering in the image-rendering process is minimized. This reduces the image-rendering pressure on the extended reality device and lowers the demand that high-resolution, high-frame-rate content places on its rendering capability while still meeting the user's requirement for viewing high-resolution content smoothly.
As an example, referring to fig. 2, the step of determining the intended observation angle of view of the user according to the head motion pose information includes:
step S21, determining the head turning trend of the user according to the head movement posture information;
the head turning tendency refers to turning tendency of the head to turn left, turn right, turn up, turn down, turn up left, turn down left, turn up right, or turn down right.
And S22, detecting an eyeball observation point of the user in the current view field environment image, and determining an intended observation angle of view of the user according to the eyeball observation point and the head turning trend.
In this embodiment, an eye image of the user may be collected by an eye detection device mounted on the head-mounted display device, and computation is performed based on eye feature information extracted from the eye image to obtain the coordinates of the fixation point at which the user's eyes look at the display screen, thereby obtaining the eyeball observation point in the current view field environment image. The eye detection device may be a Micro-Electro-Mechanical System (MEMS) that includes an infrared scanning mirror, an infrared light source, and an infrared receiver. Alternatively, the eye detection device may be a capacitive sensor disposed in the eye region of the user; it detects eye movement by means of the capacitance between the eye and the capacitive plate of the sensor, determines the user's current eye position information, and then determines the user's eyeball observation point in the current view field environment image from that information. In addition, the eye detection device may also be a myoelectric-current detector connected to electrodes placed at the user's nose bridge, forehead, ears, and earlobes: the electrodes collect myoelectric signals from these parts, eye movement is detected from the pattern of the detected signals, the user's current eye position information is determined, and the eyeball observation point in the current view field environment image is then determined from it.
To facilitate understanding, an example is given. When a user turns the head to the right, the user most probably wants to see the right-side picture. Combining this with gaze-point identification: if the gaze point (i.e., the eyeball observation point) identified by the eye detection device is also located on the right-side picture (the gaze point is on the right side of the display centerline), this often means the user wants to see the right-side picture, and the intended observation angle is the observation angle corresponding to the right-side picture, as shown in fig. 6; the picture in the leftmost 1/a equal-division area of the frame can then be rendered at a lower resolution, so as to reduce the image-rendering load/pressure on the head-mounted display device. For another example, if the user turns the head to the left, the user most probably wants to see the left-side picture; if the gaze point identified by the eye detection device is also on the left-side picture (the gaze point is on the left side of the display centerline), this often means the user wants to see the left-side picture, and the intended observation angle is the observation angle corresponding to the left-side picture; the rendering resolution of the picture in the rightmost 1/a equal-division area can then be reduced, again reducing the image-rendering load/pressure on the head-mounted display device. It should be noted that a is generally greater than 2, for example a equals 3 or 4. In one example, a equals 4: if the user turns the head to the left, the rendering resolution is reduced for the picture in the rightmost 1/4 equal-division area of the frame; if the user turns the head to the right, the rendering resolution is reduced for the picture in the leftmost 1/4 equal-division area.
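The 1/a strip selection in this example can be sketched as follows, assuming a purely horizontal head turn and a frame addressed in pixel columns; low_res_strip is a hypothetical helper, and a = 4 mirrors the example above.

```python
def low_res_strip(frame_width: int, turn: str, a: int = 4):
    """Return the (x0, x1) column range to render at the reduced
    resolution: turning left drops quality in the rightmost 1/a of
    the frame, turning right in the leftmost 1/a."""
    strip = frame_width // a
    if turn == "left":
        return frame_width - strip, frame_width
    if turn == "right":
        return 0, strip
    return None  # other turn trends are not covered by this sketch

# Example: a 1920-pixel-wide frame, user turning left.
assert low_res_strip(1920, "left") == (1440, 1920)
```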
According to the embodiment, the head turning trend of the user is determined according to the head movement posture information, then the eyeball observation point of the user in the current view field environment image is detected, and the intention observation angle of the user is determined by combining the eyeball observation point of the user on the basis of the head turning trend, so that the accuracy of determining the intention observation angle is improved.
Further, in an implementable manner, the step of determining the intended viewing angle of the user from the eyeball observation point and the head turning tendency includes:
step A10, determining the current gazing direction according to the eyeball observation point;
in this embodiment, an eye image of the user may be acquired based on an eye tracking technology, pupil center and spot position information of the user (a spot is a reflection bright spot formed by a screen of the head-mounted display device on an eye cornea of the user) may be acquired according to the eye image of the user, an eye observation point of the user may be determined according to the pupil center and spot position information of the user, and then a current gaze direction of the user in a current view field environment image may be determined according to the eye observation point, as shown in fig. 7.
And step A20, inquiring and obtaining the observation visual angle mapped by the current gazing direction and the head turning trend from a preset observation visual angle mapping table, and using the mapped observation visual angle as the intention observation visual angle of the user.
In this embodiment, the observation perspective mapping table stores a mapping relationship between two parameters, i.e., a gaze direction and a head turning tendency, and an observation perspective in a one-to-one mapping manner.
To facilitate understanding, as an example, the step of querying the preset observation-angle mapping table to obtain the observation angle mapped by the current gaze direction and the head turning tendency comprises:
if the head turning trend is turning to the left and the current watching direction is the left watching direction, inquiring and obtaining the current watching area and the watching visual angle mapped by the head turning trend as a left watching visual angle from a preset watching visual angle mapping table; if the head turning trend is turning to the right and the current watching direction is a right watching direction, inquiring to obtain a mapped observation visual angle as a right observation visual angle from a preset observation visual angle mapping table; if the head turning trend is an upward turning trend and the current watching direction is an upward watching direction, inquiring from a preset observation visual angle mapping table to obtain a mapped observation visual angle as an upward observation visual angle; and if the head turning trend is downward turning and the current watching direction is a downward watching direction, inquiring to obtain a mapped watching visual angle as a downward watching visual angle from a preset watching visual angle mapping table.
Of course, if the head turning tendency is turning to the upper left and the current gaze direction is an upper-left gaze direction, the observation angle mapped by the current gaze direction and the head turning tendency, queried from the preset observation-angle mapping table, is the observation angle corresponding to the upper-left direction. Other head turning tendencies and gaze directions map to other observation angles, which are not enumerated here one by one. The above examples are only to assist understanding of the present application and do not limit its observation-angle mapping table.
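One plausible encoding of the preset observation-angle mapping table is a dictionary keyed by the pair (head turning tendency, current gaze direction), as sketched below; the entries shown are illustrative placeholders, since the patent does not enumerate the full table.

```python
# Hypothetical observation-angle mapping table.
VIEW_ANGLE_TABLE = {
    ("left", "left"): "left observation angle",
    ("right", "right"): "right observation angle",
    ("up", "up"): "upward observation angle",
    ("down", "down"): "downward observation angle",
    ("upper_left", "upper_left"): "upper-left observation angle",
    # ... remaining combinations calibrated per device
}

def intended_observation_angle(turn_trend: str, gaze_dir: str):
    """Query the table; None means the pair is not mapped."""
    return VIEW_ANGLE_TABLE.get((turn_trend, gaze_dir))
```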
It should be understood that the human eye has different image sharpness for different areas of the image. In the visible range of the user (namely in the current field environment image), the image area mainly concerned by the eyeball is sensitive and clearly imaged, and the imaging of other image areas is fuzzy. The image of the observation part corresponding to the intended observation visual angle is an image area which is mainly concerned by the eyeballs of the user, and the images corresponding to other parts in the current visual field environment image are other image areas which are not concerned by the eyeballs of the user.
According to the method and the device, the current watching direction is determined according to the eyeball observation point, the observation angle mapped by the current watching direction and the head turning trend is obtained by inquiring from the preset observation angle mapping table, and the mapped observation angle is used as the intention observation angle of the user, so that the accuracy of determining the intention observation angle of the user is further improved.
In a possible implementation, the step of detecting an eyeball observation point of the user in the current field-of-view environment image comprises:
step B10, collecting a current eyeball image of a user, determining an eyeball model with the highest matching degree with the current eyeball image, and taking the eyeball model with the highest matching degree as a current actual eyeball model;
in this embodiment, the eye model with the highest matching degree with the current eye image may be identified by performing image recognition based on a preset image recognition algorithm on the current eye image. The preset image recognition algorithm has been studied by those skilled in the art, and is not described herein again.
And step B20, inquiring and obtaining an eyeball observation point mapped by the current actual eyeball model from a preset eyeball model mapping database, and taking the mapped eyeball observation point as an eyeball observation point of the user in the current visual field environment image.
As will be understood by those skilled in the art, different types of eye models (e.g., different information such as exit pupil distance, pupil shape, pupil region position, and current pupil spot position in the eye model) often correspond to different eye observation points.
In the present embodiment, the eyeball model mapping database stores information of a plurality of types of eyeball models and mapping relationships between the eyeball models and the eyeball observation points in a one-to-one mapping manner.
In the embodiment, the current eyeball image of the user is collected, the eyeball model with the highest matching degree with the current eyeball image is determined, the eyeball model with the highest matching degree is used as the current actual eyeball model, the eyeball observation point mapped by the current actual eyeball model is inquired and obtained from the preset eyeball model mapping database, and therefore the eyeball observation point of the user in the current view field environment image is accurately obtained.
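A minimal sketch of this matching step is shown below, assuming the eyeball-model database stores one reference image per model (all registered to a common size) and using normalized cross-correlation as a stand-in for the unspecified matching score; match_eye_model and the database layout are assumptions.

```python
import numpy as np

def match_eye_model(current_eye: np.ndarray, model_db: dict) -> str:
    """Return the key of the stored eyeball model whose reference
    image best matches the current eye image (same-size grayscale
    arrays assumed)."""
    def ncc(a: np.ndarray, b: np.ndarray) -> float:
        a = (a - a.mean()) / (a.std() + 1e-9)
        b = (b - b.mean()) / (b.std() + 1e-9)
        return float((a * b).mean())
    return max(model_db, key=lambda k: ncc(current_eye, model_db[k]["image"]))

# The matched key would then index the preset eyeball-model mapping
# database to obtain the mapped eyeball observation point.
```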
In another possible implementation manner, referring to fig. 3, the step of detecting an eyeball observation point of the user in the current view-field environment image further includes:
s41, acquiring a current eyeball image of a user, and carrying out gray processing on the current eyeball image;
in this embodiment, the current eyeball image of the user can be captured by a camera mounted on the head-mounted display device.
Step S42, determining a pupil area image according to the grayed current eyeball image, and performing binarization processing on the pupil area image;
step S43, performing edge detection on the pupil area image after binarization processing to obtain pupil edge points, and performing ellipse fitting on the pupil edge points to obtain the current pupil center;
and S44, determining an eyeball observation point of the user in the current view field environment image according to the current pupil center.
As an example, the step of determining an eyeball observation point of the user in the current field-of-view environment image according to the current pupil center includes:
step C10, inquiring an eyeball observation point mapped by the current pupil center from a pre-calibrated pupil center mapping data table;
those skilled in the art will appreciate that pupil centers at different locations often correspond to different eye observation points. It should be noted that, in the pre-calibrated pupil center mapping data table, a plurality of pupil centers at different positions and a one-to-one mapping relationship between each pupil center and an eyeball observation point are stored.
And step C20, taking the mapped eyeball observation point as an eyeball observation point of the user in the current view field environment image.
In this embodiment, a current eyeball image of a user is acquired, a graying process is performed on the current eyeball image, a pupil area image is determined according to the current eyeball image after the graying process, a binarization process is performed on the pupil area image, then edge detection is performed on the pupil area image after the binarization process, a pupil edge point is obtained through detection, ellipse fitting is performed on the pupil edge point, a current pupil center is obtained through fitting, and then an eyeball observation point of the user in a current view field environment image can be accurately obtained based on the current pupil center.
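Steps S41 to S44 map naturally onto standard OpenCV calls. The sketch below is one possible realization, assuming the binarized image isolates a dark pupil; the threshold value 40 is illustrative, and the mapping-table lookup of step S44 is left as a comment.

```python
import cv2
import numpy as np

def pupil_center(eye_bgr: np.ndarray):
    """S41-S43: grayscale -> binarize -> edge detection -> ellipse fit.
    Returns the fitted pupil center, or None if too few edge points."""
    gray = cv2.cvtColor(eye_bgr, cv2.COLOR_BGR2GRAY)                # S41
    _, pupil = cv2.threshold(gray, 40, 255, cv2.THRESH_BINARY_INV)  # S42
    edges = cv2.Canny(pupil, 50, 150)                               # S43
    pts = cv2.findNonZero(edges)
    if pts is None or len(pts) < 5:   # fitEllipse needs >= 5 points
        return None
    (cx, cy), _axes, _angle = cv2.fitEllipse(pts)
    # S44 would look up (cx, cy) in the pre-calibrated pupil-center
    # mapping data table to obtain the eyeball observation point.
    return cx, cy
```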
Example two
Referring to fig. 4, in another embodiment of the present application, the same or similar contents as those in the first embodiment may be referred to the above description, and are not repeated herein. On this basis, the step of determining the intended observation angle of view of the user according to the head motion posture information further comprises the following steps:
step S51, performing brightness display control on the image in the intended observation visual angle in the current visual field environment image according to the first area backlight brightness;
step S52, carrying out brightness reduction processing on the first area backlight brightness to obtain second area backlight brightness;
and S53, performing brightness display control on the images outside the intended observation visual angle in the current visual field environment image by using the backlight brightness of a second area.
In conventional image rendering technology, a high regional backlight brightness is generally used to control the brightness of the image displayed on the entire target screen so as to meet the user's viewing requirements. However, if the area the user's eyes mainly attend to covers only part of the screen, the prior art also applies the high regional backlight brightness to the image displayed outside that attention area, which wastes power.
Therefore, in the technical scheme of this embodiment, brightness display control is performed with a first region backlight brightness on the image within the intended observation angle in the current view field environment image; brightness-reduction processing is performed on the first region backlight brightness to obtain a second region backlight brightness; and brightness display control is then performed with the second region backlight brightness on the image outside the intended observation angle. This reduces the backlight power wasted on the non-gaze area and improves the brightness and clarity of the gaze-area image: the backlight brightness is high in the area the eyes attend to and low in the area they do not (the peripheral field of view). Without affecting, or while even improving, the user experience, display energy consumption is saved, brightness display control of the image becomes more reasonable, and the energy wasted on brightly displaying images outside the eye-attention area is avoided.
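A software simulation of this zoned-backlight idea is sketched below, with a per-pixel brightness scale standing in for real regional backlight hardware; the dimming factor 0.5 and the helper name apply_regional_backlight are illustrative assumptions.

```python
import numpy as np

def apply_regional_backlight(frame: np.ndarray, roi: tuple,
                             dim: float = 0.5) -> np.ndarray:
    """Keep full brightness inside the intended observation angle
    (roi = x, y, w, h) and scale the rest down to a second, lower
    regional backlight brightness."""
    x, y, w, h = roi
    out = frame.astype(np.float32) * dim             # second region brightness
    out[y:y + h, x:x + w] = frame[y:y + h, x:x + w]  # first region brightness
    return np.clip(out, 0, 255).astype(np.uint8)
```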
EXAMPLE III
An embodiment of the present invention further provides an image rendering apparatus, where the image rendering apparatus is applied to a head-mounted display device, and the image rendering apparatus includes:
the acquisition module is used for acquiring the acquired current view field environment image;
the detection module is used for dynamically detecting head movement posture information of a user and determining an intention observation visual angle of the user according to the head movement posture information;
and the rendering module is used for rendering the image within the intended observation angle in the current view field environment image at a first rendering resolution, performing rendering-quality reduction processing on the first rendering resolution to obtain a second rendering resolution, and rendering the image outside the intended observation angle in the current view field environment image at the second rendering resolution.
Optionally, the detection module is further configured to:
determining the head steering trend of the user according to the head movement posture information;
and detecting eyeball observation points of the user in the current view field environment image, and determining an intended observation view angle of the user according to the eyeball observation points and the head turning trend.
Optionally, the detection module is further configured to:
determining the current gazing direction according to the eyeball observation point;
and inquiring to obtain the observation visual angle mapped by the current gazing direction and the head turning trend from a preset observation visual angle mapping table, and using the mapped observation visual angle as an intention observation visual angle of the user.
Optionally, the detecting module is further configured to:
acquiring a current eyeball image of a user, determining an eyeball model with the highest matching degree with the current eyeball image, and taking the eyeball model with the highest matching degree as a current actual eyeball model;
and inquiring to obtain an eyeball observation point mapped by the current actual eyeball model from a preset eyeball model mapping database, and taking the mapped eyeball observation point as the eyeball observation point of the user in the current view field environment image.
Optionally, the detecting module is further configured to:
acquiring a current eyeball image of a user, and carrying out gray processing on the current eyeball image;
determining a pupil area image according to the current eyeball image after the graying processing, and carrying out binarization processing on the pupil area image;
performing edge detection on the pupil area image after binarization processing to obtain pupil edge points, and performing ellipse fitting on the pupil edge points to obtain the current pupil center;
and determining an eyeball observation point of the user in the current view field environment image according to the current pupil center.
Optionally, the detecting module is further configured to:
inquiring to obtain an eyeball observation point mapped by the current pupil center from a pre-calibrated pupil center mapping data table;
and taking the mapped eyeball observation point as the eyeball observation point of the user in the current view field environment image.
Optionally, the rendering module is further configured to:
performing brightness display control on the image within the intended observation angle in the current view field environment image with a first region backlight brightness;
performing brightness reduction processing on the first area backlight brightness to obtain second area backlight brightness;
and performing brightness display control on the images outside the intended observation visual angle in the current visual field environment image by using the backlight brightness of a second area.
By adopting the image rendering method in the first embodiment or the second embodiment, the image rendering device provided by the embodiment of the invention can reduce the rendering pressure of the head-mounted display equipment on the image. Compared with the prior art, the image rendering device provided by the embodiment of the invention has the same beneficial effects as the image rendering method provided by the embodiment, and other technical features in the image rendering device are the same as those disclosed in the method of the previous embodiment, which are not repeated herein.
Example four
An embodiment of the present invention provides a head-mounted display device, including: at least one processor; and a memory communicatively coupled to the at least one processor; the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to perform the image rendering method according to the first embodiment.
Referring now to FIG. 8, there is shown a schematic diagram of a head-mounted display device suitable for implementing embodiments of the present disclosure. Head-mounted display devices in embodiments of the present disclosure may include, but are not limited to, Mixed Reality (MR) devices, Augmented Reality (AR) devices, Virtual Reality (VR) devices, Extended Reality (XR) devices, or some combination thereof. The head-mounted display device shown in fig. 8 is only an example and should not impose any limitation on the functions and scope of use of the embodiments of the present disclosure.
As shown in fig. 8, the head-mounted display device may include a processing device 1001 (e.g., a central processing unit, a graphics processor, etc.), which can perform various appropriate actions and processes according to a program stored in a read-only memory (ROM 1002) or a program loaded from a storage device into a random access memory (RAM 1004). The RAM 1004 also stores various programs and data necessary for the operation of the AR glasses. The processing device 1001, the ROM 1002, and the RAM 1004 are connected to each other through a bus 1005. An input/output (I/O) interface is also connected to the bus 1005.
Generally, the following systems may be connected to the I/O interface 1006: an input device 1007 including, for example, a touch screen, a touch pad, a keyboard, a mouse, an image sensor, a microphone, an accelerometer, a gyroscope, or the like; output devices 1008 including, for example, a Liquid Crystal Display (LCD), speakers, vibrators, or the like; a storage device 1003 including, for example, a magnetic tape, a hard disk, or the like; and a communication device 1009. The communications apparatus 1009 may allow the AR glasses to communicate wirelessly or by wire with other devices to exchange data. While the figures illustrate AR glasses with various systems, it is to be understood that not all of the illustrated systems are required to be implemented or provided. More or fewer systems may alternatively be implemented or provided.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication means, or installed from the storage means 1003, or installed from the ROM 1002. The computer program, when executed by the processing device 1001, performs the above-described functions defined in the methods of the embodiments of the present disclosure.
By adopting the image rendering method in the first embodiment or the second embodiment, the head-mounted display device provided by the invention can reduce the rendering pressure of the head-mounted display device on the image. Compared with the prior art, the beneficial effects of the head-mounted display device provided by the embodiment of the invention are the same as the beneficial effects of the image rendering method provided by the first embodiment, and other technical features of the head-mounted display device are the same as those disclosed in the method of the previous embodiment, which are not repeated herein.
It should be understood that portions of the present disclosure may be implemented in hardware, software, firmware, or a combination thereof. In the foregoing description of embodiments, the particular features, structures, materials, or characteristics may be combined in any suitable manner in any one or more embodiments or examples.
The above description is only for the specific embodiments of the present invention, but the scope of the present invention is not limited thereto, and any person skilled in the art can easily think of the changes or substitutions within the technical scope of the present invention, and shall cover the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the appended claims.
EXAMPLE five
The embodiment of the invention provides a computer-readable storage medium, which has computer-readable program instructions stored thereon, and the computer-readable program instructions are used for executing the image rendering method in the first embodiment.
The computer readable storage medium provided by the embodiments of the present invention may be, for example, a USB flash disk, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, or device, or any combination thereof. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present embodiment, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, or device. Program code embodied on a computer readable storage medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
The computer readable storage medium may be embodied in a head-mounted display device; or may be separate and not incorporated into the head-mounted display device.
The computer readable storage medium carries one or more programs which, when executed by the head mounted display device, cause the head mounted display device to: acquiring a collected current view field environment image; dynamically detecting head movement posture information of a user, and determining an intention observation visual angle of the user according to the head movement posture information; rendering the image in the intended observation view angle in the current view field environment image at a first rendering resolution ratio, performing rendering quality reduction processing on the first rendering resolution ratio to obtain a second rendering resolution ratio, and rendering the image outside the intended observation view angle in the current view field environment image at the second rendering resolution ratio.
Computer program code for carrying out operations of the present disclosure may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" programming language or similar languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the remote-computer case, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The modules described in the embodiments of the present disclosure may be implemented by software or hardware. Wherein the names of the modules do not in some cases constitute a limitation of the unit itself.
The invention provides a computer readable storage medium, which stores computer readable program instructions for executing the image rendering method and can reduce the rendering pressure of a head-mounted display device on the image. Compared with the prior art, the beneficial effects of the computer-readable storage medium provided by the embodiment of the present invention are the same as the beneficial effects of the image rendering method provided by the first embodiment or the second embodiment, and are not described herein again.
Example six
Embodiments of the present invention further provide a computer program product, which includes a computer program, and when the computer program is executed by a processor, the steps of the image rendering method as described above are implemented.
The computer program product provided by the application can reduce the rendering pressure of the head-mounted display device on the image. Compared with the prior art, the beneficial effects of the computer program product provided by the embodiment of the present invention are the same as the beneficial effects of the image rendering method provided by the first embodiment or the second embodiment, and are not described herein again.
The above description is only a preferred embodiment of the present application, and not intended to limit the scope of the present application, and all equivalent structures or equivalent processes, which are directly or indirectly applied to other related technical fields, and which are not limited by the present application, are also included in the scope of the present application.

Claims (10)

CN202211437447.6A (filed 2022-11-16, priority 2022-11-16): Image rendering method and device, head-mounted display equipment and readable storage medium; status: Pending; published as CN115761089A (en)

Priority Applications (1)

CN202211437447.6A (priority date 2022-11-16, filing date 2022-11-16): Image rendering method and device, head-mounted display equipment and readable storage medium

Applications Claiming Priority (1)

CN202211437447.6A (priority date 2022-11-16, filing date 2022-11-16): Image rendering method and device, head-mounted display equipment and readable storage medium

Publications (1)

CN115761089A, published 2023-03-07

Family

ID: 85372278

Family Applications (1)

CN202211437447.6A (priority date 2022-11-16, filing date 2022-11-16): CN115761089A, Pending: Image rendering method and device, head-mounted display equipment and readable storage medium

Country Status (1)

CN: CN115761089A (en)

Cited By (2)

* Cited by examiner, † Cited by third party

CN119135861A (en)*, priority 2024-11-14, published 2024-12-13, Zhejiang University: A virtual reality binocular rendering acceleration method and system
CN119987565A (en)*, priority 2025-04-16, published 2025-05-13, Changchun University: Imaging detection method and system for head-mounted display device

Similar Documents

US11836289B2 (en): Use of eye tracking to adjust region-of-interest (ROI) for compressing images for transmission
US10775886B2 (en): Reducing rendering computation and power consumption by detecting saccades and blinks
US10739849B2 (en): Selective peripheral vision filtering in a foveated rendering system
US10720128B2 (en): Real-time user adaptive foveated rendering
KR102543341B1 (en): Adaptive parameters in image regions based on eye tracking information
US10859830B2 (en): Image adjustment for an eye tracking system
CN109741289B (en): Image fusion method and VR equipment
CN111710050A (en): Image processing method and device for virtual reality equipment
CN115761089A (en): Image rendering method and device, head-mounted display equipment and readable storage medium
CN109144250B (en): Position adjusting method, device, equipment and storage medium
CN115914603A (en): Image rendering method, head-mounted display device and readable storage medium
CN115713783A (en): Image rendering method and device, head-mounted display equipment and readable storage medium
CN115686219A (en): Image rendering method, head-mounted display device, and readable storage medium
KR20160060582A (en): Device and method for processing visual data, and related computer program product
CN114026603A (en): Rendering computer-generated reality text
CN114911445A (en): Display control method of virtual reality device, and storage medium
CN115576637A (en): Screen capture method, system, electronic device and readable storage medium
US12405662B2 (en): Screen interaction using EOG coordinates
CN114594855B (en): Multi-machine interaction method and system of head display equipment, terminal equipment and storage medium
KR20240099029A (en): Method and device for naked eye 3D displaying vehicle instrument
CN119575677A (en): Head-up display system control method, device, electronic device and storage medium
CN119225514A (en): Eye tracking method, device, storage medium and program product
CN119902619A (en): Virtual-reality fusion method, display module and related equipment
CN119155451A (en): Eyeball tracking-based image processing method, device, equipment and storage medium
CN119893066A (en): Anti-dazzling wide dynamic technology application method based on virtual reality

Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
