Technical Field
The present invention relates to the field of simulation, and in particular to a simulation scene measurement method, an accuracy measurement method, and corresponding systems.
Background Art
At present, the common verification method in the simulation industry is to compare a physical object that exists in the system with the identical virtual model produced by the system's simulation, and thereby measure the geometric simulation accuracy of the system indirectly.
The existing technical solutions have the following disadvantages:
1) The operation is complicated and cumbersome. Existing techniques all compare an actual object with its virtual counterpart, so the measurement range is limited, and the geometric simulation accuracy of the virtual scene can only be determined indirectly;
2) The accuracy of the results is low. The virtual-real comparison results are all obtained by human observation and cannot be analyzed quantitatively.
Summary of the Invention
(1) Purpose of the Invention
The purpose of the present invention is to provide a simulation scene measurement method, an accuracy measurement method, and corresponding systems, so as to solve the problems of complicated operation and low result accuracy in existing virtual simulation verification techniques.
(2) Technical Solution
To solve the above problems, a first aspect of the present invention provides a simulation scene measurement method, comprising: placing a binocular camera, imitating the human eyes, in front of 3D glasses; calibrating the optical center positions of the binocular camera and using the optical center positions as the eye point positions; rendering the VR system scene in real time according to the eye point positions to obtain a simulation scene; and measuring the simulation scene according to the calibration parameters of the binocular camera and the images it collects.
Further, the optical center positions include initial optical center positions and real-time tracked optical center positions.
Further, rendering the VR system scene in real time according to the eye point positions to obtain the simulation scene specifically comprises: presetting a virtual space point to be measured; and performing stereoscopic rendering of the virtual space point according to the eye point positions, displaying a left-eye image and a right-eye image on the screen to form the simulation scene.
Further, measuring the simulation scene according to the calibration parameters of the binocular camera and the collected images specifically comprises: using the binocular camera to collect the left-eye image and the right-eye image correspondingly through the 3D glasses; and calculating the physical-world coordinates of the space point from the left-eye image, the right-eye image, and the calibration parameters of the cameras according to a stereo vision algorithm.
According to another aspect of the present invention, a simulation scene rendering accuracy measurement method is provided, comprising:
moving the position of the binocular camera;
performing, at multiple positions, the steps of the simulation scene measurement method according to any one of the above technical solutions to obtain multiple measurement results; and
comparing the deviations among the positions of the multiple measurement results, and determining the rendering accuracy of the simulation scene from the deviations.
According to yet another aspect of the present invention, a simulation scene measurement system is provided, comprising:
a binocular camera, configured to imitate the human eyes and collect a left-eye image and a right-eye image;
an optical center positioning module, configured to calibrate the optical center positions of the binocular camera and use the optical center positions as the eye point positions;
a scene rendering module, configured to render the VR system scene according to the eye point positions to obtain a simulation scene; and
a simulation scene measurement module, configured to measure the simulation scene according to the calibration parameters of the binocular camera and the collected images.
Further, the optical center positions include initial optical center positions and real-time optical center positions.
Further, the scene rendering module comprises:
a space point simulation module, configured to preset a virtual space point to be measured; and
a stereoscopic rendering module, configured to perform stereoscopic rendering of the virtual space point according to the eye point positions, displaying a left-eye image and a right-eye image on the screen to form the simulation scene.
Further, when the stereoscopic rendering module performs stereoscopic rendering of the virtual space point according to the eye point positions and displays the left-eye image and the right-eye image on the screen to form the simulation scene, the specific execution steps include:
using the binocular camera to collect the left-eye image and the right-eye image correspondingly through the 3D glasses; and
calculating the physical-world coordinates of the space point from the left-eye image, the right-eye image, and the calibration parameters of the cameras according to a stereo vision algorithm.
According to yet another aspect of the present invention, a simulation scene rendering accuracy measurement system is provided, comprising:
a drive module, configured to move the position of the binocular camera;
a simulation scene measurement module, configured to perform the steps of the simulation scene measurement method according to any one of the above solutions to obtain multiple measurement results; and
a comparison module, configured to compare the deviations among the positions of the multiple measurement results and determine the rendering accuracy of the simulation scene from the deviations.
(3) Beneficial Effects
The above technical solutions of the present invention have the following beneficial technical effects:
(1) Compared with traditional observation by the human eye, this method relies on quantitative analysis and is therefore more objective and accurate;
(2) The entire measurement process involves no subjective human judgment and can be fully automated;
(3) Virtual objects can be measured independently, completely detached from any physical counterpart, which expands the range of application;
(4) Any point in three-dimensional space can be measured by this method, so the results are comprehensive.
Description of the Drawings
Fig. 1 is a flowchart of a simulation scene measurement method according to a first embodiment of the present invention;
Fig. 2 is a flowchart of obtaining a simulation scene by real-time rendering according to the first embodiment of the present invention;
Fig. 3 is a flowchart of simulation scene measurement according to the first embodiment of the present invention;
Fig. 4 is a flowchart of a simulation scene rendering accuracy measurement method according to another aspect of the first embodiment of the present invention;
Fig. 5 is a schematic diagram of a simulation scene measurement method according to an optional embodiment of the present invention;
Fig. 6 is a flowchart of a simulation scene measurement method according to an optional embodiment of the present invention;
Fig. 7 is a schematic diagram of the deviation between a general measurement method and the real eye position;
Fig. 8 is a schematic diagram of the transformation matrices between coordinate systems according to an optional embodiment of the present invention.
Detailed Description of the Embodiments
To make the purpose, technical solutions, and advantages of the present invention clearer, the present invention is described in further detail below with reference to specific embodiments and the accompanying drawings. It should be understood that these descriptions are exemplary only and are not intended to limit the scope of the present invention. In addition, descriptions of well-known structures and techniques are omitted in the following description to avoid unnecessarily obscuring the concept of the present invention.
Apparently, the described embodiments are some, but not all, of the embodiments of the present invention. Based on the embodiments of the present invention, all other embodiments obtained by persons of ordinary skill in the art without creative effort fall within the protection scope of the present invention.
In addition, the technical features involved in the different embodiments of the present invention described below may be combined with one another as long as they do not conflict.
As shown in Fig. 1, a first aspect of an embodiment of the present invention provides a simulation scene measurement method, comprising:
S1: placing a binocular camera, imitating the human eyes, in front of 3D glasses;
S2: calibrating the optical center positions of the binocular camera and using the optical center positions as the eye point positions. Optionally, the optical center positions include initial optical center positions and real-time optical center positions. Specifically, the initial optical center positions are calibrated; the calibration yields the three-dimensional coordinates of the two camera optical centers in a unified world coordinate system, and thereby the relative positional relationship between the initial optical center positions and the 3D glasses. A tracking system then acquires the position of the 3D glasses in real time, and the real-time optical center positions of the cameras are solved from the relative positional relationship between the initial optical center positions and the 3D glasses. The tracking system may be ART's optical tracking system, or alternatively OptiTrack, Vicon, the domestic Qingtong system, etc. To obtain the real-time optical center coordinates, this embodiment has the camera system whose initial position is to be calibrated and the tracking system jointly measure the same group of physical space points; to carry out this measurement, the camera system itself must undergo stereo vision calibration, covering the intrinsic and extrinsic camera parameters. The above method solves the problem that the tracking system can only obtain the position of the 3D glasses and cannot obtain the optical center positions of the real cameras, and thus eliminates the deviation that would otherwise arise when setting the eye points; with this method, accurate real-time optical center positions can be obtained.
S3: rendering the VR system scene in real time according to the eye point positions to obtain a simulation scene. Optionally, as shown in Fig. 2, this specifically includes: S31, presetting a virtual space point to be measured; S32, performing stereoscopic rendering of the virtual space point according to the eye point positions, displaying a left-eye image and a right-eye image on the screen to form the simulation scene.
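Conceptually, step S32 projects the same virtual space point once per eye point, so that the left and right images differ by a horizontal parallax. The sketch below illustrates this with a plain pinhole projection; the function name, the eye offsets, and the pinhole model itself are illustrative assumptions rather than the patent's rendering pipeline, which in practice performs the VR system's projection onto the physical screen.

```python
import numpy as np

def project_point(p_world, eye_pos, view_dir, up, focal=1.0):
    """Project a 3-D point onto the image plane of one eye (pinhole sketch)."""
    # Build an orthonormal eye frame (right, up, forward).
    f = view_dir / np.linalg.norm(view_dir)
    r = np.cross(up, f); r /= np.linalg.norm(r)
    u = np.cross(f, r)
    # Express the point in eye coordinates, then apply the perspective divide.
    d = p_world - eye_pos
    return np.array([focal * (d @ r) / (d @ f), focal * (d @ u) / (d @ f)])

# Illustrative values: a point 2 m ahead of eyes spaced 65 mm apart.
P = np.array([0.0, 0.0, 2.0])
fwd, up = np.array([0.0, 0.0, 1.0]), np.array([0.0, 1.0, 0.0])
img_L = project_point(P, np.array([-0.0325, 0.0, 0.0]), fwd, up)
img_R = project_point(P, np.array([+0.0325, 0.0, 0.0]), fwd, up)
print(img_L, img_R)  # the horizontal difference between the two is the disparity
```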
S4: measuring the simulation scene according to the calibration parameters of the binocular camera and the collected images. Optionally, as shown in Fig. 3, this specifically includes: S41, using the binocular camera to collect the left-eye image and the right-eye image correspondingly through the 3D glasses; S42, calculating the physical-world coordinates of the space point from the left-eye image, the right-eye image, and the calibration parameters of the cameras according to a stereo vision algorithm. Here, "correspondingly" means that the camera standing in for the human left eye collects the on-screen image observed by the left eye, and the camera standing in for the human right eye collects the on-screen image observed by the right eye.
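Step S42 does not prescribe a particular stereo vision algorithm. Below is a minimal triangulation sketch based on OpenCV's cv2.triangulatePoints, assuming projection matrices assembled from calibrated intrinsics (K_l, K_r) and stereo extrinsics (R, t) with the left camera as the reference frame; the variable names and the choice of OpenCV are assumptions for illustration.

```python
import numpy as np
import cv2

def triangulate(K_l, K_r, R, t, uv_l, uv_r):
    """Recover the 3-D position of a point from its pixel coordinates in the
    left and right images, given the stereo calibration parameters."""
    # Projection matrices: the left camera defines the reference frame.
    P_l = K_l @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P_r = K_r @ np.hstack([R, t.reshape(3, 1)])
    # Triangulate; the result comes back in homogeneous coordinates.
    X_h = cv2.triangulatePoints(P_l, P_r,
                                np.asarray(uv_l, float).reshape(2, 1),
                                np.asarray(uv_r, float).reshape(2, 1))
    return (X_h[:3] / X_h[3]).ravel()  # 3-D point in the left-camera frame
```

The result is expressed in the left-camera coordinate system; in the workflow described here it still has to be mapped into the physical world coordinate system through the calibrated transforms discussed below.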
This method requires no human judgment: through quantitative analysis it is more objective and more accurate; the entire measurement process involves no subjective human judgment and can be automated; virtual objects can be measured independently, completely detached from any physical counterpart, which expands the range of application; and any point in three-dimensional space can be measured by this method, so the results are comprehensive.
As shown in Fig. 4, another aspect of the embodiments of the present invention provides a simulation scene rendering accuracy measurement method, comprising:
S′1: moving the position of the binocular camera;
S′2: performing the steps of the simulation scene measurement method of the above embodiment at multiple positions to obtain multiple measurement results;
S′3: comparing the deviations among the positions of the multiple measurement results to determine the rendering accuracy of the simulation scene.
In yet another aspect of the embodiments of the present invention, a simulation scene measurement system is provided, comprising:
a binocular camera, configured to imitate the human eyes and collect a left-eye image and a right-eye image; optionally, the binocular camera is a fixed-focus digital camera;
an optical center positioning module, configured to calibrate the optical center positions of the binocular camera and use the optical center positions as the eye point positions; optionally, the optical center positions include initial optical center positions and real-time tracked optical center positions;
a scene rendering module, configured to render the VR system scene according to the eye point positions to obtain a simulation scene. Optionally, the scene rendering module includes: a space point simulation module, configured to preset a virtual space point to be measured; and a stereoscopic rendering module, configured to perform stereoscopic rendering of the virtual space point according to the eye point positions, displaying a left-eye image and a right-eye image on the screen to form the simulation scene. Optionally, when the stereoscopic rendering module performs this stereoscopic rendering to form the simulation scene, the specific execution steps include: using the binocular camera to collect the left-eye image and the right-eye image correspondingly through the 3D glasses; and calculating the physical-world coordinates of the space point from the left-eye image, the right-eye image, and the calibration parameters of the cameras according to a stereo vision algorithm; and
a simulation scene measurement module, configured to measure the simulation scene according to the calibration parameters of the binocular camera and the collected images.
In yet another aspect of the embodiments of the present invention, a simulation scene rendering accuracy measurement system is provided, comprising:
a drive module, configured to move the position of the binocular camera;
a simulation scene measurement module, configured to perform the steps of the simulation scene measurement method of the above embodiments to obtain multiple measurement results; and
a comparison module, configured to compare the deviations among the positions of the multiple measurement results and determine the rendering accuracy of the simulation scene.
As shown in Fig. 5, in an optional implementation of the present invention, in order to measure the simulation scene in the real world, a pair of calibrated fixed-focus digital cameras is placed in the simulation scene to observe the scene in place of the human eyes, and 3D glasses are placed in front of the two cameras (the two cameras may also be replaced with a binocular camera), so that each camera standing in for the left or right eye can observe only the picture rendered for its own eye point position, thereby capturing a parallax image pair of the simulation scene. The camera optical center positions are then initially calibrated and tracked in real time, and the tracking result is defined as the eye point position used by the VR system for rendering, which guarantees that the rendered simulation scene matches the observation position and avoids measurement error caused by eye point deviation. The VR system is then started, and the scene is rendered in real time according to the tracked eye positions. A stereo vision algorithm then measures the simulation scene from the two captured parallax images and the calibration parameters of the camera system. The camera system is moved to change the observation position, and the above measurement process is repeated from the new eye point position to obtain measurement results at different observation positions. Finally, by comparing the average of the repeated measurements of the simulation scene with the original design data, the rendering accuracy of the scene's geometric dimensions is measured.
As shown in Fig. 6, measuring the geometric dimensions of the simulation scene reduces, at bottom, to measuring the position of an arbitrary virtual space point in the virtual environment. The specific steps are as follows:
In the first step, the calibrated stereo camera measurement system is placed at an arbitrary position in the VR system, and the 3D glasses are placed in front of the cameras. (The stereo camera measurement system fixes two calibrated fixed-focus cameras onto a single rigid bracket that also carries a track marker for tracking and positioning; the whole assembly behaves as one rigid body, so no relative displacement occurs between its components.)
In the second step, the initial positions of the camera optical centers are calibrated, and the optical center positions are taken as the initial positions of the rendering eye points.
In the third step, based on the calibration result for the initial optical center positions, real-time tracking of the camera optical center positions begins.
In the fourth step, the virtual space point P to be measured is set.
In the fifth step, the camera spatial positions are defined as the eye point positions, the virtual space point P is stereoscopically rendered according to the eye point positions, and two images IL and IR are displayed on the screen S.
In the sixth step, the cameras, aided by the 3D glasses, collect the left-eye and right-eye images respectively, and a stereo vision algorithm calculates the physical-world coordinates of point P from the two captured parallax images and the calibration parameters of the camera system.
In the seventh step, the camera is moved to change the observation eye position, and the third through sixth steps are repeated to obtain multiple sets of measurement results at different observation positions; finally, the deviation between the virtual-space design position of point P and the centroid of the multiple actually measured positions is compared to determine the geometric rendering accuracy of point P, as sketched in the code below.
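As referenced above, the seven steps form a simple per-point measurement loop. The following sketch assumes hypothetical helpers track_eye_points() (step three), render_and_capture() (steps five and six up to image capture), and a world-frame triangulation routine such as the one sketched earlier; none of these names come from the patent itself.

```python
import numpy as np

def measure_point(P_design, positions, track_eye_points, render_and_capture, triangulate_world):
    """Repeat steps three through six at several observation positions and
    compare the centroid of the measurements with the design position of P."""
    measurements = []
    for pos in positions:                                 # step 7: move the camera
        track_eye_points(pos)                             # step 3: update the eye points
        uv_l, uv_r, calib = render_and_capture(P_design)  # steps 5-6: render and capture
        measurements.append(triangulate_world(uv_l, uv_r, calib))  # step 6: measure P
    centroid = np.mean(measurements, axis=0)              # centroid of all measurements
    deviation = np.linalg.norm(centroid - np.asarray(P_design))   # step 7: deviation
    return centroid, deviation
```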
In the second step, calibrating the camera optical centers requires computing the representation of the left and right camera optical center positions in virtual space coordinates, which provides the eye point positions on which rendering depends. In normal use of a VR display system, this eye point position is usually taken directly as the lens center position of the 3D glasses, measured by the motion capture system and passed to the rendering system for scene rendering. Because there is a deviation between the lens center position and the real eye position, the displayed scene is not accurate for the observer; this deviation is illustrated in Fig. 7.
When the observer is a human, this error is often ignored because humans lack precise perception of size; but when a camera is used for precise measurement, the error greatly affects the result, so the eye position tracked by the motion capture device cannot be used directly for measurement. To address this problem, this proposal designs a method for computing the real eye positions of the cameras, so that the positions of the camera optical centers in the rendering system can be obtained precisely. This involves computing transformation matrices between several coordinate systems, as follows:
The coordinate systems mainly involved in the calibration process are shown in Fig. 8 and include:
(1) the VR system physical world coordinate system COW, used to describe the positions of physical space points in the real world;
(2) the motion capture system coordinate system COT, used to describe the positions of the real-world physical anchor points tracked by the motion capture system; because the tracking results also supply the eye position information used for virtual scene rendering, coordinates in this system can likewise describe points in virtual space;
(3) the VR system virtual space coordinate system COV, used to describe the positions of virtual space points;
(4) the left camera coordinate system COCL, used to describe the camera measurement results; its origin is the left eye point position used for rendering, and its Z axis is the viewing direction of the left eye point;
(5) the right camera coordinate system COCR, used to describe the camera measurement results; its origin is the right eye point position used for rendering, and its Z axis is the viewing direction of the right eye point.
Precisely obtaining the eye point positions means computing how the coordinate origins and axis direction vectors of COCL and COCR are expressed in COV; mathematically, it means computing the coordinate transformations between COCL, COCR, and COV. Since COCL and COCR describe real physical space while COV describes virtual space, their relationship cannot be obtained directly, and other coordinate systems must be used as intermediaries. To simplify the problem, COW can first be defined as COT, i.e., COT is taken as the physical world coordinate system, so the transformation between COT and COW is known. Second, as noted above, the motion capture system measures eye positions in actual physical space and supplies the result to the rendering system, so the transformation between COT and COV is known. Finally, the mutual transformation between the two cameras COCL and COCR can be obtained through stereo camera calibration, so the transformation between COCL and COCR is known. In summary, the problem reduces to finding the transformation between COT and COCL.
Since both the motion capture system and the camera system can measure real-world objects, this patent uses measurements of the same object by the two systems to back-solve the transformation between COT and COCL.
For any $n$ points to be measured in physical space, let $M_T = \begin{bmatrix} m_T^1 & \cdots & m_T^n \end{bmatrix} \in \mathbb{R}^{3 \times n}$ denote the measured coordinates of the point group in COT, and $M_C = \begin{bmatrix} m_C^1 & \cdots & m_C^n \end{bmatrix} \in \mathbb{R}^{3 \times n}$ the measured coordinates of the same group in COCL. Let $s \in \mathbb{R}$ be the scale factor between COT and COCL, $R \in \mathbb{R}^{3 \times 3}$ the rotation matrix between the two coordinate systems, and $T \in \mathbb{R}^{3 \times 1}$ the translation between them. Then:

$$M_T = s\,R\,M_C + [\,T \;\cdots\; T\,]$$

Writing $m_C^j$ for the $j$-th column of $M_C$, the centroid of the point group is

$$\bar{m}_C = \frac{1}{n} \sum_{j=1}^{n} m_C^j,$$

and its average radius is

$$r_C = \frac{1}{n} \sum_{j=1}^{n} \left\| m_C^j - \bar{m}_C \right\|.$$

The same definitions applied to $M_T$ give $\bar{m}_T$ and $r_T$.

Let $P := \frac{1}{r_C}\begin{bmatrix} m_C^1 - \bar{m}_C & \cdots & m_C^n - \bar{m}_C \end{bmatrix}$ and $Q := \frac{1}{r_T}\begin{bmatrix} m_T^1 - \bar{m}_T & \cdots & m_T^n - \bar{m}_T \end{bmatrix}$; the objective function then simplifies to a problem in $R$ alone, namely the orthogonal Procrustes problem

$$R = \mathop{\arg\min}_{\Omega \in SO(3)} \left\| \Omega P - Q \right\|,$$

that is, $R$ is the value of $\Omega$ in the group of three-dimensional rotation matrices that minimizes $\|\Omega P - Q\|$. Solving it gives $R = U V^{\mathsf T}$, where $U$ and $V^{\mathsf T}$ are the two orthogonal matrices obtained from the singular value decomposition of $M = Q P^{\mathsf T}$. Since $(s, R, T)$ contains 7 unknown variables in total, the solution can be completed whenever $n \geq 3$, so the transformation between COT and COCL can be obtained and the real eye position computation for the cameras completed. Notation: $:=$ means "is defined as", i.e., a simple symbol stands for an expression; $\mathbb{R}^{a \times b}$ denotes a matrix with $a$ rows and $b$ columns whose entries are real numbers; $\|\cdot\|$ denotes the Euclidean distance; $SO(3)$ denotes the group of three-dimensional rotation matrices.
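A compact numerical sketch of this solve follows the equations above (centroid and average-radius normalization, then SVD of $QP^{\mathsf T}$). The determinant correction that keeps $R$ in $SO(3)$ is a standard safeguard added here; the patent text does not spell it out.

```python
import numpy as np

def solve_similarity(M_C, M_T):
    """Solve M_T = s * R @ M_C + T for the scale s, rotation R in SO(3), and
    translation T, given 3xN point sets measured in COCL (M_C) and COT (M_T)."""
    assert M_C.shape[0] == 3 and M_C.shape == M_T.shape and M_C.shape[1] >= 3
    # Centroids and average radii of the two point groups.
    c_C, c_T = M_C.mean(axis=1, keepdims=True), M_T.mean(axis=1, keepdims=True)
    r_C = np.mean(np.linalg.norm(M_C - c_C, axis=0))
    r_T = np.mean(np.linalg.norm(M_T - c_T, axis=0))
    # Normalized point sets P and Q; rotation via the orthogonal Procrustes solution.
    P, Q = (M_C - c_C) / r_C, (M_T - c_T) / r_T
    U, _, Vt = np.linalg.svd(Q @ P.T)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])  # keep det(R) = +1
    R = U @ D @ Vt
    s = r_T / r_C
    T = c_T - s * R @ c_C
    return s, R, T
```

With the COCL-to-COT transformation in hand, chaining COCL → COT → COV yields the eye point positions expressed in the rendering system's virtual space coordinates.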
Also in the third step, after the eye position computation is completed, the eye positions must be tracked in real time, so that correct eye position information can still be obtained after the camera is moved, without recalibration.
Since the entire binocular camera system is one integral unit and the relative positions of the two cameras do not change, the whole system can be treated as a rigid body, and all of its motion after the initial eye position calibration is rigid motion relative to the initially calibrated position. Based on this analysis, this proposal achieves real-time eye position tracking by adding positioning markers to the camera system. When the eye position calibration is completed, the motion capture system records the current 6-degree-of-freedom pose Po0; it then tracks the marker's 6-degree-of-freedom pose Pot in real time, computes the pose transformation between Pot and Po0, and applies this transformation to the initially calibrated eye positions to obtain accurate eye position information in real time.
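If the 6-degree-of-freedom poses Po0 and Pot are represented as 4×4 homogeneous matrices, the update described above is a single matrix composition. The sketch below assumes this representation and a column-vector convention; the helper name is illustrative.

```python
import numpy as np

def update_eye_point(Po0, Pot, eye0):
    """Apply the rigid motion between the initial marker pose Po0 and the
    current marker pose Pot (both 4x4 homogeneous matrices) to an initially
    calibrated eye point eye0 (3-vector), giving the current eye point."""
    delta = Pot @ np.linalg.inv(Po0)   # motion relative to the calibrated pose
    return (delta @ np.append(eye0, 1.0))[:3]
```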
In the seventh step, after all measurements are completed, the real-world coordinates $P_r^i$ ($i = 1, \dots, n$) of the virtual point P, obtained from $n$ measurements at different observation positions, must be compared with the virtual space coordinates $P_v$ set during modeling, so as to obtain the system's error when rendering the virtual point P. To obtain a more statistically meaningful result, we define $\bar{P}_r$ as the mean of all measurement results for point P, i.e., the centroid of the $P_r^i$, and use the Euclidean distance between $\bar{P}_r$ and $P_v$ as the criterion for judging the geometric simulation accuracy of point P.
Further, if multiple spatial points are measured, then after multiple sets of data have been collected, the root mean square error (RMSE) and the coefficient of determination (R-square) from statistical analysis can be borrowed as indicators of the whole system's ability to reproduce scale. Denoting the total number of collected test samples by $m$, and writing $\bar{P}_r^i$ for the measured centroid and $P_v^i$ for the design position of the $i$-th sample, the indicators follow their standard statistical definitions:

$$\mathrm{RMSE} = \sqrt{\frac{1}{m} \sum_{i=1}^{m} \left\| \bar{P}_r^i - P_v^i \right\|^2}$$

$$R\text{-square} = 1 - \frac{\sum_{i=1}^{m} \left\| \bar{P}_r^i - P_v^i \right\|^2}{\sum_{i=1}^{m} \left\| P_v^i - \frac{1}{m}\sum_{k=1}^{m} P_v^k \right\|^2}$$

The closer the RMSE value is to 0, and the closer the R-square value is to 1, the closer the geometric simulation is to reality.
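For completeness, a short sketch of these two indicators over the $m$ measured points follows, assuming the measurement centroids and the design coordinates are stacked as (m, 3) arrays in line with the formulas above.

```python
import numpy as np

def accuracy_indicators(P_meas, P_design):
    """RMSE and R-square over m points; P_meas holds the measurement centroids
    and P_design the corresponding design coordinates, both shaped (m, 3)."""
    err = np.linalg.norm(P_meas - P_design, axis=1)           # per-point error
    rmse = np.sqrt(np.mean(err ** 2))
    spread = np.linalg.norm(P_design - P_design.mean(axis=0), axis=1)
    r_square = 1.0 - np.sum(err ** 2) / np.sum(spread ** 2)   # coefficient of determination
    return rmse, r_square
```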
This method exploits the fact that a camera has a structure and function similar to those of the human eye, together with the camera's ability to compute physical dimensions quantitatively: cameras observe the simulation scene in place of human eyes, thereby measuring and determining the geometric simulation accuracy of the VR system.
The stereo camera measurement system and the motion capture system simultaneously measure the same group of real space points, from which the transformation between the stereo camera measurement system's coordinate system and the motion capture system's coordinate system is back-solved; combining this with the remaining known coordinate transformations yields an accurate calibration of the stereo camera measurement system's eye positions, and adding marker points to the stereo camera measurement system enables real-time updating of the calibrated eye positions.
The present invention seeks to protect a simulation scene measurement method, comprising: placing a binocular camera, imitating the human eyes, in front of 3D glasses; calibrating the optical center positions of the binocular camera; using the optical center positions as the eye point positions; rendering the VR system scene in real time according to the eye point positions to obtain a simulation scene; and measuring the simulation scene according to the calibration parameters of the binocular camera and the collected images. Compared with traditional observation by the human eye, this method relies on quantitative analysis and is more objective and accurate; the entire measurement process involves no subjective human judgment and can be automated; virtual objects can be measured independently, completely detached from any physical counterpart, which expands the range of application; and any point in three-dimensional space can be measured by this method, so the results are comprehensive.
It should be understood that the above specific embodiments of the present invention are intended only to illustrate or explain the principles of the present invention and do not limit it. Therefore, any modification, equivalent replacement, improvement, etc., made without departing from the spirit and scope of the present invention shall fall within its protection scope. Furthermore, the appended claims of the present invention are intended to cover all changes and modifications that fall within the scope and boundaries of the appended claims, or the equivalents of such scope and boundaries.