CN110414101A - A simulation scene measurement method, accuracy measurement method and system - Google Patents


Info

Publication number
CN110414101A
Authority
CN
China
Prior art keywords
scene
simulation scene
rendering
optical center
eye
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910637565.3A
Other languages
Chinese (zh)
Other versions
CN110414101B (en)
Inventor
吴程程
吕毅
许澍虹
薛阳
成天壮
武玉芬
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Commercial Aircraft Corp of China Ltd
Beijing Aeronautic Science and Technology Research Institute of COMAC
Original Assignee
Commercial Aircraft Corp of China Ltd
Beijing Aeronautic Science and Technology Research Institute of COMAC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Commercial Aircraft Corp of China Ltd, Beijing Aeronautic Science and Technology Research Institute of COMAC
Priority to CN201910637565.3A
Publication of CN110414101A
Application granted
Publication of CN110414101B
Status: Active
Anticipated expiration

Abstract

The invention discloses a simulated-scene measurement method comprising: placing a binocular camera, imitating the human eyes, in front of 3D glasses; calibrating the optical center position of the binocular camera; using the optical center position as the eye point position; rendering the VR system scene in real time according to the eye point position to obtain a simulated scene; and measuring the simulated scene according to the calibration parameters of the binocular camera and the captured images. Compared with traditional observation by the human eye, the method relies on quantitative analysis and is therefore more objective and more accurate; the whole measurement process involves no subjective human judgment and can be automated; virtual objects can be measured independently, completely detached from physical objects, which broadens the range of application; and any point in three-dimensional space can be measured with the method, so the results are comprehensive.

Description

A simulation scene measurement method, accuracy measurement method and system

Technical Field

The present invention relates to the field of simulation, and in particular to a simulation scene measurement method, an accuracy determination method, and a system.

Background Art

At present, the common inspection method in the simulation industry is to compare a physical object present in the system against the identical virtual model produced by simulating it, and thereby indirectly determine the geometric simulation accuracy of the system.

The existing technical solutions have the following disadvantages:

1) The operation is complicated and tedious. Existing techniques all compare an actual object with its virtual counterpart, so the measurement range is limited, and the geometric simulation accuracy of the virtual scene can only be determined indirectly;

2) The accuracy of the results is low. The virtual-real comparison results are all obtained by human observation and cannot be analyzed quantitatively.

Summary of the Invention

(1) Purpose of the Invention

The purpose of the present invention is to provide a simulation scene measurement method, an accuracy determination method, and a system, so as to solve the problems of existing virtual simulation inspection technology: complicated, tedious operation and low result accuracy.

(2) Technical Solutions

To solve the above problems, a first aspect of the present invention provides a simulation scene measurement method, comprising: placing a binocular camera, imitating the human eyes, in front of 3D glasses; calibrating the optical center position of the binocular camera and using the optical center position as the eye point position; rendering the VR system scene in real time according to the eye point position to obtain a simulated scene; and measuring the simulated scene according to the calibration parameters of the binocular camera and the captured images.

Further, the optical center position includes an initial optical center position and a real-time tracked optical center position.

Further, rendering the VR system scene in real time according to the eye point position to obtain the simulated scene specifically includes: presetting a virtual space point to be measured; and performing stereoscopic rendering of the virtual space point according to the eye point position, displaying a left-eye image and a right-eye image on the screen to form the simulated scene.

Further, measuring the simulated scene according to the calibration parameters of the binocular camera and the captured images specifically includes: capturing the left-eye image and the right-eye image correspondingly with the binocular camera through the 3D glasses; and computing the physical-world coordinates of the space point from the left-eye image, the right-eye image, and the calibration parameters of the cameras according to a stereo vision algorithm.

According to another aspect of the present invention, a method for determining simulation scene rendering accuracy is provided, comprising:

moving the position of the binocular camera;

executing the steps of the simulation scene measurement method according to any one of the above technical solutions at multiple positions, respectively, to obtain multiple measurement results; and

comparing the deviations between the positions of the multiple measurement results, and determining the rendering accuracy of the simulated scene from the deviations.

According to yet another aspect of the present invention, a simulation scene measurement system is provided, comprising:

a binocular camera for imitating the human eyes to capture a left-eye image and a right-eye image;

an optical center positioning module for calibrating the optical center position of the binocular camera and using the optical center position as the eye point position;

a scene rendering module for rendering the VR system scene according to the eye point position to obtain a simulated scene; and

a simulation scene measurement module for measuring the simulated scene according to the calibration parameters of the binocular camera and the captured images.

Further, the optical center position includes an initial optical center position and a real-time optical center position.

Further, the scene rendering module includes:

a space point simulation module for presetting the virtual space point to be measured; and

a stereo rendering module for performing stereoscopic rendering of the virtual space point according to the eye point position and displaying the left-eye image and the right-eye image on the screen to form the simulated scene.

Further, when the stereo rendering module performs stereoscopic rendering of the virtual space point according to the eye point position and displays the left-eye image and the right-eye image on the screen to form the simulated scene, the specific steps include:

capturing the left-eye image and the right-eye image correspondingly with the binocular camera through the 3D glasses; and

computing the physical-world coordinates of the space point from the left-eye image, the right-eye image, and the calibration parameters of the cameras according to a stereo vision algorithm.

According to yet another aspect of the present invention, a system for determining simulation scene rendering accuracy is provided, comprising:

a driving module for moving the position of the binocular camera;

a simulation scene measurement module for executing the steps of the simulation scene measurement method according to any one of the above solutions to obtain multiple measurement results; and

a comparison module for comparing the deviations between the positions of the multiple measurement results and determining the rendering accuracy of the simulated scene from the deviations.

(3) Beneficial Effects

The above technical solutions of the present invention have the following beneficial technical effects:

(1) Compared with traditional observation by the human eye, this method relies on quantitative analysis and is therefore more objective and more accurate;

(2) The entire measurement process involves no subjective human judgment and can be automated;

(3) Virtual objects can be measured independently, completely detached from physical objects, which broadens the range of application;

(4) Any point in three-dimensional space can be measured with this method, so the results are comprehensive.

Brief Description of the Drawings

FIG. 1 is a flowchart of the simulation scene measurement method according to the first embodiment of the present invention;

FIG. 2 is a flowchart of obtaining the simulated scene by real-time rendering according to the first embodiment of the present invention;

FIG. 3 is a flowchart of the simulation scene measurement according to the first embodiment of the present invention;

FIG. 4 is a flowchart of the method for determining simulation scene rendering accuracy according to another aspect of the first embodiment of the present invention;

FIG. 5 is a schematic diagram of the simulation scene measurement method according to an optional embodiment of the present invention;

FIG. 6 is a flowchart of the simulation scene measurement method according to an optional embodiment of the present invention;

FIG. 7 is a schematic diagram of the deviation between a general measurement approach and the true eye position;

FIG. 8 is a schematic diagram of the transformation matrices between coordinate systems according to an optional embodiment of the present invention.

Detailed Description of the Embodiments

To make the objectives, technical solutions, and advantages of the present invention clearer, the present invention is described in further detail below with reference to specific embodiments and the accompanying drawings. It should be understood that these descriptions are merely exemplary and are not intended to limit the scope of the invention. In addition, descriptions of well-known structures and techniques are omitted in the following so as not to unnecessarily obscure the concepts of the present invention.

Obviously, the described embodiments are some, but not all, of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.

In addition, the technical features involved in the different embodiments of the present invention described below may be combined with one another as long as they do not conflict.

As shown in FIG. 1, a first aspect of the embodiments of the present invention provides a simulation scene measurement method, comprising:

S1: placing a binocular camera, imitating the human eyes, in front of 3D glasses;

S2: calibrating the optical center positions of the binocular camera and using the optical center positions as the eye point positions. Optionally, the optical center positions include initial optical center positions and real-time optical center positions. Specifically, the initial optical center positions are calibrated; the calibration covers the three-dimensional coordinates of the two camera optical centers in a unified world coordinate system and yields the relative positional relationship between the initial optical centers and the 3D glasses. A tracking system then acquires the position of the 3D glasses in real time, and the real-time optical center positions of the cameras are solved from the relative relationship between the initial optical centers and the 3D glasses. The tracking system may be the optical tracking system from ART, or alternatively OptiTrack, Vicon, or the domestic Qingtong system, among others. To obtain the real-time optical center coordinates, this embodiment has the camera system whose initial position is to be calibrated and the tracking system jointly measure the same set of physical space points; this requires a stereo-vision calibration of the camera system itself, covering the intrinsic and extrinsic parameters of the cameras. The above method solves the problem that the tracking system can only obtain the position of the 3D glasses and not the true optical centers of the cameras, and thus eliminates the deviation that would otherwise arise when setting the eye points; accurate real-time optical center positions can be obtained with this method.

S3: rendering the VR system scene in real time according to the eye point positions to obtain the simulated scene. Optionally, as shown in FIG. 2, this step specifically includes: S31, presetting a virtual space point to be measured; S32, performing stereoscopic rendering of the virtual space point according to the eye point positions and displaying a left-eye image and a right-eye image on the screen to form the simulated scene.

S4: measuring the simulated scene according to the calibration parameters of the binocular camera and the captured images. Optionally, as shown in FIG. 3, this step specifically includes: S41, using the binocular camera to capture the left-eye image and the right-eye image correspondingly through the 3D glasses; S42, computing the physical-world coordinates of the space point from the left-eye image, the right-eye image, and the calibration parameters of the cameras according to a stereo vision algorithm. "Correspondingly" here means that the camera standing in for the left eye captures the on-screen image observed by the left eye, and the camera standing in for the right eye captures the on-screen image observed by the right eye.
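For concreteness, a minimal sketch of the stereo triangulation in step S42 follows, using OpenCV's cv2.triangulatePoints. This is an illustration under stated assumptions, not the patent's implementation: the projection matrices and pixel coordinates below are placeholders, whereas in the method above they would come from the stereo calibration and from locating the rendered point P in the captured left and right images.

```python
import numpy as np
import cv2

def triangulate_point(P_left, P_right, uv_left, uv_right):
    """Recover the 3D world coordinates of one point from its pixel
    positions in the left/right images and the two cameras' 3x4
    projection matrices (intrinsics x extrinsics from calibration)."""
    pts_l = np.asarray(uv_left, dtype=np.float64).reshape(2, 1)
    pts_r = np.asarray(uv_right, dtype=np.float64).reshape(2, 1)
    X_h = cv2.triangulatePoints(P_left, P_right, pts_l, pts_r)  # 4x1 homogeneous
    return (X_h[:3] / X_h[3]).ravel()                           # (x, y, z)

# Placeholder calibration: 1000 px focal length, 65 mm baseline.
K = np.array([[1000.0, 0.0, 640.0], [0.0, 1000.0, 480.0], [0.0, 0.0, 1.0]])
P_L = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P_R = K @ np.hstack([np.eye(3), np.array([[-65.0], [0.0], [0.0]])])
print(triangulate_point(P_L, P_R, (512.0, 384.0), (498.0, 384.0)))
```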

This method requires no human judgment; it relies on quantitative analysis and is therefore more objective and more accurate. The entire measurement process involves no subjective human judgment and can be automated. Virtual objects can be measured independently, completely detached from physical objects, which broadens the range of application. Any point in three-dimensional space can be measured with this method, so the results are comprehensive.

As shown in FIG. 4, another aspect of the embodiments of the present invention provides a method for determining simulation scene rendering accuracy, comprising:

S′1: moving the position of the binocular camera;

S′2: executing the steps of the simulation scene measurement method of the above embodiment at multiple positions, respectively, to obtain multiple measurement results;

S′3: comparing the deviations between the positions of the multiple measurement results and determining the rendering accuracy of the simulated scene.

In yet another aspect of the embodiments of the present invention, a simulation scene measurement system is provided, comprising:

a binocular camera for imitating the human eyes to capture a left-eye image and a right-eye image; optionally, the binocular camera is a fixed-focus digital camera;

an optical center positioning module for calibrating the optical center position of the binocular camera and using the optical center position as the eye point position; optionally, the optical center position includes an initial optical center position and a real-time tracked optical center position;

a scene rendering module for rendering the VR system scene according to the eye point position to obtain a simulated scene; optionally, the scene rendering module includes a space point simulation module for presetting the virtual space point to be measured, and a stereo rendering module for performing stereoscopic rendering of the virtual space point according to the eye point position and displaying the left-eye image and the right-eye image on the screen to form the simulated scene. Optionally, when the stereo rendering module performs this stereoscopic rendering, the specific steps of forming the simulated scene include: capturing the left-eye image and the right-eye image correspondingly with the binocular camera through the 3D glasses; and computing the physical-world coordinates of the space point from the left-eye image, the right-eye image, and the calibration parameters of the cameras according to a stereo vision algorithm; and

a simulation scene measurement module for measuring the simulated scene according to the calibration parameters of the binocular camera and the captured images.

In yet another aspect of the embodiments of the present invention, a system for determining simulation scene rendering accuracy is provided, comprising:

a driving module for moving the position of the binocular camera;

a simulation scene measurement module for executing the steps of the simulation scene measurement method of the above embodiment to obtain multiple measurement results; and

a comparison module for comparing the deviations between the positions of the multiple measurement results and determining the rendering accuracy of the simulated scene.

As shown in FIG. 5, in an optional implementation of the present invention, in order to measure the simulated scene in the real world, a pair of calibrated fixed-focus digital cameras is placed in the simulated scene to observe the scene in place of human eyes, and 3D glasses are placed in front of the two cameras (the two cameras may also be replaced by a binocular camera), ensuring that each camera standing in for one of the left and right eyes can only observe the image rendered for its own eye point, so that a parallax pair of the simulated scene is obtained. The optical center positions of the cameras are then initially calibrated and tracked in real time, and the tracking result is defined as the eye point position used by the VR system for rendering; this ensures that the rendered simulated scene matches the observation position and avoids measurement errors caused by eye point deviations. The VR system is then started and the scene is rendered in real time according to the tracked eye positions. A stereo vision algorithm then measures the simulated scene using the two captured images with parallax and the calibration parameters of the camera system. The camera system is moved to change the observation position, and the above measurement process is repeated at the new eye point positions to obtain measurement results at different observation positions. Finally, the average of the repeated measurements of the simulated scene is compared with the original design data, thereby measuring the geometric rendering accuracy of the scene.

As shown in FIG. 6, measuring the geometric dimensions of the simulated scene fundamentally amounts to measuring the position of an arbitrary virtual space point in the virtual environment. The specific steps are as follows:

In the first step, the calibrated stereo camera measurement system is placed at an arbitrary position in the VR system, and the 3D glasses are placed in front of the cameras. (The stereo camera measurement system fixes the two calibrated fixed-focus cameras on a rigid bracket that also holds a track marker for tracking and positioning; the whole system acts as a single rigid body, so no relative displacement occurs between its components.)

In the second step, the initial positions of the camera optical centers are calibrated, and the optical center positions are taken as the initial positions of the rendering eye points.

In the third step, real-time tracking of the camera optical center positions begins, based on the calibration result of the initial optical center positions.

In the fourth step, the virtual space point P to be measured is set.

In the fifth step, the camera spatial positions are defined as the eye point positions, the virtual space point P is stereoscopically rendered according to the eye point positions, and two images, IL and IR, are displayed on the screen S.

In the sixth step, the cameras, aided by the 3D glasses, capture the left-eye and right-eye images respectively, and a stereo vision algorithm computes the physical-world coordinates of point P from the two captured images with parallax and the calibration parameters of the camera system.

In the seventh step, the camera is moved to change the observation eye position, and the third through sixth steps above are repeated to obtain multiple sets of measurement results at different observation positions. Finally, the deviation between the virtual-space design position of point P and the centroid of the multiple sets of actually measured positions is compared to determine the geometric rendering accuracy of point P.

In the second step, calibrating the camera optical centers requires computing the representation of the left and right optical center positions in virtual space coordinates, which provides the eye point positions on which rendering depends. In normal use of a VR display system, this eye point position is usually taken directly as the lens center position of the 3D glasses, measured by the motion capture system and passed to the rendering system for scene rendering. Because there is a deviation between the lens center position and the true eye position, the displayed scene is not accurate for the observer; the deviation is illustrated in FIG. 7.

When the observer is a human, this error is often ignored because humans lack precise perception of size; but when a camera is used for precise measurement, this error greatly affects the results, so the eye position tracked by the motion capture device cannot be used directly for measurement. To address this problem, this proposal designs a true eye position calculation method for the cameras, so as to obtain the positions of the camera optical centers in the rendering system precisely. This involves computing transformation matrices between several coordinate systems, as follows:

The coordinate systems mainly involved in the calibration process are shown in FIG. 8 and include:

(1) the VR system physical world coordinate system COW, used to describe the positions of real-world physical space points;

(2) the motion capture system coordinate system COT, used to describe the positions of the real-world physical locating points tracked by the motion capture system; since the tracking results of the tracking system can also provide eye position information for virtual scene rendering, coordinates in this system can also describe points in virtual space;

(3) the VR system virtual space coordinate system COV, used to describe the positions of virtual space points;

(4) the left camera coordinate system COCL, used to describe the camera measurement results; its origin is the left eye point position during rendering, and its Z axis is the viewing direction of the left eye point;

(5) the right camera coordinate system COCR, used to describe the camera measurement results; its origin is the right eye point position during rendering, and its Z axis is the viewing direction of the right eye point.

Obtaining the eye point positions precisely means computing how the coordinate origins and axis direction vectors of COCL and COCR are expressed in COV; mathematically, it means computing the coordinate transformations between COCL/COCR and COV. Since COCL and COCR describe real physical space while COV describes virtual space, the relationship between them cannot be obtained directly, and other coordinate systems must be used in the computation. To simplify the problem, COW can first be defined to coincide with COT, i.e., COT is taken as the physical world coordinate system, so the transformation between COT and COW is known. Second, as described above, the motion capture system measures eye positions in actual physical space and supplies the result to the rendering system for rendering, so the transformation between COT and COV is known. Finally, the mutual transformation between the two cameras COCL and COCR can be obtained through stereo camera calibration, so the transformation between COCL and COCR is known. In summary, the problem reduces to finding the transformation between COT and COCL.
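As a sketch of this chaining, once the COCL-to-COT transform has been recovered by the joint measurement described next and the COT-to-COV transform is known from the motion-capture/rendering pipeline, the eye point in COV follows by composing 4x4 homogeneous transforms. The matrix values below are placeholders, not calibration data, and unit scale is assumed for simplicity:

```python
import numpy as np

def homogeneous(R, t):
    """Pack a 3x3 rotation and a 3-vector translation into a 4x4 transform."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Placeholder transforms: COCL -> COT (from the alignment described below)
# and COT -> COV (known, since motion-capture output drives the renderer).
T_T_from_CL = homogeneous(np.eye(3), np.array([0.10, 0.05, 0.00]))
T_V_from_T = homogeneous(np.eye(3), np.array([1.00, 0.00, 2.00]))

# Compose: COCL -> COV. The left optical center is the COCL origin,
# so its position in COV is the translation column of the product.
T_V_from_CL = T_V_from_T @ T_T_from_CL
eye_left_in_COV = T_V_from_CL[:3, 3]
```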

Since both the motion capture system and the camera system can measure real-world objects, this patent derives the transformation between COT and COCL by having the two systems measure the same object.

For any $n$ points to be measured in physical space, let $M_T \in \mathbb{R}^{3\times n}$ be the coordinates of the point group as measured in COT and $M_C \in \mathbb{R}^{3\times n}$ its coordinates as measured in COCL. Let $s \in \mathbb{R}$ be the scale factor between COT and COCL, $R \in \mathbb{R}^{3\times 3}$ the rotation matrix between the two coordinate systems, and $T \in \mathbb{R}^{3\times 1}$ the translation matrix between the two coordinate systems. Then:

$$M_T = s \cdot R \cdot M_C + [T \;\cdots\; T]$$

Now let $m_C^{(j)}$ denote the $j$-th column of $M_C$. The centroid of the point group is:

$$\bar{m}_C = \frac{1}{n}\sum_{j=1}^{n} m_C^{(j)}$$

The average radius is:

$$r_C = \frac{1}{n}\sum_{j=1}^{n} \left\| m_C^{(j)} - \bar{m}_C \right\|$$

The same definitions apply to $M_T$, giving $\bar{m}_T$ and $r_T$.

Let $P := \frac{1}{r_C}\left(M_C - [\bar{m}_C \;\cdots\; \bar{m}_C]\right)$ and $Q := \frac{1}{r_T}\left(M_T - [\bar{m}_T \;\cdots\; \bar{m}_T]\right)$. The objective function then simplifies, and the original problem becomes an orthogonal Procrustes problem, i.e., solving:

$$R = \underset{\Omega \in SO(3)}{\arg\min} \left\| \Omega P - Q \right\|$$

Solving yields $R = UV^T$, where $U$ and $V^T$ are the two orthogonal matrices obtained from the singular value decomposition of $M = QP^T$. Since $(s, R, T)$ contains seven unknown variables in total, the solution can be completed whenever $n \geq 3$; the transformation between COT and COCL is thereby obtained and the true eye position calculation for the cameras completed. Here $:=$ means "defined as", i.e., a simple symbol stands for an expression; $\mathbb{R}^{a \times b}$ denotes a matrix with $a$ rows and $b$ columns whose elements are real numbers; $\|\cdot\|$ denotes the Euclidean distance; and $SO(3)$ denotes the group of three-dimensional rotation matrices.

That is, $R$ is the value of $\Omega$ that minimizes the expression $\|\Omega P - Q\|$, with $\Omega$ ranging over the group of three-dimensional rotation matrices.
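The alignment above can be sketched in a few lines of numpy. This is a minimal illustration of the scaled orthogonal Procrustes solution described in the preceding paragraphs, not the patent's implementation; the synthetic points at the end stand in for the jointly measured calibration targets:

```python
import numpy as np

def align_similarity(M_C, M_T):
    """Solve M_T ~= s * R * M_C + [T ... T] for the scale s, rotation R,
    and translation T relating COCL to COT, given >= 3 corresponding
    points measured in both systems (columns of the 3xN arrays)."""
    m_C = M_C.mean(axis=1, keepdims=True)            # centroids
    m_T = M_T.mean(axis=1, keepdims=True)
    r_C = np.mean(np.linalg.norm(M_C - m_C, axis=0)) # average radii
    r_T = np.mean(np.linalg.norm(M_T - m_T, axis=0))
    P = (M_C - m_C) / r_C                            # normalized point sets
    Q = (M_T - m_T) / r_T
    U, _, Vt = np.linalg.svd(Q @ P.T)                # SVD of M = Q P^T
    D = np.diag([1.0, 1.0, np.linalg.det(U @ Vt)])   # guard against reflections
    R = U @ D @ Vt
    s = r_T / r_C
    T = m_T - s * (R @ m_C)
    return s, R, T

# Synthetic check: recover a known similarity transform.
rng = np.random.default_rng(0)
pts = rng.uniform(-1.0, 1.0, size=(3, 10))
R_true, _ = np.linalg.qr(rng.normal(size=(3, 3)))
R_true *= np.linalg.det(R_true)                      # force det(R) = +1
s, R, T = align_similarity(pts, 2.0 * (R_true @ pts) + np.array([[0.3], [0.1], [0.5]]))
```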

Also in the third step, after the eye position calculation is completed, the eye positions need to be tracked in real time, so that correct eye position information can still be obtained after the camera is moved, without re-calibration.

Since the entire binocular camera system is one integral system and the relative positions of the two cameras do not change, the whole system can be treated as a rigid body, and any motion after the initial eye position calibration is a rigid motion relative to the initially calibrated position. Based on this analysis, this proposal implements real-time eye position tracking by adding locating points to the camera system. When the eye position calibration is completed, the motion capture system records the current six-degree-of-freedom pose Po0; it then tracks the six-degree-of-freedom pose Pot of the marker in real time, computes the transformation between Pot and Po0, and applies this transformation to the initially calibrated eye positions, so that accurate eye position information is obtained in real time.
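A minimal sketch of this update follows, assuming the motion capture SDK returns the marker poses Po0 (at calibration) and Pot (current frame) as 4x4 homogeneous matrices; the rigid motion since calibration is then Pot composed with the inverse of Po0, applied to the calibrated optical centers:

```python
import numpy as np

def track_eye_points(Po0, Pot, eyes0):
    """Propagate the initially calibrated eye positions by the rigid
    motion of the tracked marker.
    Po0, Pot : 4x4 marker poses at calibration time / current frame.
    eyes0    : 3xK calibrated optical-center positions (world frame).
    Returns the current 3xK eye positions."""
    delta = Pot @ np.linalg.inv(Po0)   # rigid motion since calibration
    eyes_h = np.vstack([eyes0, np.ones((1, eyes0.shape[1]))])
    return (delta @ eyes_h)[:3]
```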

In the seventh step, after all measurements are completed, the real-world coordinates $P_{ri}$ $(1 \le i \le n)$ of the virtual point $P$, obtained from $n$ measurements at different observation positions, are compared with the virtual-space coordinates $P_v$ set during modeling, yielding the system's error in rendering the virtual point $P$. To obtain a more statistically meaningful result, we define $\bar{P}_r$ as the mean of all measurement results for point $P$, i.e., the centroid of the $P_{ri}$, and use the Euclidean distance between $\bar{P}_r$ and $P_v$ as the criterion for judging the geometric simulation accuracy of point $P$.

Further, if multiple space points are measured, the root mean square error (RMSE) and the coefficient of determination (R-square) from statistical analysis can be borrowed, after multiple sets of data have been collected, as indices of the whole system's ability to reproduce scale. Let the total number of collected test samples be $m$; the indices are then computed as:

$$\mathrm{RMSE} = \sqrt{\frac{1}{m}\sum_{i=1}^{m}\left\|\hat{P}_i - P_i\right\|^{2}}, \qquad R^2 = 1 - \frac{\sum_{i=1}^{m}\left\|\hat{P}_i - P_i\right\|^{2}}{\sum_{i=1}^{m}\left\|P_i - \bar{P}\right\|^{2}}$$

where $\hat{P}_i$ is the measured position of the $i$-th sample, $P_i$ its designed position, and $\bar{P}$ the mean of the designed positions.

The closer the RMSE value is to 0 and the closer the R-square value is to 1, the closer the geometric simulation is to reality.
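A short sketch of these two indices, computed in their standard statistical form over m measured/designed 3D sample pairs; the array shapes are an assumption for illustration:

```python
import numpy as np

def fidelity_metrics(measured, designed):
    """RMSE and R-square over m test samples (rows of the m x 3 arrays).
    RMSE near 0 and R-square near 1 indicate a faithful geometric simulation."""
    err2 = np.sum((measured - designed) ** 2, axis=1)              # squared errors
    rmse = np.sqrt(err2.mean())
    tot2 = np.sum((designed - designed.mean(axis=0)) ** 2, axis=1) # total variation
    r_square = 1.0 - err2.sum() / tot2.sum()
    return rmse, r_square
```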

This method exploits the fact that a camera has a structure and function similar to the human eye while also being able to quantitatively compute physical dimensions; it uses cameras instead of human eyes to observe the simulated scene, thereby measuring and determining the geometric simulation accuracy of the VR system.

The stereo camera measurement system and the motion capture system measure the same set of real space points simultaneously, from which the transformation between the coordinate system of the stereo camera measurement system and that of the motion capture system is derived; combining this with the remaining known coordinate transformations yields an accurate calibration of the eye positions of the stereo camera measurement system, and adding marker points to the stereo camera measurement system enables real-time updating of the calibrated eye positions.

The present invention seeks to protect a simulation scene measurement method comprising: placing a binocular camera, imitating the human eyes, in front of 3D glasses; calibrating the optical center position of the binocular camera; using the optical center position as the eye point position; rendering the VR system scene in real time according to the eye point position to obtain the simulated scene; and measuring the simulated scene according to the calibration parameters of the binocular camera and the captured images. Compared with traditional observation by the human eye, this method relies on quantitative analysis and is therefore more objective and more accurate; the entire measurement process involves no subjective human judgment and can be automated; virtual objects can be measured independently, completely detached from physical objects, which broadens the range of application; and any point in three-dimensional space can be measured with this method, so the results are comprehensive.

It should be understood that the above specific embodiments of the present invention are merely intended to illustrate or explain the principles of the present invention and do not limit it. Therefore, any modifications, equivalent substitutions, improvements, and the like made without departing from the spirit and scope of the present invention shall be included within the protection scope of the present invention. Furthermore, the appended claims of the present invention are intended to cover all changes and modifications that fall within the scope and boundaries of the appended claims, or the equivalents of such scope and boundaries.

Claims (10)

1. A simulation scene measurement method, characterized by comprising: placing a binocular camera, imitating the human eyes, in front of 3D glasses; calibrating the optical center position of the binocular camera and using the optical center position as the eye point position; rendering the VR system scene in real time according to the eye point position to obtain a simulated scene; and measuring the simulated scene according to the calibration parameters of the binocular camera and the captured images.

2. The simulation scene measurement method according to claim 1, characterized in that the optical center position includes an initial optical center position and a real-time optical center position.

3. The simulation scene measurement method according to claim 1, characterized in that rendering the VR system scene in real time according to the eye point position to obtain the simulated scene specifically comprises: presetting a virtual space point to be measured; and performing stereoscopic rendering of the virtual space point according to the eye point position and displaying a left-eye image and a right-eye image on the screen to form the simulated scene.

4. The simulation scene measurement method according to claim 1, characterized in that measuring the simulated scene according to the calibration parameters of the binocular camera and the captured images specifically comprises: capturing the left-eye image and the right-eye image correspondingly with the binocular camera through the 3D glasses; and computing the physical-world coordinates of the space point from the left-eye image, the right-eye image, and the calibration parameters of the cameras according to a stereo vision algorithm.

5. A method for determining simulation scene rendering accuracy, characterized by comprising: moving the position of the binocular camera; executing the steps of the simulation scene measurement method according to any one of claims 1-4 at multiple positions, respectively, to obtain multiple measurement results; and comparing the deviations between the positions of the multiple measurement results and determining the rendering accuracy of the simulated scene from the deviations.

6. A simulation scene measurement system, characterized by comprising: a binocular camera for imitating the human eyes to capture a left-eye image and a right-eye image; an optical center positioning module for calibrating the optical center position of the binocular camera and using the optical center position as the eye point position; a scene rendering module for rendering the VR system scene according to the eye point position to obtain a simulated scene; and a simulation scene measurement module for measuring the simulated scene according to the calibration parameters of the binocular camera and the captured images.

7. The simulation scene measurement system according to claim 6, characterized in that the optical center position includes an initial optical center position and a real-time optical center position.

8. The simulation scene measurement system according to claim 6, characterized in that the scene rendering module comprises: a space point simulation module for presetting the virtual space point to be measured; and a stereo rendering module for performing stereoscopic rendering of the virtual space point according to the eye point position and displaying the left-eye image and the right-eye image on the screen to form the simulated scene.

9. The simulation scene measurement system according to claim 6, characterized in that when the stereo rendering module performs stereoscopic rendering of the virtual space point according to the eye point position and displays the left-eye image and the right-eye image on the screen to form the simulated scene, the specific steps comprise: capturing the left-eye image and the right-eye image correspondingly with the binocular camera through the 3D glasses; and computing the physical-world coordinates of the space point from the left-eye image, the right-eye image, and the calibration parameters of the cameras according to a stereo vision algorithm.

10. A system for determining simulation scene rendering accuracy, characterized by comprising: a driving module for moving the position of the binocular camera; a simulation scene measurement module for executing the steps of the simulation scene measurement method according to any one of claims 1-4 to obtain multiple measurement results; and a comparison module for comparing the deviations between the positions of the multiple measurement results and determining the rendering accuracy of the simulated scene from the deviations.
CN201910637565.3A · 2019-07-15 · 2019-07-15 · A simulation scene measurement method, accuracy measurement method and system · Active · CN110414101B (en)

Priority Applications (1)

Application Number · Priority Date · Filing Date · Title
CN201910637565.3A · 2019-07-15 · 2019-07-15 · A simulation scene measurement method, accuracy measurement method and system

Applications Claiming Priority (1)

Application Number · Priority Date · Filing Date · Title
CN201910637565.3A · 2019-07-15 · 2019-07-15 · A simulation scene measurement method, accuracy measurement method and system

Publications (2)

Publication Number · Publication Date
CN110414101A · 2019-11-05
CN110414101B (en) · 2023-08-04

Family

ID=68361483

Family Applications (1)

Application Number · Priority Date · Filing Date · Title
CN201910637565.3A (Active, granted as CN110414101B (en)) · 2019-07-15 · 2019-07-15 · A simulation scene measurement method, accuracy measurement method and system

Country Status (1)

Country · Link
CN · CN110414101B (en)


Also Published As

Publication Number · Publication Date
CN110414101B (en) · 2023-08-04


Legal Events

Code · Title
PB01 · Publication
SE01 · Entry into force of request for substantive examination
GR01 · Patent grant
