CN116977419B - Method, system and storage medium for determining user posture of smart glasses - Google Patents

Method, system and storage medium for determining user posture of smart glasses

Info

Publication number
CN116977419B
Authority
CN
China
Prior art keywords
parameter matrix
smart glasses
extrinsic parameter
determining
coordinate system
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202310843226.7A
Other languages
Chinese (zh)
Other versions
CN116977419A (en)
Inventor
申志兴
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Goertek Techology Co Ltd
Original Assignee
Goertek Techology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Goertek Techology Co Ltd
Priority to CN202310843226.7A
Publication of CN116977419A
Application granted
Publication of CN116977419B
Active (current legal status)
Anticipated expiration

Abstract

The disclosure relates to a method, a system and a storage medium for determining the user pose of smart glasses, and belongs to the technical field of smart glasses. The method comprises: fixing the smart glasses to the end of a motion device through a fixing device, such that the positional relationship between the end of the motion device and the smart glasses is equivalent to the positional relationship between the user's vestibular center and the smart glasses when the user wears the smart glasses; controlling the motion device to move so as to drive the smart glasses to a plurality of target positions; when the smart glasses reach a target position, acquiring position data of the end of the motion device and controlling a camera of the smart glasses to photograph a calibration chart at a fixed position to obtain a target image; determining pose compensation parameters of the smart glasses based on the target images and the position data corresponding to the target images; and determining the pose of the head of the user of the smart glasses based on the pose compensation parameters.

Description

Method, system and storage medium for determining the user pose of smart glasses
Technical Field
The present disclosure relates to smart electronic devices, and in particular, to a method, a system, and a storage medium for determining the user pose of smart glasses.
Background
Smart glasses have become a common part of daily life and play an increasingly important role in it. Current smart glasses such as VR and AR glasses locate the head by fusing visual positioning with inertial navigation: an inertial measurement unit inside the glasses detects the pose change of the glasses, and the virtual picture output by the glasses is rendered according to that pose change.
However, the user perceives head movement through the vestibular organ of the head, whereas the change of the virtual scene in the smart glasses is based on the movement of the inertial measurement unit in the glasses. Because the inertial measurement unit and the vestibular organ are located at different positions, the motion output by the smart glasses deviates from the motion perceived by the user's vestibular system when the head rotates, so the smart glasses cannot accurately identify the pose of the user's head.
Disclosure of Invention
The method, system and storage medium for determining the user pose of smart glasses provided by the disclosure can solve the problem that smart glasses cannot accurately identify the pose of the user's head.
According to a first aspect of the disclosure, a method for determining the user pose of smart glasses is provided. The method comprises: fixing the smart glasses to the end of a motion device through a fixing device, such that the positional relationship between the end of the motion device and the smart glasses is equivalent to the positional relationship between the user's vestibular center and the smart glasses when the user wears the smart glasses; controlling the motion device to move so as to drive the smart glasses to move to a plurality of target positions; when the smart glasses move to a target position, acquiring position data of the end of the motion device and controlling a camera of the smart glasses to photograph a calibration chart at a fixed position to obtain a target image; determining pose compensation parameters of the smart glasses based on the target image and the position data corresponding to the target image; and determining the pose of the head of the user of the smart glasses based on the pose compensation parameters.
Determining the pose compensation parameters of the smart glasses based on the target images and the position data corresponding to the target images comprises: determining first extrinsic matrices for the plurality of target positions based on the target images, the first extrinsic matrix being used to convert the coordinate system of the calibration chart into the coordinate system of the camera; determining second extrinsic matrices for the plurality of target positions according to the position data corresponding to the target images, the second extrinsic matrix being used to convert the coordinate system of the end of the motion device into the coordinate system of the head end of the motion device; determining a third extrinsic matrix according to the first extrinsic matrices and the second extrinsic matrices of the plurality of target positions; and determining the pose compensation parameters of the smart glasses according to the third extrinsic matrix.
Optionally, determining the third extrinsic matrix according to the first extrinsic matrices of the plurality of target positions and the second extrinsic matrices of the plurality of target positions, the third extrinsic matrix being used to convert the coordinate system of the camera into the coordinate system of the end of the motion device, comprises: determining fourth extrinsic matrices for the plurality of target positions according to the first extrinsic matrices and the second extrinsic matrices of the plurality of target positions, the fourth extrinsic matrix being used to convert the coordinate system of the calibration chart into the coordinate system of the head end of the motion device; and determining the third extrinsic matrix according to the fourth extrinsic matrices of the plurality of target positions.
Optionally, determining the third extrinsic matrix according to the fourth extrinsic matrices of the plurality of target positions comprises: constructing a system of equations with the equality of the fourth extrinsic matrices corresponding to the plurality of target positions as the constraint condition; and solving the system of equations to determine the third extrinsic matrix.
Optionally, the smart glasses are provided with an inertial measurement unit, and determining the pose compensation parameters of the smart glasses according to the third extrinsic matrix comprises: obtaining a fifth extrinsic matrix, the fifth extrinsic matrix being used to convert the coordinate system of the inertial measurement unit into the coordinate system of the camera; and determining the pose compensation parameters according to the third extrinsic matrix and the fifth extrinsic matrix, the pose compensation parameters being the product of the third extrinsic matrix and the fifth extrinsic matrix.
Optionally, determining the pose of the head of the user of the smart glasses based on the pose compensation parameters comprises: detecting the pose of the smart glasses; and determining the pose of the head of the user of the smart glasses according to the pose of the smart glasses and the pose compensation parameters.
Optionally, after the pose of the head of the smart glasses user is determined based on the pose compensation parameters, the method further comprises rendering a picture displayed by the smart glasses according to the pose of the head of the user.
According to a second aspect of the disclosure, a system for determining the user pose of smart glasses is provided. The system comprises: a fixing device, configured to fix the smart glasses to the end of a motion device such that the positional relationship between the end of the motion device and the smart glasses is equivalent to the positional relationship between the user's vestibular center and the smart glasses when the user wears the smart glasses; the motion device, configured to drive the smart glasses to move to a plurality of target positions; a control module, configured to control the motion device to move so as to drive the smart glasses to move to the plurality of target positions; an acquisition module, configured to acquire position data of the end of the motion device and control a camera of the smart glasses to photograph a calibration chart at a fixed position to obtain a target image when the smart glasses move to a target position; a first determining module, configured to determine pose compensation parameters of the smart glasses based on the target image and the position data corresponding to the target image; and a second determining module, configured to determine the pose of the head of the user of the smart glasses based on the pose compensation parameters.
Optionally, the movement device comprises a mechanical arm, and the end of the movement device is the end of the mechanical arm.
According to a third aspect of the present disclosure there is provided a storage medium having stored thereon computer instructions which when executed by a processor implement the steps of the method of any of the first aspects of the present disclosure.
One benefit of the disclosed embodiments is that, by fixing the smart glasses at the end of the motion device, the position of the end of the motion device is made equivalent to the position of the user's vestibular center. The motion device is controlled to move, driving the smart glasses to photograph the calibration chart at different positions to obtain target images, and the pose compensation parameters of the smart glasses are determined based on the target images and the position data corresponding to the target images. In this way, the end of the motion device can simulate the user's vestibular center, and the compensation parameters from the smart glasses to the vestibular center can be calculated, so that the correct pose of the user's head can be obtained.
Other features of the disclosed embodiments and their advantages will become apparent from the following detailed description of exemplary embodiments of the disclosure, which proceeds with reference to the accompanying drawings.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments of the disclosure and together with the description, serve to explain the principles of the embodiments of the disclosure.
FIG. 1 illustrates a flow chart of a user pose determination method for smart glasses of an embodiment of the present disclosure;
Fig. 2 is a schematic diagram showing an example of a user pose determination method of smart glasses according to an embodiment of the present disclosure;
Fig. 3 shows a block diagram of a user pose determination system of smart glasses according to an embodiment of the present disclosure.
Detailed Description
Various exemplary embodiments of the present disclosure will now be described in detail with reference to the accompanying drawings. It should be noted that the relative arrangement of the components and steps, numerical expressions and numerical values set forth in these embodiments do not limit the scope of the present invention unless it is specifically stated otherwise.
The following description of at least one exemplary embodiment is merely exemplary in nature and is in no way intended to limit the invention, its application, or uses.
Techniques, methods, and apparatus known to one of ordinary skill in the relevant art may not be discussed in detail, but are intended to be part of the specification where appropriate.
In all examples shown and discussed herein, any specific values should be construed as merely illustrative, and not a limitation. Thus, other examples of exemplary embodiments may have different values.
It should be noted that like reference numerals and letters refer to like items in the following figures, and thus once an item is defined in one figure, no further discussion thereof is necessary in subsequent figures.
The embodiment of the application provides a method for determining the user pose of smart glasses. As shown in fig. 1, the method comprises steps S11-S15.
Step S11, fixing the smart glasses to the end of the motion device through the fixing device, so that the positional relationship between the end of the motion device and the smart glasses is equivalent to the positional relationship between the user's vestibular center and the smart glasses when the user wears the smart glasses.
In one example of this embodiment, the smart glasses may be VR glasses, AR glasses, or the like. A plurality of cameras and IMUs (Inertial Measurement Units) may be provided in the smart glasses.
In one example of this embodiment, the movement means comprises a robotic arm, the movement means end being the end of the robotic arm.
In this embodiment, the motion device may include a mechanical arm, which may have a plurality of joints so as to move with multiple degrees of freedom. The end of the motion device is the end of the mechanical arm, and the head end of the motion device may be the base of the mechanical arm; the position of the head end remains unchanged while the mechanical arm moves. In another example, the motion device may further include other moving components, such as a sliding table or a rotating shaft, which can perform continuous movement along a slide rail or rotation about the shaft. In one example, the mechanical arm is a 6-degree-of-freedom mechanical arm.
In one example of the present embodiment, the smart glasses may be fixed to the end of the motion device by a fixing device such as a fixing bracket, so that the end of the motion device is located at the position of the vestibular center when the user wears the glasses; that is, the positional relationship between the end of the motion device and the smart glasses is equivalent to the positional relationship between the user's vestibular center and the smart glasses when the user wears the smart glasses.
Step S12, controlling the motion device to move so as to drive the smart glasses to move to a plurality of target positions.
In one example of this embodiment, after the smart glasses are fixed at the end of the motion device, the mechanical arm may be controlled to move so as to drive the smart glasses to a plurality of target positions. A target position includes both the position and the attitude of the smart glasses, and may be any position from which the calibration chart can be photographed.
Step S13, when the smart glasses move to a target position, acquiring position data of the end of the motion device and controlling the camera of the smart glasses to photograph the calibration chart at a fixed position, so as to obtain a target image.
In one example of this embodiment, each time the smart glasses reach a target position, the camera of the smart glasses photographs the calibration chart, yielding an image containing the calibration chart, i.e., a target image. At the same time, position data of the end of the motion device is acquired; specifically, the position data may include a position (which may be described by a set of coordinate values) and an attitude (which may be described by a set of angle values) of the end of the motion device. In some embodiments, the position data of the end of the motion device may correspond to the 6 degrees of freedom of the motion device.
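As an illustration only of the data gathered at each target position, the following minimal Python sketch records the end position, the end attitude and the captured target image together; the structure, field names and units are assumptions for illustration and are not prescribed by the disclosure.

    from dataclasses import dataclass
    from typing import List, Tuple

    @dataclass
    class TargetSample:
        """One record captured when the smart glasses reach a target position."""
        position_xyz: Tuple[float, float, float]   # position of the motion-device end (coordinate values)
        attitude_rpy: Tuple[float, float, float]   # attitude of the motion-device end (angle values)
        image_path: str                            # target image of the calibration chart taken at this position

    samples: List[TargetSample] = []               # one entry is appended per target position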
In one example of this embodiment, the calibration chart may be a checkerboard, a dot grid, or a self-coded calibration pattern of various designs.
Step S14, determining pose compensation parameters of the smart glasses based on the target image and the position data corresponding to the target image.
In one example of the embodiment, determining the pose compensation parameters of the smart glasses based on the target images and the position data corresponding to the target images comprises: determining first extrinsic matrices for the plurality of target positions based on the plurality of target images, the first extrinsic matrix being used to convert the coordinate system of the calibration chart into the coordinate system of the camera; determining second extrinsic matrices for the plurality of target positions according to the position data corresponding to the plurality of target images, the second extrinsic matrix being used to convert the coordinate system of the end of the motion device into the coordinate system of the head end of the motion device; and determining a third extrinsic matrix according to the first extrinsic matrices and the second extrinsic matrices of the plurality of target positions, the third extrinsic matrix being used to convert the coordinate system of the camera into the coordinate system of the end of the motion device.
In one example of this embodiment, the coordinate system of the calibration chart is a coordinate system whose origin is a point of the calibration chart, for example its center point. The coordinate system of the camera is a coordinate system whose origin is a point of the camera, for example the optical center of the camera. By photographing the calibration chart with the camera, the first extrinsic matrix corresponding to each target image containing the calibration chart can be computed. The first extrinsic matrix may include a rotation matrix and a translation matrix.
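To make the computation of the first extrinsic matrix concrete, the sketch below estimates a chart-to-camera transform from a single target image using OpenCV's checkerboard detector and PnP solver. It assumes a checkerboard chart and known camera intrinsics; the disclosure does not prescribe a particular pattern or library, and the function and variable names are illustrative.

    import cv2
    import numpy as np

    def chart_to_camera_extrinsic(image_bgr, camera_matrix, dist_coeffs,
                                  pattern_size=(9, 6), square_size_m=0.025):
        """Return a 4x4 matrix mapping calibration-chart coordinates to camera coordinates."""
        gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
        found, corners = cv2.findChessboardCorners(gray, pattern_size)
        if not found:
            raise RuntimeError("calibration chart not detected in the target image")
        # 3D chart points in the chart coordinate system (Z = 0 plane on the chart)
        obj_pts = np.zeros((pattern_size[0] * pattern_size[1], 3), np.float32)
        obj_pts[:, :2] = np.mgrid[0:pattern_size[0], 0:pattern_size[1]].T.reshape(-1, 2) * square_size_m
        _, rvec, tvec = cv2.solvePnP(obj_pts, corners, camera_matrix, dist_coeffs)
        rotation, _ = cv2.Rodrigues(rvec)           # rotation part of the first extrinsic matrix
        extrinsic = np.eye(4)
        extrinsic[:3, :3] = rotation
        extrinsic[:3, 3] = tvec.ravel()             # translation part of the first extrinsic matrix
        return extrinsic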
In one example of this embodiment, the coordinate system of the end of the motion device is a coordinate system whose origin is a point of the end of the motion device, i.e., the point corresponding to the user's vestibular center. The coordinate system of the head end of the motion device is a coordinate system whose origin is a point of the head end of the motion device, such as the base. Taking the mechanical arm as an example, forward kinematics can be applied to the position data of the mechanical arm recorded when the target image was captured, to obtain the extrinsic matrix from the end of the mechanical arm to the head end of the mechanical arm, i.e., the second extrinsic matrix.
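If the controller reports the end pose directly as a position plus a set of Euler angles (rather than raw joint angles requiring a full forward-kinematics model), the second extrinsic matrix can be assembled as a homogeneous transform, as in the hedged sketch below; the angle convention, units and names are assumptions.

    import numpy as np
    from scipy.spatial.transform import Rotation

    def end_to_head_extrinsic(position_xyz, attitude_rpy_deg):
        """Build the 4x4 second extrinsic matrix: end-of-motion-device frame -> head-end (base) frame."""
        extrinsic = np.eye(4)
        extrinsic[:3, :3] = Rotation.from_euler("xyz", attitude_rpy_deg, degrees=True).as_matrix()
        extrinsic[:3, 3] = position_xyz
        return extrinsic

Applying such a function to the position data recorded at each target position yields the set of second extrinsic matrices used in the following steps.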
In one example of the embodiment, determining the third extrinsic matrix according to the first extrinsic matrices of the plurality of target positions and the second extrinsic matrices of the plurality of target positions, the third extrinsic matrix being used to convert the coordinate system of the camera into the coordinate system of the end of the motion device, comprises: determining fourth extrinsic matrices for the plurality of target positions according to the first extrinsic matrices and the second extrinsic matrices of the plurality of target positions, the fourth extrinsic matrix being used to convert the coordinate system of the calibration chart into the coordinate system of the head end of the motion device; and determining the third extrinsic matrix according to the fourth extrinsic matrices of the plurality of target positions.
In one example of the embodiment, determining the third extrinsic matrix according to the fourth extrinsic matrices of the plurality of target positions comprises constructing a system of equations with the equality of the fourth extrinsic matrices corresponding to the plurality of target positions as the constraint condition, and solving the system of equations to determine the third extrinsic matrix.
In one example of this embodiment, as shown in fig. 2, since the positions of the calibration chart and the head end of the motion device are both fixed, the fourth extrinsic matrix that converts the calibration chart coordinate system into the coordinate system of the head end of the motion device is the same regardless of which target position the smart glasses are in. At any target position, the fourth extrinsic matrix can be expressed as the product of the second, third and first extrinsic matrices, for example A1*B*C1, where A1 is the second extrinsic matrix at target position 1, B is the third extrinsic matrix, and C1 is the first extrinsic matrix at target position 1.
In one example of the present embodiment, since the smart glasses are fixed to the end of the motion device by the fixing device, the relative position of the smart glasses and the end of the motion device is fixed; that is, the third extrinsic matrix B is the same for all target positions.
In one example of the present embodiment, because the fourth extrinsic matrices of different target positions are equal, a system of equations can be established with this equality as the constraint. Taking target position 1 and target position 2 as an example, one equation of the system is:
A1*B*C1=A2*B*C2
wherein A1 is the second extrinsic matrix of target position 1, B is the third extrinsic matrix, C1 is the first extrinsic matrix of target position 1, A2 is the second extrinsic matrix of target position 2, and C2 is the first extrinsic matrix of target position 2.
In one example of the present embodiment, the equations relating target position 1 to every other target position can be obtained in the same way, and the third extrinsic matrix B can then be solved from the resulting system of equations, for example by SVD decomposition.
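The constraint that the fourth extrinsic matrix is identical at every target position can be rearranged into the classical AX = XB hand-eye calibration problem, for which ready-made solvers exist. The sketch below uses OpenCV's calibrateHandEye as one such solver; it is a conceptual stand-in for the SVD-based solution of the equation system described above rather than the exact procedure of the disclosure, and the helper names are illustrative.

    import cv2
    import numpy as np

    def solve_third_extrinsic(second_extrinsics, first_extrinsics):
        """Estimate the third extrinsic matrix B (camera frame -> motion-device end frame).

        second_extrinsics: list of 4x4 end -> head-end transforms, one per target position.
        first_extrinsics:  list of 4x4 chart -> camera transforms, one per target position.
        """
        R_end2head = [A[:3, :3] for A in second_extrinsics]
        t_end2head = [A[:3, 3] for A in second_extrinsics]
        R_chart2cam = [C[:3, :3] for C in first_extrinsics]
        t_chart2cam = [C[:3, 3] for C in first_extrinsics]
        # Solves for B such that A_i * B * C_i is the same for every target position i.
        R_cam2end, t_cam2end = cv2.calibrateHandEye(
            R_end2head, t_end2head, R_chart2cam, t_chart2cam,
            method=cv2.CALIB_HAND_EYE_TSAI)
        B = np.eye(4)
        B[:3, :3] = R_cam2end
        B[:3, 3] = t_cam2end.ravel()
        return B

With these illustrative helpers, solve_third_extrinsic([A1, A2, ...], [C1, C2, ...]) would play the role of the third extrinsic matrix B in the remainder of the procedure.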
In one example of the embodiment, the smart glasses are provided with an inertial measurement unit, and determining the pose compensation parameters of the smart glasses according to the third extrinsic matrix comprises: obtaining a fifth extrinsic matrix, the fifth extrinsic matrix being used to convert the coordinate system of the inertial measurement unit into the coordinate system of the camera; and determining the pose compensation parameters according to the third extrinsic matrix and the fifth extrinsic matrix, the pose compensation parameters being the product of the third extrinsic matrix and the fifth extrinsic matrix.
In one example of this embodiment, the smart glasses may store in advance the fifth extrinsic matrix used to convert the coordinate system of the inertial measurement unit into the coordinate system of the camera. After the third extrinsic matrix is determined, the fifth extrinsic matrix can be read directly, and the extrinsic matrix that converts the coordinate system of the inertial measurement unit into the coordinate system of the end of the motion device, i.e., the pose compensation parameters, is obtained by multiplying the third extrinsic matrix by the fifth extrinsic matrix.
Step S15, determining the pose of the head of the user of the smart glasses based on the pose compensation parameters.
In one example of the embodiment, determining the pose of the head of the user of the smart glasses based on the pose compensation parameters comprises: detecting the pose of the smart glasses; and determining the pose of the head of the user of the smart glasses according to the pose of the smart glasses and the pose compensation parameters.
In one example of this embodiment, after the pose compensation parameters are determined, the smart glasses can use them to determine the pose of the head of the user. Specifically, the IMU detects the pose of the smart glasses, which is then converted into the pose of the user's head through the compensation parameters.
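As a minimal sketch of how the compensation could be applied, the fragment below first forms the pose compensation transform as the product of the third and fifth extrinsic matrices and then re-expresses an IMU pose as a head (vestibular-center) pose; it assumes every pose is available as a 4x4 homogeneous matrix, and the frame names are illustrative rather than taken from the disclosure.

    import numpy as np

    def head_pose_from_imu(T_world_imu, third_extrinsic, fifth_extrinsic):
        """Convert the IMU pose reported by the glasses into the pose of the user's head.

        T_world_imu:     4x4 pose of the IMU frame in the tracking (world) frame.
        third_extrinsic: 4x4 matrix, camera frame -> motion-device end frame.
        fifth_extrinsic: 4x4 matrix, IMU frame -> camera frame.
        """
        compensation = third_extrinsic @ fifth_extrinsic   # IMU frame -> end (vestibular-center) frame
        return T_world_imu @ np.linalg.inv(compensation)   # pose of the vestibular-center frame in the world frame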
In this example, by fixing the smart glasses at the tip of the movement apparatus, the position of the tip of the movement apparatus is made equivalent to the position of the vestibular center of the user. And controlling the motion device to move, driving the intelligent glasses to shoot the calibration graphics card at different positions to obtain a target image, and determining pose compensation parameters of the intelligent glasses based on the target image and position data corresponding to the target image. In this way, the vestibular center of the user can be simulated by the tail end of the movement device, and the compensation parameters from the intelligent glasses to the vestibular center can be calculated, so that the correct pose of the head of the user can be obtained.
In one example, after determining the pose of the smart glasses user's head based on the pose compensation parameters, the method further includes rendering a picture displayed by the smart glasses according to the pose of the user's head.
After the correct head pose of the smart glasses user is determined, the smart glasses can render the displayed picture using that pose, which alleviates the dizziness that users are prone to when smart glasses cannot correctly recognize the head pose.
Referring to fig. 3, this embodiment provides a system 100 for determining the user pose of smart glasses, which comprises: a fixing device 101, configured to fix the smart glasses to the end of the motion device such that the positional relationship between the end of the motion device and the smart glasses is equivalent to the positional relationship between the user's vestibular center and the smart glasses when the user wears the smart glasses; a motion device 102, configured to drive the smart glasses to move to a plurality of target positions; a control module 103, configured to control the motion device to move so as to drive the smart glasses to move to the plurality of target positions; an acquisition module 104, configured to acquire position data of the end of the motion device and control the camera of the smart glasses to photograph the calibration chart at a fixed position to obtain a target image when the smart glasses move to a target position; a first determining module 105, configured to determine pose compensation parameters of the smart glasses based on the target images and the position data corresponding to the target images; and a second determining module 106, configured to determine the pose of the head of the user of the smart glasses based on the pose compensation parameters.
The first determining module comprises: a first determining sub-module, configured to determine first extrinsic matrices for the plurality of target positions based on the plurality of target images, the first extrinsic matrix being used to convert the coordinate system of the calibration chart into the coordinate system of the camera; a second determining sub-module, configured to determine second extrinsic matrices for the plurality of target positions according to the position data corresponding to the plurality of target images, the second extrinsic matrix being used to convert the coordinate system of the end of the motion device into the coordinate system of the head end of the motion device; a third determining sub-module, configured to determine a third extrinsic matrix according to the first extrinsic matrices and the second extrinsic matrices of the plurality of target positions, the third extrinsic matrix being used to convert the coordinate system of the camera into the coordinate system of the end of the motion device; and a fourth determining sub-module, configured to determine the pose compensation parameters of the smart glasses according to the third extrinsic matrix.
Optionally, the third determining sub-module is specifically configured to determine fourth extrinsic matrices for the plurality of target positions according to the first extrinsic matrices and the second extrinsic matrices of the plurality of target positions, the fourth extrinsic matrix being used to convert the coordinate system of the calibration chart into the coordinate system of the head end of the motion device, and to determine the third extrinsic matrix according to the fourth extrinsic matrices of the plurality of target positions.
Optionally, determining the third extrinsic matrix according to the fourth extrinsic matrices of the plurality of target positions comprises constructing a system of equations with the equality of the fourth extrinsic matrices corresponding to the plurality of target positions as the constraint condition, and solving the system of equations to determine the third extrinsic matrix.
Optionally, the smart glasses are provided with an inertial measurement unit, and the fourth determining sub-module is specifically configured to obtain a fifth extrinsic matrix used to convert the coordinate system of the inertial measurement unit into the coordinate system of the camera, and to determine the pose compensation parameters according to the third extrinsic matrix and the fifth extrinsic matrix, the pose compensation parameters being the product of the third extrinsic matrix and the fifth extrinsic matrix.
Optionally, the second determining module is specifically configured to detect the pose of the smart glasses and to determine the pose of the head of the user of the smart glasses according to the pose of the smart glasses and the pose compensation parameters.
Optionally, the system further comprises a rendering module for rendering the picture displayed by the intelligent glasses according to the pose of the head of the user.
Optionally, the movement device comprises a mechanical arm, and the end of the movement device is the end of the mechanical arm.
The embodiment of the application provides a storage medium on which a program or instructions are stored. When executed by a processor, the program or instructions implement the steps of the method for determining the user pose of smart glasses described in any of the foregoing embodiments, and can achieve the same technical effects; to avoid repetition, the details are not repeated here.
The embodiments in this disclosure are described in a progressive manner; identical and similar parts of the embodiments may be referred to one another, and each embodiment focuses on its differences from the other embodiments. In particular, the system and storage medium embodiments are described relatively briefly, since they are substantially similar to the method embodiments; reference may be made to the description of the method embodiments for the relevant parts.
The foregoing has described certain embodiments of the present disclosure. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims can be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing are also possible or may be advantageous.
Embodiments of the present disclosure may be a system, method, and/or computer program product. The computer program product may include a computer readable storage medium having computer readable program instructions embodied thereon for causing a processor to implement aspects of embodiments of the present disclosure.
The computer readable storage medium may be a tangible device that can hold and store instructions for use by an instruction execution device. The computer readable storage medium may be, for example, but not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer-readable storage medium include a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a Static Random Access Memory (SRAM), a portable compact disc read-only memory (CD-ROM), a Digital Versatile Disc (DVD), a memory stick, a floppy disk, a mechanical encoding device, punch cards or intra-groove protrusion structures such as those having instructions stored thereon, and any suitable combination of the foregoing. Computer-readable storage media, as used herein, are not to be construed as transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through waveguides or other transmission media (e.g., optical pulses through fiber optic cables), or electrical signals transmitted through wires.
The computer readable program instructions described herein may be downloaded from a computer readable storage medium to a respective computing/processing device or to an external computer or external storage device over a network, such as the internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, fiber optic transmissions, wireless transmissions, routers, firewalls, switches, gateway computers and/or edge servers. The network interface card or network interface in each computing/processing device receives computer readable program instructions from the network and forwards the computer readable program instructions for storage in a computer readable storage medium in the respective computing/processing device.
The computer program instructions for performing the operations of embodiments of the present disclosure may be assembly instructions, Instruction Set Architecture (ISA) instructions, machine-related instructions, microcode, firmware instructions, state setting data, or source or object code written in any combination of one or more programming languages, including an object oriented programming language such as Smalltalk, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The computer readable program instructions may be executed entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, aspects of embodiments of the present disclosure are implemented by personalizing electronic circuitry, such as programmable logic circuitry, Field-Programmable Gate Arrays (FPGAs), or Programmable Logic Arrays (PLAs), with state information of the computer readable program instructions, which may execute the computer readable program instructions.
Various aspects of embodiments of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer readable program instructions may also be stored in a computer readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer readable medium having the instructions stored therein includes an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions. It is well known to those skilled in the art that implementation by hardware, implementation by software, and implementation by a combination of software and hardware are all equivalent.
The foregoing description of the embodiments of the present disclosure has been presented for purposes of illustration and description, and is not intended to be exhaustive or limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope of the various embodiments described. The terminology used herein was chosen in order to best explain the principles of the embodiments, the practical application, or the improvement of technology in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

Claims (9)

Translated from Chinese
1. A method for determining the user pose of smart glasses, comprising: fixing the smart glasses to the end of a motion device by a fixing device, so that the positional relationship between the end of the motion device and the smart glasses is equivalent to the positional relationship between the vestibular center of the user and the smart glasses when the user wears the smart glasses; controlling the motion device to move so as to drive the smart glasses to move to a plurality of target positions; when the smart glasses move to a target position, acquiring position data of the end of the motion device and controlling the camera of the smart glasses to photograph a calibration chart at a fixed position to obtain a target image; determining pose compensation parameters of the smart glasses based on the target image and the position data corresponding to the target image; and determining the pose of the head of the user of the smart glasses based on the pose compensation parameters; wherein determining the pose compensation parameters of the smart glasses based on the target image and the position data corresponding to the target image comprises: determining first extrinsic parameter matrices for the plurality of target positions based on the plurality of target images, the first extrinsic parameter matrix being used to convert the coordinate system of the calibration chart into the coordinate system of the camera; determining second extrinsic parameter matrices for the plurality of target positions based on the position data corresponding to the plurality of target images, the second extrinsic parameter matrix being used to convert the coordinate system of the end of the motion device into the coordinate system of the head end of the motion device; determining a third extrinsic parameter matrix according to the first extrinsic parameter matrices and the second extrinsic parameter matrices of the plurality of target positions, the third extrinsic parameter matrix being used to convert the coordinate system of the camera into the coordinate system of the end of the motion device; and determining the pose compensation parameters of the smart glasses according to the third extrinsic parameter matrix.
2. The method according to claim 1, wherein determining the third extrinsic parameter matrix according to the first extrinsic parameter matrices and the second extrinsic parameter matrices of the plurality of target positions comprises: determining fourth extrinsic parameter matrices for the plurality of target positions according to the first extrinsic parameter matrices and the second extrinsic parameter matrices of the plurality of target positions, the fourth extrinsic parameter matrix being used to convert the coordinate system of the calibration chart into the coordinate system of the head end of the motion device; and determining the third extrinsic parameter matrix according to the fourth extrinsic parameter matrices of the plurality of target positions.
3. The method according to claim 2, wherein determining the third extrinsic parameter matrix according to the fourth extrinsic parameter matrices of the plurality of target positions comprises: constructing a system of equations with the constraint that the fourth extrinsic parameter matrices corresponding to the plurality of target positions are equal; and solving the system of equations to determine the third extrinsic parameter matrix.
4. The method according to claim 1, wherein the smart glasses are provided with an inertial measurement unit, and determining the pose compensation parameters of the smart glasses according to the third extrinsic parameter matrix comprises: obtaining a fifth extrinsic parameter matrix, the fifth extrinsic parameter matrix being used to convert the coordinate system of the inertial measurement unit into the coordinate system of the camera; and determining the pose compensation parameters according to the third extrinsic parameter matrix and the fifth extrinsic parameter matrix, the pose compensation parameters being the product of the third extrinsic parameter matrix and the fifth extrinsic parameter matrix.
5. The method according to claim 1, wherein determining the pose of the head of the user of the smart glasses based on the pose compensation parameters comprises: detecting the pose of the smart glasses; and determining the pose of the head of the user of the smart glasses according to the pose of the smart glasses and the pose compensation parameters.
6. The method according to any one of claims 1 to 5, wherein after determining the pose of the head of the user of the smart glasses based on the pose compensation parameters, the method further comprises: rendering the picture displayed by the smart glasses according to the pose of the user's head.
7. A system for determining the user pose of smart glasses, comprising: a fixing device, configured to fix the smart glasses to the end of a motion device, so that the positional relationship between the end of the motion device and the smart glasses is equivalent to the positional relationship between the vestibular center of the user and the smart glasses when the user wears the smart glasses; the motion device, configured to drive the smart glasses to move to a plurality of target positions; a control module, configured to control the motion device to move so as to drive the smart glasses to move to the plurality of target positions; an acquisition module, configured to acquire position data of the end of the motion device and control the camera of the smart glasses to photograph a calibration chart at a fixed position to obtain a target image when the smart glasses move to a target position; a first determining module, configured to determine pose compensation parameters of the smart glasses based on the target image and the position data corresponding to the target image; and a second determining module, configured to determine the pose of the head of the user of the smart glasses based on the pose compensation parameters; wherein the first determining module comprises: a first determining sub-module, configured to determine first extrinsic parameter matrices for the plurality of target positions based on the plurality of target images, the first extrinsic parameter matrix being used to convert the coordinate system of the calibration chart into the coordinate system of the camera; a second determining sub-module, configured to determine second extrinsic parameter matrices for the plurality of target positions based on the position data corresponding to the plurality of target images, the second extrinsic parameter matrix being used to convert the coordinate system of the end of the motion device into the coordinate system of the head end of the motion device; a third determining sub-module, configured to determine a third extrinsic parameter matrix according to the first extrinsic parameter matrices and the second extrinsic parameter matrices of the plurality of target positions, the third extrinsic parameter matrix being used to convert the coordinate system of the camera into the coordinate system of the end of the motion device; and a fourth determining sub-module, configured to determine the pose compensation parameters of the smart glasses according to the third extrinsic parameter matrix.
8. The system according to claim 7, wherein the motion device comprises a mechanical arm, and the end of the motion device is the end of the mechanical arm.
9. A storage medium having computer instructions stored thereon, wherein when the computer instructions are executed by a processor, the steps of the method according to any one of claims 1 to 6 are implemented.
CN202310843226.7A, filed 2023-07-10: Method, system and storage medium for determining user posture of smart glasses. Granted as CN116977419B (en). Status: Active.

Priority Applications (1)

Application Number: CN202310843226.7A
Priority Date: 2023-07-10
Filing Date: 2023-07-10
Title: Method, system and storage medium for determining user posture of smart glasses (CN116977419B)

Applications Claiming Priority (1)

Application Number: CN202310843226.7A
Priority Date: 2023-07-10
Filing Date: 2023-07-10
Title: Method, system and storage medium for determining user posture of smart glasses (CN116977419B)

Publications (2)

Publication Number | Publication Date
CN116977419A (en) | 2023-10-31
CN116977419B (en), granted | 2025-09-26

Family

ID=88476057

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN202310843226.7A | Method, system and storage medium for determining user posture of smart glasses (CN116977419B, Active) | 2023-07-10 | 2023-07-10

Country Status (1)

Country | Link
CN (1) | CN116977419B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN114777773A (en)* | 2022-04-20 | 2022-07-22 | 歌尔科技有限公司 | Camera position and posture compensation method and device, electronic equipment and readable storage medium
CN116372918A (en)* | 2023-03-21 | 2023-07-04 | 深圳市越疆科技股份有限公司 | Control method, device, equipment, robot and storage medium for robot

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN110559083B (en)* | 2019-09-10 | 2020-08-25 | 深圳市精锋医疗科技有限公司 | Surgical robot and control method and control device for tail end instrument of surgical robot
CN115552356A (en)* | 2020-06-05 | 2022-12-30 | Oppo广东移动通信有限公司 | Tracking method of head-mounted display device and head-mounted display system
CN114078158B (en)* | 2020-08-14 | 2024-11-22 | 边辕视觉科技(上海)有限公司 | A method for automatically acquiring characteristic point parameters of a target object


Also Published As

Publication number | Publication date
CN116977419A (en) | 2023-10-31

Similar Documents

Publication | Publication Date | Title
US11625841B2 (en)Localization and tracking method and platform, head-mounted display system, and computer-readable storage medium
US11222409B2 (en)Image/video deblurring using convolutional neural networks with applications to SFM/SLAM with blurred images/videos
US11024082B2 (en)Pass-through display of captured imagery
US10419747B2 (en)System and methods for performing electronic display stabilization via retained lightfield rendering
CN110502097B (en)Motion control portal in virtual reality
US10567649B2 (en)Parallax viewer system for 3D content
WO2017169081A1 (en)Information processing device, information processing method, and program
CN106782260B (en) Display method and device for virtual reality motion scene
US11948257B2 (en)Systems and methods for augmented reality video generation
US11182953B2 (en)Mobile device integration with a virtual reality environment
CN110771143B (en) Control method of handheld PTZ, handheld PTZ, and handheld device
CN110895433B (en)Method and apparatus for user interaction in augmented reality
CN113141502B (en)Camera shooting control method and device of head-mounted display equipment and head-mounted display equipment
CN110769245A (en)Calibration method and related equipment
CN115342806A (en)Positioning method and device of head-mounted display equipment, head-mounted display equipment and medium
CN112580582A (en)Action learning method, action learning device, action learning medium and electronic equipment
CN109271025B (en)Virtual reality freedom degree mode switching method, device, equipment and system
CN109814710B (en)Data processing method and device and virtual reality equipment
CN116977419B (en) Method, system and storage medium for determining user posture of smart glasses
CN117716701A (en)Video image stabilization
CN111866493B (en)Image correction method, device and equipment based on head-mounted display equipment
CN108298101B (en)Cloud deck rotation control method and device and unmanned aerial vehicle
US11335304B2 (en)Driving circuit for head-worn display device, and virtual reality display device
US12219118B1 (en)Method and device for generating a 3D reconstruction of a scene with a hybrid camera rig
CN108415566B (en) Method and device for determining attitude information of external equipment in virtual reality scene

Legal Events

Code | Title
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant
