Teleoperation control system of humanoid robot

Info

Publication number
CN119871445B
CN119871445B
Authority
CN
China
Prior art keywords
information
humanoid robot
finger
teleoperation
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202510304449.5A
Other languages
Chinese (zh)
Other versions
CN119871445A (en)
Inventor
董佳煜
李庆展
刘宇飞
李少东
王冰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Humanoid Robot Shanghai Co ltd
Original Assignee
Humanoid Robot Shanghai Co ltd
Filing date
Publication date
Application filed by Humanoid Robot Shanghai Co ltd
Priority to CN202510304449.5A
Publication of CN119871445A
Application granted
Publication of CN119871445B
Active (current legal status)
Anticipated expiration


Abstract

The application provides a teleoperation control system for a humanoid robot, relating to the technical field of humanoid robots. The system comprises an end teleoperation control module, an actuator mapping module, and a data processing module. The end teleoperation control module collects first information and second information, where the first information comprises gesture information and/or fingertip posture information of a user, and the second information comprises spatial position information and/or posture information of the end teleoperation control module itself. The actuator mapping module generates, from the first information, a first operation instruction for the corresponding end effector of the humanoid robot and sends it to the robot. The data processing module generates, from the second information, a second operation instruction for the robot's two arms and sends it to the robot. The application can effectively improve the control precision and flexibility of teleoperation of the humanoid robot.

Description

Teleoperation control system of humanoid robot
Technical Field
The application relates to the technical field of humanoid robots, in particular to a teleoperation control system of a humanoid robot.
Background
Teleoperation refers to a technique that enables an operator to remotely control a humanoid robot to perform tasks through a remote control system.
Conventional teleoperation typically converts the operator's actions into control signals for the humanoid robot via a controller (e.g., joystick, buttons). This approach is generally intuitive, but offers limited control accuracy and flexibility, and may introduce uncertainty and safety hazards into the operator's work when facing more complex or dangerous tasks.
In summary, how to improve the control accuracy and flexibility of teleoperation of a humanoid robot is a technical problem that needs to be solved at present.
Disclosure of Invention
The embodiment of the application provides a teleoperation control system for a humanoid robot, which can effectively improve the control precision and flexibility of teleoperation of the humanoid robot.
In some embodiments, the teleoperation control system of the humanoid robot comprises an end teleoperation control module, an actuator mapping module, and a data processing module;
the end teleoperation control module is configured to collect first information and second information, where the first information comprises gesture information and/or fingertip posture information of a user, and the second information comprises spatial position information and/or posture information of the end teleoperation control module;
the actuator mapping module is configured to generate, according to the first information, a first operation instruction for the corresponding end effector of the humanoid robot and send the first operation instruction to the humanoid robot;
the data processing module is configured to generate, according to the second information, a second operation instruction for the corresponding two arms of the humanoid robot and send the second operation instruction to the humanoid robot.
In some embodiments, the actuator mapping module is specifically configured to:
converting the first information into gesture information of a standard hand;
according to the gesture information, generating angle data of each joint corresponding to the standard hand;
converting the angle data into expected joint angle data of the end effector;
and generating a first operation instruction according to the expected joint angle data.
In some embodiments, the end effector comprises a five-finger dexterous hand assembly, and the actuator mapping module is specifically configured to:
determine desired joint angle data for each corresponding joint of the five-finger dexterous hand assembly according to the angle data of each corresponding joint of the standard hand.
In some embodiments, the end effector comprises a two-finger clamping jaw assembly, and the actuator mapping module is specifically configured to:
determine desired joint angle data for each corresponding joint of the two-finger clamping jaw assembly according to the angle data of the joints corresponding to the thumb and index finger of the standard hand.
In some embodiments, the data processing module is specifically configured to:
determining a desired position and/or attitude of the end effector based on the second information;
a second operating instruction is generated based on the desired position and/or attitude of the end effector.
In some embodiments, the data processing module is specifically configured to:
receive actual position and/or posture data of the end effector fed back by the humanoid robot;
determine desired joint angle data of the two arms according to the actual position and/or posture data of the end effector and the desired position and/or posture of the end effector;
and generate a second operation instruction according to the desired joint angle data of the two arms.
In some implementations, the end teleoperation control module includes a Virtual Reality (VR) device, which may include a left finger-tiger VR device and/or a right finger-tiger VR device.
In some embodiments, the system further comprises a plurality of VR positioning trackers, which may be worn on the waist and/or feet of the user;
the VR positioning tracker is configured to collect third information, where the third information includes spatial location information and/or pose information of the VR positioning tracker.
In some implementations, the data processing module is further to:
According to the third information acquired by each VR positioning tracker, determining the moving speed and/or angular speed of the humanoid robot;
and generating a third operation instruction according to the moving speed and/or the angular speed, and sending the third operation instruction to the humanoid robot.
In some embodiments, the system further comprises a head mounted display device and a positioning base station;
The head-mounted display device is used for receiving and displaying image data acquired by the humanoid robot;
The positioning base station is used to provide spatial position information and/or posture information for the end teleoperation control module, the head-mounted display device, and the VR positioning tracker.
In the teleoperation control system of the humanoid robot provided by the embodiments of the application, gesture information and/or fingertip posture information of a user is collected by the end teleoperation control module, enabling fine control of the humanoid robot's end effector (including its fingers). This control mode is more intuitive and accurate than a traditional joystick and reflects the operation intent of the user's hands more naturally. Further, by collecting the spatial position information and/or posture information of the end teleoperation control module, the robot's two arms can also be accurately controlled, so that the humanoid robot reproduces the user's motions and postures more faithfully. Together, these measures effectively improve the control precision and flexibility of teleoperation of the humanoid robot.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the application and together with the description, serve to explain the principles of the application.
Fig. 1 is a schematic structural diagram of a humanoid robot according to an embodiment of the present application;
fig. 2 is a schematic diagram of a teleoperation control system of a humanoid robot according to an embodiment of the present application;
FIG. 3 is a schematic diagram of a teleoperation control system of another humanoid robot according to an embodiment of the present application;
Fig. 4 is a schematic diagram of a teleoperation control flow of a humanoid robot according to an embodiment of the present application.
Specific embodiments of the present application have been shown by way of the above drawings and will be described in more detail below. The drawings and the written description are not intended to limit the scope of the inventive concepts in any way, but rather to illustrate the inventive concepts to those skilled in the art by reference to the specific embodiments.
Detailed Description
Reference will now be made in detail to the exemplary embodiments of the present application, examples of which are illustrated in the accompanying drawings, wherein like reference numerals refer to the same or similar elements throughout the different drawings, unless otherwise specified.
It should be noted that, the teleoperation system for the humanoid robot provided in the embodiment of the present application may be used in the humanoid robot field, and may also be used in other robot fields besides the humanoid robot field, which is not limited in this aspect of the present application.
A humanoid robot is a robot that mimics the shape and behavior of a human; it generally has two mechanical arms, lower limbs, and other structures similar to a human's, and is capable of performing various human-like actions.
Optionally, the key components of the humanoid robot include:
Dual mechanical arms: the arms of a humanoid robot generally have multiple degrees of freedom and can simulate various movements of the human arm, such as rotation, bending, and stretching. These enable the robot to perform fine manipulation tasks, such as gripping objects and manipulating tools.
Lower limbs: the lower limbs of a humanoid robot also have multiple degrees of freedom and can simulate human actions such as walking, running, and jumping.
Head system: although the head of a humanoid robot need not always mimic the structure of a human head, it is usually equipped with sensors such as cameras and microphones for acquiring environmental information. In addition, some humanoid robots are equipped with artificial intelligence technologies such as face recognition and voice recognition to improve their interactive capability and level of intelligence.
Referring to fig. 1, fig. 1 is a schematic structural diagram of a humanoid robot according to an embodiment of the present application. In some embodiments, the humanoid robot includes:
The head system 1 comprises a head structure of a humanoid robot, and can be integrated with a sensor, a camera, a microphone and the like for environmental perception and interaction.
The chest system 2 comprises internal electronics of the humanoid robot and a power source, such as a motor controller, a power distribution unit and the like.
The robot arm system 3 comprises a double robot arm of a humanoid robot and is used for executing various operation tasks such as grabbing, carrying and the like.
The dexterous hand system 4, mounted at the end of the mechanical arm, is typically designed with multiple joints and degrees of freedom allowing it to move the individual finger and palm sections in a highly flexible manner. This design allows the dexterous hand to simulate various complex movements of the human hand, such as grasping, rotating, pinching, etc.
The thigh 5, the part connecting the hip joint and the knee joint, is the main load-bearing structure of the leg of the humanoid robot.
The shank 6, the part connecting the knee joint and the ankle joint, is an important component of leg movement.
The foot 7, the walking and standing part of the humanoid robot, is usually designed with anti-skid and shock-absorbing functions.
Those skilled in the art will appreciate that the humanoid robot structure shown in fig. 1 is not limiting of the humanoid robot, and that the humanoid robot of the present application may include more or fewer components than shown in fig. 1, or may combine certain components, or may have a different arrangement of components.
The application scenarios of the humanoid robot may include, but are not limited to, the following aspects:
Industrial manufacturing: on automated production lines, humanoid robots can replace humans in performing burdensome, dangerous, or repetitive tasks such as handling heavy objects and assembling parts.
Service industry: in restaurants, hotels, and similar venues, a humanoid robot can serve as an attendant or receptionist, providing services such as ordering, meal delivery, and guidance.
Medical rehabilitation: in the medical field, a humanoid robot can serve as auxiliary equipment to help patients perform rehabilitation training or provide daily care.
Rescue and exploration: at disaster sites such as earthquakes and fires, a humanoid robot can enter dangerous areas in place of humans for search and rescue or detection.
Entertainment and education: in entertainment venues and educational institutions, humanoid robots can act as performers or teaching aids, providing an engaging interactive experience for audiences and students.
Teleoperation refers to a technique that enables an operator to control a humanoid robot to perform tasks at a location remote from the humanoid robot by means of a remote control system. Along with the continuous expansion of the application field of humanoid robots, teleoperation technology has been widely applied in the fields of industry, medical treatment, exploration and the like.
The core goal of teleoperation technology is to improve work efficiency, operation accuracy and safety by reducing human contact with dangerous or complex environments. Existing teleoperation modes can be broadly classified into conventional teleoperation based on a physical controller, teleoperation based on Virtual Reality (VR) technology, teleoperation based on Brain-computer interface (Brain-Computer Interface, BCI) technology, and the like.
Illustratively, teleoperation based on physical controllers typically converts the operator's actions into humanoid robot control signals via controllers (e.g., physical devices such as joysticks, buttons, motion capture gloves, etc.). This approach is generally intuitive, but has poor control accuracy and flexibility, and may present uncertainty and safety hazards to the operator's operation in the face of more complex or dangerous tasks.
The introduction of VR technology brings higher immersion and perception capabilities for teleoperation of humanoid robots. Through the virtual reality system, operators can not only see the current working environment and state of the humanoid robot, but also feed back the actions of the humanoid robot in real time.
The existing VR teleoperation system generally performs incremental feedback by feeding back the position of the VR handle in the VR head display, and transmits the position feedback information to the humanoid robot body, so that the end effector of the humanoid robot can track the position of the VR handle in real time. Although this incremental control manner can control the operation of the humanoid robot to a certain extent, the following technical problems still remain:
Body posture matching: the incremental control mode places high demands on the operator's body posture, and the pose of the humanoid robot's end effector is not always fully consistent with it. In some cases, the operator's posture leads to inaccurate control or misalignment, affecting the accuracy and stability of the operation.
Stroke control: the VR handles used by current VR teleoperation systems mainly control the open and closed states of the actuator through buttons and cannot achieve precise stroke control. For example, current VR handles support only open/close commands and lack control over the precise movement of the robotic arm or gripping device, limiting operational accuracy.
Jaw control: for jaws with three or more fingers, existing VR teleoperation systems can only switch the jaws between open and closed states through specific trajectories and cannot control each mechanical finger independently. This control mode cannot handle complex grasping tasks and reduces operational flexibility.
In view of the above technical problems, an embodiment of the application provides a teleoperation system for a humanoid robot. By collecting the user's gesture information and/or fingertip posture information through an end teleoperation control module, fine control of the humanoid robot's end effector (including its fingers) can be achieved; by additionally collecting the spatial position information and/or posture information of the end teleoperation control module, the robot's two arms can be controlled as well, so that the humanoid robot reproduces the user's motions and postures more accurately, effectively improving the control precision and flexibility of teleoperation.
The following describes the technical scheme of the present application and how the technical scheme of the present application solves the above technical problems in detail with specific embodiments. It should be noted that the following embodiments may be combined with each other, and the same or similar concepts or processes may not be described in detail in some embodiments.
Referring to fig. 2, fig. 2 is a schematic diagram of a teleoperation control system of a humanoid robot according to an embodiment of the present application. In some embodiments, the humanoid robot teleoperation control system includes an end teleoperation control module 201, an actuator mapping module 202, and a data processing module 203.
In some embodiments, the end teleoperation control module 201 may be worn on both hands of a user (also referred to below as an operator).
In some embodiments, end teleoperation control module 201 may be used to collect first information and second information.
Optionally, the first information includes gesture information and/or fingertip posture information of the user, and the second information includes spatial position information and/or posture information of the end teleoperation control module 201.
The first information may be used to identify the finger operation intention of the user, that is, the gesture and the finger tip gesture of the user may reflect the type of operation that the finger of the user is about to perform.
The second information can be used for identifying the position and the gesture of the two hands of the user, such as identifying the exact position and the direction of the two hands of the user in the three-dimensional space, so that the accuracy and the safety of the operation of the humanoid robot can be ensured.
In some implementations, the end teleoperation control module 201 may include a finger-tiger VR device (also referred to as a finger-tiger VR handle), which may include a left finger-tiger VR device and/or a right finger-tiger VR device. The left finger-tiger VR device is worn on the user's left hand, and the right finger-tiger VR device on the right hand.
Optionally, the finger-tiger VR device may be configured as a glove, fitting the hand more closely than a conventional VR handle and thereby providing a more stable and comfortable grip.
Optionally, the finger-tiger VR device may be provided with a plurality of sensors for tracking the position and motion of each finger, so that the various motions made by each of the user's fingers, such as grasping and releasing, can be recognized more accurately.
In some embodiments, the finger tiger VR device may feed back the collected first information to the data processing module 203 in real time, where the data processing module 203 sends the first information to the actuator mapping module 202, or may feed back the collected first information to the actuator mapping module 202 directly.
In some embodiments, the finger-tiger VR device may feed the collected second information back to the data processing module 203 in real time.
Optionally, the teleoperation control system of the humanoid robot further comprises a communication module. The end teleoperation control module 201 may use the communication module to feed back the first information and the second information to the data processing module 203, or to feed back the first information directly to the actuator mapping module 202.
Alternatively, the communication module may be a wired communication module, or a wireless communication module (such as a Wi-Fi module, a bluetooth module, etc.), or a dedicated communication protocol module, which is not limited in the embodiment of the present application.
The actuator mapping module 202 may be configured to generate, according to the first information, a first operation instruction of an end effector corresponding to the humanoid robot, and send the first operation instruction to the humanoid robot.
In some embodiments, the actuator mapping module 202 may parse the first information to identify a specific operation type that the user wants to perform, such as grabbing, releasing, and the like. Based on the analysis result, the actuator mapping module 202 generates a first operation instruction corresponding to an end effector of the humanoid robot. Finally, the actuator mapping module 202 sends the generated first operation instruction to the humanoid robot through the communication module, so as to control the end effector of the humanoid robot to complete corresponding actions.
The data processing module 203 is configured to generate a second operation instruction of the two arms corresponding to the humanoid robot according to the second information, and send the second operation instruction to the humanoid robot.
In some embodiments, the data processing module 203 may analyze the second information to identify the expected position and orientation of the arm of the humanoid robot in the three-dimensional space. Further, based on the analysis result, the data processing module 203 may determine an expected joint angle of the two arms of the humanoid robot, and generate the second operation instruction of the two arms of the humanoid robot according to the expected joint angle. And finally, sending the generated second operation instruction to the humanoid robot so as to control the double arms of the humanoid robot to execute corresponding actions.
In the teleoperation control system of the humanoid robot provided by the embodiments of the application, gesture information and/or fingertip posture information of a user is collected by the end teleoperation control module, enabling fine control of the humanoid robot's end effector (including its fingers). This control mode is more intuitive and accurate than a traditional joystick and reflects the operation intent of the user's hands more naturally. Further, by collecting the spatial position information and/or posture information of the end teleoperation control module, the robot's two arms can also be controlled, so that the humanoid robot reproduces the user's motions and postures more accurately, effectively improving the control precision and flexibility of teleoperation.
Based on what is described in the embodiments above, in some embodiments, the actuator mapping module 202 is specifically configured to:
the first information is converted into gesture information of a standard hand, angle data of each joint corresponding to the standard hand is generated according to the gesture information, the angle data is converted into expected joint angle data of an end effector, and the first operation instruction is generated according to the expected joint angle data.
In some embodiments, the actuator mapping module 202 may receive the first information, and convert the first information into a standard hand gesture, that is, uniformly map the hand shape of the operator to the same standard hand shape.
Wherein, in order to uniformly map the hand shapes of different operators to the same standard hand shape, the actuator mapping module 202 may employ a mapping technique that may ensure that the hand actuators of the humanoid robot can respond in a consistent and accurate manner regardless of the actual hand shape of the operators.
The form of the standard hand can be predefined, and represents the expected posture of the human-shaped robot hand actuator. Through mapping, the actuator mapping module 202 can convert the user's gesture and finger tip information into control instructions corresponding to a standard hand morphology.
In some implementations, to ensure accuracy, the actuator mapping module 202 may compute each finger's joint angles as a weighted average of a fingertip inverse-kinematics solution and the raw gesture data. The fingertip inverse solution derives the angles of all of a finger's joints from the position of its fingertip, while the weighted average smooths the gesture data and reduces the influence of noise and errors.
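As an illustration of this fusion step, the sketch below blends a fingertip inverse-kinematics estimate with directly sensed gesture angles; the function names, the fixed weight, and the per-joint angle arrays are assumptions for illustration, not the patented implementation.

```python
import numpy as np

def fuse_joint_angles(ik_angles: np.ndarray,
                      gesture_angles: np.ndarray,
                      ik_weight: float = 0.6) -> np.ndarray:
    """Blend fingertip inverse-kinematics angles with raw gesture angles.

    ik_angles:      joint angles inferred from fingertip positions (rad)
    gesture_angles: joint angles read directly from the glove sensors (rad)
    ik_weight:      confidence placed in the inverse solution (assumed value)
    """
    assert ik_angles.shape == gesture_angles.shape
    # The weighted average smooths sensor noise and limits the effect of
    # occasional inverse-solution errors, as described above.
    return ik_weight * ik_angles + (1.0 - ik_weight) * gesture_angles

# Example: five joints of one finger chain
ik = np.array([0.10, 0.42, 0.30, 0.15, 0.05])
raw = np.array([0.12, 0.40, 0.28, 0.20, 0.04])
print(fuse_joint_angles(ik, raw))
```

In practice the weight could also vary per joint or over time, e.g. trusting the inverse solution more when the fingertip is tracked with high confidence.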
In some embodiments, the end effector may comprise a five-finger dexterous hand assembly, and the actuator mapping module 202 is specifically configured to:
determine desired joint angle data for each corresponding joint of the five-finger dexterous hand assembly according to the angle data of each corresponding joint of the standard hand.
Each finger of the five-finger dexterous hand can independently move to simulate various actions of human fingers, such as grasping, pinching, rotating and the like.
In some implementations, the actuator mapping module 202 can map joint angle data of a standard hand to each joint of the five-finger dexterous hand assembly.
In some embodiments, the above mapping process may be customized to the specific model of five-finger dexterous hand, ensuring that each finger joint angle of the standard hand is accurately mapped to the corresponding joint of the dexterous hand. Through remapping, the degrees of freedom of the standard hand (thumb, index, middle, ring, and little fingers) are placed in one-to-one correspondence with the joint motions of the five-finger dexterous hand, enabling fine teleoperation.
In some embodiments, in the control of the five-finger dexterous hand, each joint angle data of the standard hand may be processed separately and transmitted to the corresponding part of the manipulator.
For example, for a thumb joint, it typically has multiple degrees of freedom, including metacarpal joint flexion and proximal and distal joint flexion. From the standard hand data, the desired bending angles of these joints can be determined.
For other finger joints, the index finger, middle finger, ring finger, and little finger typically have degrees of freedom such as metacarpal joint curvature, proximal joint curvature, and distal joint curvature. Likewise, the desired bending angles of these joints can be determined from the data of the standard hand.
In some embodiments, the actuator mapping module 202 may dynamically adjust the mapping parameters according to the task requirement and the environmental condition, so as to ensure that the five-finger dexterous hand can accurately restore the motion of the standard hand when performing the task, and achieve high flexibility and high precision operation.
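A minimal sketch of such a remapping is shown below; the joint names, scale factors, and joint limits are illustrative assumptions, not the specification of any particular dexterous hand.

```python
# Per-joint mapping: standard-hand joint -> (hand joint, scale, (min, max) rad).
# All names and numbers here are assumed for illustration.
STANDARD_TO_HAND = {
    "thumb_cmc": ("thumb_j1", 1.0, (0.0, 1.3)),
    "thumb_mcp": ("thumb_j2", 0.9, (0.0, 1.1)),
    "index_mcp": ("index_j1", 1.0, (0.0, 1.6)),
    "index_pip": ("index_j2", 1.0, (0.0, 1.7)),
}

def map_standard_to_dexterous(standard_angles: dict) -> dict:
    """Remap standard-hand joint angles (rad) onto dexterous-hand joints,
    applying a per-joint scale and clamping to the hardware joint limits."""
    targets = {}
    for src, (dst, scale, (lo, hi)) in STANDARD_TO_HAND.items():
        angle = standard_angles.get(src, 0.0) * scale
        targets[dst] = min(max(angle, lo), hi)  # respect joint limits
    return targets

print(map_standard_to_dexterous({"thumb_cmc": 0.8, "index_mcp": 2.0}))
```

Dynamic adjustment of the mapping parameters, as mentioned above, would amount to updating the scale factors or limits in this table at run time.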
In some embodiments, the end effector may include a two-finger clamping jaw assembly, and the actuator mapping module 202 is specifically configured to:
determine desired joint angle data for each corresponding joint of the two-finger clamping jaw assembly according to the angle data of the joints corresponding to the thumb and index finger of the standard hand.
In some embodiments, for control of the two-jaw assembly, the mapping of a standard hand may use only the joint angle data of the thumb and index finger as a source of expected values, which may be mapped into the opening and closing motion of the two-jaw.
It will be appreciated that since the two finger jaws have only two degrees of freedom to open and close, partial joint angle data (e.g., the angles of thumb and index finger) of a standard hand can be mapped to the desired joint angle of the two finger jaws.
In some embodiments, the two-finger clamping jaw can adjust the opening and closing degree according to the angle data of the joints corresponding to the thumb and the index finger of the standard hand, so that the tasks of grabbing, placing and the like can be accurately performed.
The embodiment not only simplifies the complexity of data processing, but also ensures that the two-finger clamping jaw can efficiently and stably execute teleoperation tasks.
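As a hedged illustration, the sketch below reduces the thumb and index flexion of the standard hand to a single jaw-opening command; the normalization of flexion to [0, 1] and the 0.08 m stroke are assumptions, not parameters of any real gripper.

```python
def jaw_opening_from_fingers(thumb_flex: float, index_flex: float,
                             max_opening_m: float = 0.08) -> float:
    """Map normalized thumb/index flexion (0 = straight, 1 = fully curled)
    to a desired two-finger jaw opening in metres."""
    curl = 0.5 * (thumb_flex + index_flex)   # average the two fingers
    curl = min(max(curl, 0.0), 1.0)          # clamp to the valid range
    return max_opening_m * (1.0 - curl)      # more curl -> smaller opening

print(jaw_opening_from_fingers(0.2, 0.3))    # mostly open
print(jaw_opening_from_fingers(0.9, 1.0))    # nearly closed
```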
In the embodiment of the application, the gesture information and/or the finger tail end gesture information of the user are acquired through the tail end teleoperation control module, so that the fine control of the human-shaped robot tail end executor (comprising fingers) can be realized, the control mode is more visual and accurate than the traditional control lever, and the operation intention of the hands of the user can be reflected more naturally.
In some embodiments, the data processing module 203 is specifically configured to:
generate the second operation instruction according to the desired position and/or posture for the two arms of the humanoid robot.
In some embodiments, the data processing module 203 may be specifically configured to:
receive the actual position and/or posture data of the end effector fed back by the humanoid robot; determine the desired joint angle data of the two arms according to that actual data and the desired position and/or posture of the end effector; and generate the second operation instruction according to the desired joint angle data of the two arms.
In some embodiments, the deviation may be calculated by comparing the actual position and/or posture data of the end effector with the desired position and/or posture. Based on this deviation and the robot's mechanical structure, an inverse kinematics calculation is performed to determine the desired joint angle data of the two arms. In the humanoid robot field, inverse kinematics derives the joint angles in joint space from the position and posture of the end effector.
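The patent does not fix a particular inverse-kinematics algorithm; as one common choice, the sketch below shows a damped least-squares update that drives the arm joints toward the desired end-effector pose, with the Jacobian, gains, and error convention all assumed for illustration.

```python
import numpy as np

def arm_ik_step(q: np.ndarray, pose_error: np.ndarray,
                jacobian: np.ndarray, damping: float = 0.05,
                gain: float = 0.5) -> np.ndarray:
    """One damped least-squares step toward the desired end-effector pose.

    q:          current arm joint angles, shape (n,)
    pose_error: 6-vector [position error; orientation error] in the base frame
    jacobian:   6 x n geometric Jacobian evaluated at q
    """
    J, e = jacobian, pose_error
    # dq = (J^T J + lambda^2 I)^-1 J^T e -- stable near singularities.
    dq = np.linalg.solve(J.T @ J + damping**2 * np.eye(q.size), J.T @ e)
    return q + gain * dq
```

Iterating this step until the pose deviation falls below a threshold yields the desired joint angle data for the two arms.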
In the embodiments of the application, by collecting the spatial position information and/or posture information of the end teleoperation control module, the two arms of the humanoid robot can be further controlled, so that the robot reproduces the user's motions and postures more accurately, effectively improving the control precision and flexibility of teleoperation.
Referring to fig. 3, fig. 3 is a schematic diagram of a teleoperation control system of another humanoid robot according to an embodiment of the present application.
In some embodiments, the system further comprises a plurality of VR positioning trackers 301, which can be worn on the user's waist and/or feet.
In some implementations, the VR positioning tracker 301 is configured to collect third information, which includes the spatial position information and/or posture information of the tracker itself.
In some embodiments, the VR positioning tracker 301 may communicate with the positioning base station through the above communication module to obtain the relative position with the positioning base station in real time.
In some implementations, the VR position tracker 301 may transmit its own spatial position information and/or pose information to the data processing module 203 in real time.
In some embodiments, the data processing module 203 is further configured to:
determine the moving speed and/or angular speed of the humanoid robot according to the third information collected by each VR positioning tracker 301, generate a third operation instruction according to the moving speed and/or angular speed, and send the third operation instruction to the humanoid robot.
Optionally, the third operation instruction may include a movement instruction that the humanoid robot needs to execute, such as forward, backward, left turn, right turn, and the like, and corresponding speed and acceleration parameters. By sending the third operating instruction to the humanoid robot, accurate control of its movement can be achieved.
In the embodiments of the application, introducing the VR positioning trackers 301 allows the motions and postures of the user's lower limbs to be tracked in real time, so that the user's intent can be understood more accurately and converted into precise motion of the humanoid robot. This control mode enables the robot to operate flexibly in complex environments, for example avoiding obstacles or traversing narrow spaces.
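A minimal sketch of how waist-tracker motion might be turned into a third operation instruction is given below; the planar model, finite-difference velocities, deadband, safety clamps, and command format are assumptions for illustration.

```python
import numpy as np

def locomotion_command(waist_prev, waist_now, yaw_prev, yaw_now, dt,
                       v_max=1.0, w_max=1.0, deadband=0.02):
    """Turn waist-tracker motion between frames into a locomotion command.

    Positions are 3-vectors in the base-station frame, yaw in radians.
    """
    v = (np.asarray(waist_now)[:2] - np.asarray(waist_prev)[:2]) / dt
    w = (yaw_now - yaw_prev) / dt
    v[np.abs(v) < deadband] = 0.0        # reject sensor jitter
    v = np.clip(v, -v_max, v_max)        # stay within safe speed limits
    w = float(np.clip(w, -w_max, w_max))
    return {"linear_xy": v.tolist(), "angular_z": w}

print(locomotion_command([0, 0, 1.0], [0.02, 0, 1.0], 0.0, 0.01, 0.02))
```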
In some embodiments, the system further comprises a head-mounted display device and a positioning base station, where the head-mounted display device is used to receive and display image data acquired by the humanoid robot, and the positioning base station is used to provide spatial position information and/or posture information to the end teleoperation control module 201 and the VR positioning trackers 301, respectively.
In some embodiments, the head-mounted display device may be a VR head-mounted display device, configured to receive image data from the humanoid robot and present the image data in the VR head-mounted display, so that an operator can see the environment in which the humanoid robot is currently located, to assist the operator in teleoperation.
In some implementations, the head mounted display device may also feed back its spatial location information and/or pose information to the data processing module 203.
In some embodiments, the data processing module 203 may determine the desired position and orientation of the humanoid robot's head by analyzing the spatial position information and/or posture information of the head-mounted display device, and generate a fourth operation instruction for the head accordingly. Finally, the generated fourth operation instruction is sent to the humanoid robot to control its head to execute the corresponding action.
Optionally, the fourth operation instruction may include, but is not limited to:
and the steering control instruction is used for controlling a specific direction or angle of the head of the humanoid robot to steer.
And the nod control instruction is used for controlling the head of the humanoid robot to perform the action of nod up and down.
And the head shaking control instruction is used for controlling the head of the humanoid robot to shake left and right.
By controlling the head steering of the humanoid robot, its field of view can be adjusted, helping the robot locate and manipulate targets more accurately; controlling nodding or head shaking can likewise help the robot better avoid obstacles. One possible derivation of such a head command is sketched below.
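In this sketch, a head-orientation command is derived from the HMD pose; the quaternion convention, Euler extraction, and clamping limits are assumptions, not the patented method.

```python
import math

def head_command_from_hmd(hmd_quat):
    """Convert an HMD orientation quaternion (w, x, y, z) into neck
    yaw/pitch targets, clamped to an assumed mechanical range."""
    w, x, y, z = hmd_quat
    yaw = math.atan2(2 * (w * z + x * y), 1 - 2 * (y * y + z * z))
    pitch = math.asin(max(-1.0, min(1.0, 2 * (w * y - z * x))))
    clamp = lambda a, lim: max(-lim, min(lim, a))
    return {"neck_yaw": clamp(yaw, math.radians(60)),
            "neck_pitch": clamp(pitch, math.radians(30))}

print(head_command_from_hmd((0.996, 0.0, 0.087, 0.0)))  # slight pitch
```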
Optionally, the positioning base station utilizes any positioning technique (e.g., optical positioning, electromagnetic positioning, ultrasonic positioning, laser positioning, inertial navigation, etc.) to accurately determine the position of each device (including the end teleoperational control module, head mounted display device, and VR positioning tracker) in three-dimensional space. In addition to the location information, the positioning base station may also provide attitude information, i.e. the orientation and tilt angle of the device in space.
In some embodiments, the positioning base station and each device perform data transmission through the communication module, so that the real-time performance and accuracy of information are ensured.
In the embodiment of the application, for the head-mounted display equipment, the accurate position information provided by the positioning base station enables the user to be immersed in a more real and stereoscopic virtual environment. Meanwhile, the cooperation of the terminal teleoperation control module 201, the VR positioning tracker 301 and the positioning base station enables the actions and the gestures of the user to be tracked and fed back in real time, so that the overall performance and the reliability of the system are improved.
In some embodiments, the data processing module 203 may run a corresponding operating mode depending on the type and number of VR devices connected to the system. For example, when the connected devices are the end teleoperation control module and the head-mounted display device, the system operates in an upper-limb teleoperation mode; when the connected devices are the end teleoperation control module, the head-mounted display device, and the VR positioning trackers, the system operates in a whole-body teleoperation mode, as the dispatch sketch below illustrates.
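A minimal dispatch of this rule might look as follows; the device identifiers and the strictness of the checks are assumptions for illustration.

```python
def select_mode(devices: set) -> str:
    """Pick the operating mode from the set of connected VR devices,
    mirroring the rule described above."""
    upper = {"finger_tiger_left", "finger_tiger_right", "hmd"}
    if not upper <= devices:
        raise ValueError("need both finger-tiger devices and the head display")
    has_trackers = any(d.startswith("tracker_") for d in devices)
    return "whole_body" if has_trackers else "upper_limb"

print(select_mode({"finger_tiger_left", "finger_tiger_right", "hmd"}))
print(select_mode({"finger_tiger_left", "finger_tiger_right", "hmd",
                   "tracker_waist", "tracker_left_foot"}))
```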
Referring to fig. 4, fig. 4 is a schematic diagram of a teleoperation control flow of a humanoid robot according to an embodiment of the present application.
In the upper limb teleoperation mode, the teleoperation control flow of the humanoid robot comprises the following steps:
S11, initializing a system.
In some embodiments, the system initialization includes:
Start the positioning base station: ensure that each VR device is within the coverage area of the positioning base station, so that subsequent device positioning and tracking can proceed normally.
Start the VR devices: turn on the left finger-tiger VR device, the right finger-tiger VR device, and the VR head-mounted display. The user observes the relative positions of the left and right finger-tiger VR devices in the VR head display.
Confirm the device positions: if the positions of the left and right finger-tiger VR devices displayed in the VR head display coincide, their relative positions are correct.
If the two finger-tiger VR devices do not coincide in the VR head display, initialize their positions: place the two devices at the predetermined positions, start the initialization function in the data processing module, and wait for the system to complete initialization.
After initialization, repeat this procedure until the displayed positions of the two finger-tiger VR devices coincide; a minimal sketch of this check follows.
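The coincidence check and retry loop could be sketched as below, with the 2 cm tolerance, retry count, and callback interface all assumed for illustration.

```python
import numpy as np

def wait_until_aligned(read_left, read_right, run_init,
                       tol_m=0.02, max_tries=5):
    """Repeat initialization until the two finger-tiger devices' reported
    positions coincide in the head display."""
    for _ in range(max_tries):
        gap = np.linalg.norm(np.asarray(read_left()) - np.asarray(read_right()))
        if gap <= tol_m:
            return True        # displayed positions coincide; done
        run_init()             # devices placed at predetermined positions,
                               # then the initialization function is started
    return False               # still misaligned after max_tries
```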
S12, equipment wearing and data transmission.
In some implementations, the device wearing and data transmission includes:
and the wearing equipment is characterized in that an operator wears the finger tiger VR equipment and the VR head display equipment.
Gesture and position feedback, namely feeding back the gesture and/or the tail end gesture of the finger of an operator and the spatial position and/or gesture of the user in real time by the finger-tiger VR device, and transmitting the information to the positioning base station through the communication module. Meanwhile, the VR head display device can also transmit the spatial position and/or gesture information of the VR head display device to the positioning base station through the communication module.
And the humanoid robot transmits real-time image data to the data processing module through the camera. The data processing module sends the image data to the VR head display device, so that an operator can obtain current visual field information of the humanoid robot in real time.
The data processing module can also receive the whole body joint information of the humanoid robot and the state data of the end effector.
S13, mapping the data processing and the executor.
In some implementations, the data processing and executor mapping includes:
Process gesture and/or fingertip posture information: the data processing module receives the gesture information, fingertip posture information, and end effector type data from the finger-tiger VR devices and transmits this information to the actuator mapping module.
Generate standard hand information: the actuator mapping module maps the current operator's hand to the posture of the standard hand according to the operator's gestures and/or fingertip postures, and generates angle data for each joint of the standard hand.
Remap the standard hand to the actuator: depending on the type of end effector, the actuator mapping module converts the joint angle data of the standard hand into the desired joint angle data of the corresponding end effector. This process ensures coordination and accuracy between the standard hand and the humanoid robot's end effector.
Calculate the humanoid robot's joint angles: after receiving the actual position and posture of the end effector, the data processing module performs an inverse kinematics calculation to obtain the desired joint angle data of the robot's two arms.
S14, controlling and teleoperating the humanoid robot.
In some embodiments, humanoid robot control and teleoperation includes:
and transmitting the expected joint angle information, namely transmitting the expected joint angle information of the end effector and the double arms to the humanoid robot body by the data processing module.
Tracking control, namely tracking the expected joint angle of the humanoid robot body through a control algorithm (such as a PID control algorithm) and adjusting the action of the humanoid robot so as to complete the whole teleoperation process.
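For the tracking step, a per-joint PID loop of the kind referenced above could look like this; the gains and timestep are assumed values, not tuned parameters from the patent.

```python
class JointPID:
    """Minimal PID tracker for one joint's desired angle (gains assumed)."""

    def __init__(self, kp=40.0, ki=0.5, kd=2.0):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_err = 0.0

    def update(self, desired, actual, dt):
        err = desired - actual
        self.integral += err * dt              # accumulate steady-state error
        deriv = (err - self.prev_err) / dt     # rate of change of the error
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

pid = JointPID()
print(pid.update(desired=0.50, actual=0.47, dt=0.002))  # control effort
```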
Illustratively, in the whole-body teleoperation mode, the teleoperation control flow of the humanoid robot includes:
S21, initializing a system.
In some embodiments, the system initialization includes:
Start the positioning base station: ensure that each VR device is within the coverage area of the positioning base station, so that subsequent device positioning and tracking can proceed normally.
Start the VR devices: turn on the left finger-tiger VR device, the right finger-tiger VR device, the VR head-mounted display, and each VR positioning tracker. The user observes the relative positions of the finger-tiger VR devices and the VR positioning trackers in the VR head display.
Confirm the device positions: place the two finger-tiger VR devices and each VR positioning tracker in the initialization bracket and observe through the VR head display whether their relative positions are normal; if so, skip to the next step.
If not, initialize the positions: place the two finger-tiger VR devices and each VR positioning tracker at the predetermined bracket positions, start the initialization function in the data processing module, and wait for the system to complete initialization.
After initialization, re-observe whether the relative positions of the two finger-tiger VR devices and each VR positioning tracker are normal, and confirm that the movement direction and distance are correct. If they are still abnormal, initialization must be performed again.
S22, equipment wearing and data transmission.
In some implementations, the device wearing and data transmission includes:
Wear the devices: these include the finger-tiger VR devices, the VR head-mounted display, and the VR positioning trackers. Optionally, the user may wear one VR positioning tracker on the waist and one on each ankle.
Gesture and position feedback: the finger-tiger VR devices feed back the operator's gestures and/or fingertip postures and the user's spatial position and/or posture in real time, transmitting this information to the positioning base station through the communication module. Meanwhile, the VR head display and each VR positioning tracker also transmit their own spatial position and/or posture information to the positioning base station through the communication module.
Image transmission: the humanoid robot transmits real-time image data to the data processing module through its camera, and the data processing module forwards the image data to the VR head display, so that the operator sees the robot's current field of view in real time.
The data processing module can also receive the whole-body joint information of the humanoid robot and the state data of the end effector.
S23, mapping the data processing and the executor.
In some implementations, the data processing and executor mapping includes:
Process gesture and/or fingertip posture information: the data processing module receives the gesture information, fingertip posture information, and end effector type data from the finger-tiger VR devices and transmits this information to the actuator mapping module.
Generate standard hand information: the actuator mapping module maps the current operator's hand to the posture of the standard hand according to the operator's gestures and/or fingertip postures, and generates angle data for each joint of the standard hand.
Remap the standard hand to the actuator: depending on the type of end effector, the actuator mapping module converts the joint angle data of the standard hand into the desired joint angle data of the corresponding end effector. This process ensures coordination and accuracy between the standard hand and the humanoid robot's end effector.
Calculate the humanoid robot's joint angles: after receiving the actual position and posture of the end effector, the data processing module performs an inverse kinematics calculation to obtain the desired joint angle data of the robot's two arms.
Calculate speed and angular speed commands: the data processing module analyzes the change in position of each VR positioning tracker and maps it into motion data for the humanoid robot.
For example, when the operator walks straight, the change is mapped to the robot's moving speed; when the operator turns, it is mapped to the robot's angular speed, as in the sketch below.
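One way to realize this straight-versus-turn rule is sketched below; the gains, the turn threshold, and the planar displacement model are illustrative assumptions.

```python
import math

def classify_and_map(dx, dy, dyaw, dt, k_v=1.0, k_w=1.0, turn_thresh=0.2):
    """Map tracker displacement over dt to a velocity command, treating the
    motion as a turn when the heading change dominates."""
    if abs(dyaw) / dt > turn_thresh:           # operator is turning
        return 0.0, k_w * dyaw / dt            # angular speed command
    return k_v * math.hypot(dx, dy) / dt, 0.0  # forward speed command

print(classify_and_map(0.02, 0.0, 0.001, 0.02))  # walking straight
print(classify_and_map(0.00, 0.0, 0.010, 0.02))  # turning in place
```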
S24, controlling and teleoperating the humanoid robot.
In some embodiments, humanoid robot control and teleoperation includes:
Send the desired joint angle information: the data processing module sends the desired joint angle information of the end effector and the two arms to the humanoid robot body, together with the determined speed and angular speed information.
Tracking control: the humanoid robot body tracks the desired joint angles of the upper limbs through a control algorithm and adjusts the upper-limb motion, while the lower limbs complete walking and steering by tracking the speed and angular speed commands.
The teleoperation control system of the humanoid robot provided by the embodiment of the application has the following beneficial effects:
1. Compared with a common VR-handle teleoperation system, this system adopts finger-tiger VR devices and can adapt to various types of end effectors. For example, when controlling a five-finger dexterous hand, the opening and closing of each individual finger can be precisely regulated, whereas a traditional VR handle can only command the dexterous hand to open or close to a fixed degree. The system therefore greatly improves the flexibility and precision of the end effector and enhances the diversity and complexity of humanoid robot operations.
2. The cost can be reduced, and compared with the motion capture glove, the finger tiger VR device has obvious price advantage. By providing low-cost, high-precision gesture recognition and capture techniques, the hardware cost of the overall system is effectively reduced. This cost advantage makes the system more competitive in terms of economy, reducing the threshold for teleoperation using VR devices, helping to drive the wide application of this technology.
3. The system supports personalized optimization according to hand shapes of different operators. The adaptability of the system to various users is enhanced by mapping the hand shape of the operator to the standard hand shape and then mapping the standard hand shape to the joint angle of the manipulator. The optimization process enables operators with different hands to use the system seamlessly, and improves the universality and user friendliness of a teleoperation system.
4. The system also expands the control range of teleoperation and combines the tracking of leg actions. The linear and angular speeds of the legs are converted into control commands by capturing leg movements using a VR tracker to control the robot to advance and steer. The integration of leg actions expands the control dimension of the robot, realizes teleoperation control of the whole body of the robot, and further improves the expandability and flexibility of the system.
In the several embodiments provided by the present application, it should be understood that the disclosed system may be implemented in other ways. For example, the above-described division of modules is merely a logic function division, and there may be another division manner in actual implementation, for example, a plurality of modules may be combined or may be integrated into another system, or some features may be omitted or not performed.
The modules described above as separate components may or may not be physically separate, and components shown as modules may or may not be physical units, may be located in one place, or may be distributed over multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purposes of each solution in this embodiment.
In addition, each functional module in the embodiments of the present application may be integrated in one processing unit, or each module may exist alone physically, or two or more modules may be integrated in one unit. The units formed by the modules can be realized in a form of hardware or a form of hardware and software functional units.
The integrated modules, which are implemented in the form of software functional modules, may be stored in a computer readable storage medium. The software functional modules described above are stored in a storage medium and include instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) or processor to perform some of the steps provided by the various embodiments of the application.
It should be appreciated that the processor may be a central processing unit (Central Processing Unit, CPU), another general-purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), etc. A general-purpose processor may be a microprocessor or any conventional processor. The steps of the methods disclosed in connection with the application may be embodied directly in a hardware processor for execution, or executed by a combination of hardware and software modules within a processor.
The memory may include high-speed memory, and may further include nonvolatile storage such as at least one magnetic disk memory; it may also be a USB flash drive, a removable hard disk, a read-only memory, a magnetic disk, or an optical disk.
The bus may be an Industry Standard Architecture (ISA) bus, a Peripheral Component Interconnect (PCI) bus, or an Extended Industry Standard Architecture (EISA) bus, among others. Buses may be divided into address buses, data buses, control buses, etc. For ease of illustration, the buses in the drawings of the application are not limited to a single bus or a single type of bus.
The storage medium may be implemented by any type or combination of volatile or nonvolatile memory devices such as static random access memory, electrically erasable programmable read only memory, magnetic memory, flash memory, magnetic or optical disk. A storage media may be any available media that can be accessed by a general purpose or special purpose computer.
It should be noted that the above embodiments are merely for illustrating the technical solution of the present application and not for limiting the same, and although the present application has been described in detail with reference to the above embodiments, it should be understood by those skilled in the art that the technical solution described in the above embodiments may be modified or some or all of the technical features may be equivalently replaced, and these modifications or substitutions do not make the essence of the corresponding technical solution deviate from the scope of the technical solution of the embodiments of the present application.

Claims (7)

The data processing module is used for determining the expected position and/or posture of the end effector according to the second information, receiving actual position and/or posture data of the end effector fed back by the humanoid robot, determining deviation between the actual position and/or posture data and the expected position and/or posture according to the actual position and/or posture data of the end effector and the expected position and/or posture of the end effector, performing inverse kinematics calculation according to the deviation and the mechanical structure of the humanoid robot, determining expected joint angle data of two arms corresponding to the humanoid robot, generating a second operation instruction of the two arms according to the expected joint angle data of the two arms, and sending the second operation instruction to the humanoid robot.
CN202510304449.5A, priority date 2025-03-14: Teleoperation control system of humanoid robot. Active. Granted as CN119871445B (en).

Priority Applications (1)

Application Number: CN202510304449.5A, published as CN119871445B (en). Priority Date: 2025-03-14. Title: Teleoperation control system of humanoid robot.

Applications Claiming Priority (1)

Application Number: CN202510304449.5A, published as CN119871445B (en). Priority Date: 2025-03-14. Title: Teleoperation control system of humanoid robot.

Publications (2)

CN119871445A (en): published 2025-04-25
CN119871445B (en): published 2025-10-17


Citations (2)

* Cited by examiner, † Cited by third party
CN114924647A (en), priority 2022-05-27, published 2022-08-19, Chongqing Changan New Energy Automobile Technology Co., Ltd.: A vehicle control method, device, control device and medium based on gesture recognition *
CN119159602A (en), priority 2024-09-12, published 2024-12-20, South China University of Technology: Method and system for remotely controlling a humanoid robot via a tracker and a data glove *


Similar Documents

US20210205986A1 (en): Teleoperating of robots with tasks by mapping to human operator pose
Wilson et al.: Formulation of a new gradient descent MARG orientation algorithm: case study on robot teleoperation
Diftler et al.: Evolution of the NASA/DARPA Robonaut control system
Cerulo et al.: Teleoperation of the SCHUNK S5FH under-actuated anthropomorphic hand using human hand motion tracking
Fritsche et al.: First-person tele-operation of a humanoid robot
Fang et al.: A robotic hand-arm teleoperation system using human arm/hand with a novel data glove
US9193072B2 (en): Robot and control method thereof
US20240149458A1 (en): Robot remote operation control device, robot remote operation control system, robot remote operation control method, and program
Ben et al.: HOMIE: Humanoid loco-manipulation with isomorphic exoskeleton cockpit
Odesanmi et al.: Skill learning framework for human–robot interaction and manipulation tasks
CN108098780A (en): A kind of new robot apery kinematic system
JP7035309B2 (en): Master-slave system
JP2023507241A (en): A proxy controller suit with arbitrary dual-range kinematics
Falck et al.: DE VITO: A dual-arm, high degree-of-freedom, lightweight, inexpensive, passive upper-limb exoskeleton for robot teleoperation
Lee et al.: Exoskeletal master device for dual arm robot teaching
Park et al.: A whole-body integrated AVATAR system: implementation of telepresence with intuitive control and immersive feedback
Noccaro et al.: A teleoperated control approach for anthropomorphic manipulator using magneto-inertial sensors
Shanmugam et al.: A comprehensive review of haptic gloves: advances, challenges, and future directions
Bolder et al.: Visually guided whole body interaction
CN119871445B (en): Teleoperation control system of humanoid robot
CN119871445A (en): Teleoperation control system of humanoid robot
Sohn et al.: Recursive inverse kinematic analysis for humanoid robot based on depth camera data
Wang et al.: Intuitive and versatile full-body teleoperation of a humanoid robot
Brüggemann et al.: Coupled human-machine tele-manipulation
WO2023037966A1 (en): System and method for control of robot avatar by plurality of persons

Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
CB02: Change of applicant information

Country or region after: China

Address after: 201203 Shanghai Pudong New Area, China (Shanghai) Pilot Free Trade Zone, No. 835 and 937 Dengui Road, Main Building (Building 1), 3rd Floor, Room 331

Applicant after: Humanoid Robot (Shanghai) Co., Ltd.

Address before: 201210 Shanghai Pudong New Area free trade trial area, 1 spring 3, 400 Fang Chun road

Applicant before: Humanoid Robot (Shanghai) Co., Ltd.

Country or region before: China

GR01: Patent grant
