A Perception-Enhanced Human-Robot Skill Transfer Method for Reactive Interaction

  • Conference paper
  • Part of the book series: Lecture Notes in Computer Science (LNAI, volume 15052)
  • Included in the conference series: Towards Autonomous Robotic Systems (TAROS 2024)

Abstract

In this paper, we propose a trustworthy human-robot skill transfer framework, the interpretive and reactive dynamic system (IRDS), which combines behaviour trees (BTs) with dynamic movement primitives (DMPs) enhanced by feedback from perceived information, targeting dynamic and uncertain tasks and environments. Human sensorimotor control allows interaction with varied environments and the accomplishment of complex manipulation tasks under uncertainty; robots, however, still struggle to acquire this capability. In this work, we aim to transfer these reactive skills to robots, enabling them to interact with humans and environments under varying uncertainty. The main challenges of robot skill learning are generalization, safety and stability during autonomous skill learning and execution. A BT is used for task planning and reactive interaction during robot execution, while the dynamic-system-based model generates actions for the low-level compliant controller. The convergence of the proposed IRDS framework under dynamic interaction and disturbance is proved using control theory. We conducted simulations and physical experiments on real robots to evaluate generalization performance and convergence under uncertainty. The results show that reactivity, convergence and interaction performance can be guaranteed, and that the learned skills can be transferred among different physical robot platforms.
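The abstract couples BT-based task planning with a DMP-driven dynamic system that is guaranteed to converge to its goal. As a rough illustration of that convergence property, the sketch below rolls out a minimal one-dimensional discrete DMP in Python. This is the standard textbook DMP formulation, not the paper's IRDS implementation; the gains, phase constant and forcing-term weights `w` are all illustrative assumptions.

```python
import numpy as np

def dmp_rollout(x0, g, w, tau=1.0, dt=0.01, K=100.0, D=None, alpha_s=4.0):
    """Integrate a 1-D discrete DMP from x0 toward goal g.

    tau * v' = K*(g - x) - D*v + (g - x0)*f(s)   (transformation system)
    tau * x' = v
    tau * s' = -alpha_s * s                       (phase system)
    """
    D = 2.0 * np.sqrt(K) if D is None else D       # critical damping by default
    n = len(w)
    c = np.exp(-alpha_s * np.linspace(0.0, 1.0, n))  # basis centres in phase space
    h = n / (c ** 2)                                 # basis widths
    x, v, s = float(x0), 0.0, 1.0
    traj = [x]
    for _ in range(int(tau / dt)):
        psi = np.exp(-h * (s - c) ** 2)
        f = (psi @ w) / (psi.sum() + 1e-10) * s      # forcing term, vanishes as s -> 0
        dv = (K * (g - x) - D * v + (g - x0) * f) / tau
        v += dv * dt
        x += (v / tau) * dt
        s += (-alpha_s * s / tau) * dt
        traj.append(x)
    return np.array(traj)
```

With zero weights the forcing term vanishes and the system is a critically damped spring, so the rollout converges to the goal regardless of the start state; learned weights shape the transient without breaking that guarantee, which is the stability property the paper's analysis builds on.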



Author information

Authors and Affiliations

  1. School of Computer Science and Electronic Engineering, University of Essex, Wivenhoe Park, Colchester, CO4 3SQ, UK

    Weiyong Si

  2. South China University of Technology, Guangzhou, 510640, China

    Jiale Dong

  3. Bristol Robotics Lab, University of the West of England, Bristol, BS16 1QY, UK

    Ning Wang

  4. Department of Computer Science, University of Liverpool, Liverpool, L69 3BX, UK

    Chenguang Yang


Corresponding author

Correspondence to Chenguang Yang.

Editor information

Editors and Affiliations

  1. Department of Electronic and Electrical Engineering, Brunel University London, London, UK

    M. Nazmul Huda

  2. Department of Mechanical and Aerospace Engineering, Brunel University London, Uxbridge, UK

    Mingfeng Wang

  3. Department of Electronic and Electrical Engineering, Brunel University London, London, UK

    Tatiana Kalganova

Copyright information

© 2025 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Si, W., Dong, J., Wang, N., Yang, C. (2025). An Perception Enhanced Human-Robot Skill Transfer Method for Reactive Interaction. In: Huda, M.N., Wang, M., Kalganova, T. (eds.) Towards Autonomous Robotic Systems. TAROS 2024. Lecture Notes in Computer Science, vol. 15052. Springer, Cham. https://doi.org/10.1007/978-3-031-72062-8_6


