
Using sim-to-real transfer learning to close gaps between simulation and real environments through reinforcement learning

  • Original Article
  • Published in: Artificial Life and Robotics

Abstract

We aim to develop an autonomous mobile robot that supports warehouse workers and reduces their burden. The proposed robot acquires, via reinforcement learning, a state-action policy for circumventing obstacles and reaching a destination using a LiDAR sensor. In real-world applications of reinforcement learning, policies learned in a simulation environment are commonly transferred to the real robot; however, uncertainties that the simulation does not capture, such as friction and sensor noise, degrade the performance of the transferred policy. To address this problem, we propose a method that improves the action control of an omni-wheel robot via transfer learning in the real environment. As an experiment, we searched for a route to a goal in the real environment using the result of transfer learning and verified the effectiveness of the acquired policy.
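The abstract does not specify the learning algorithm or how the simulation-trained policy is reused on the real robot, so the following is only a minimal sketch of the general sim-to-real transfer idea it describes: a value function is first learned cheaply in simulation and then used to initialize learning on the real robot, which is fine-tuned with far fewer real-world episodes. It assumes tabular Q-learning, a discretized LiDAR state space, and hypothetical sim_env/real_env objects exposing reset() and step(action) -> (next_state, reward, done); none of these details come from the article.

```python
import numpy as np

# Minimal sketch of sim-to-real policy transfer with tabular Q-learning.
# Assumptions (not from the article): discretized LiDAR readings form the
# state index, the robot chooses among a small set of omni-wheel motion
# commands, and sim_env / real_env expose reset() and
# step(action) -> (next_state, reward, done).

N_STATES = 512      # e.g., LiDAR scan discretized into a finite state index
N_ACTIONS = 5       # e.g., forward, back, left, right, stop

def q_learning(env, q_table, episodes, alpha=0.1, gamma=0.99, epsilon=0.1):
    """Update q_table in place by interacting with env."""
    for _ in range(episodes):
        state = env.reset()
        done = False
        while not done:
            # epsilon-greedy action selection
            if np.random.rand() < epsilon:
                action = np.random.randint(N_ACTIONS)
            else:
                action = int(np.argmax(q_table[state]))
            next_state, reward, done = env.step(action)
            # standard Q-learning update toward the one-step TD target
            td_target = reward + gamma * np.max(q_table[next_state])
            q_table[state, action] += alpha * (td_target - q_table[state, action])
            state = next_state
    return q_table

# 1) Learn a policy in simulation (many cheap episodes).
q_sim = np.zeros((N_STATES, N_ACTIONS))
# q_sim = q_learning(sim_env, q_sim, episodes=10_000)

# 2) Transfer: initialize the real-robot Q-table from the simulation result,
#    then fine-tune with a small number of (expensive) real-world episodes.
q_real = q_sim.copy()
# q_real = q_learning(real_env, q_real, episodes=100)
```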





Author information

Authors and Affiliations

  1. Graduate School of Engineering, Nagoya Institute of Technology, Nagoya, Japan

    Yuto Ushida, Hafiyanda Razan, Shunta Ishizuya, Takuto Sakuma & Shohei Kato

  2. Frontier Research Institute for Information Science, Nagoya Institute of Technology, Nagoya, Japan

    Shohei Kato


Corresponding author

Correspondence to Yuto Ushida.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This work was presented in part at the 26th International Symposium on Artificial Life and Robotics (Online, January 21–23, 2021).

About this article


Cite this article

Ushida, Y., Razan, H., Ishizuya, S. et al. Using sim-to-real transfer learning to close gaps between simulation and real environments through reinforcement learning. Artif Life Robotics 27, 130–136 (2022). https://doi.org/10.1007/s10015-021-00713-y
