Statistical Learning of Robotic Demonstration Trajectories Based on Multicriteria Segmentation and Multi-Demonstration Alignment (HSMM)

Abstract

Statistical learning of robotic demonstration trajectories based on multicriteria segmentation and multi-demonstration alignment (HSMM) addresses complex tasks in human-robot interaction and intelligent manufacturing. The goal of this study is to automatically extract generalized key segments from multiple robotic demonstration trajectories without prior annotations and to build statistical and parametric models for universal trajectory reproduction across diverse tasks and conditions. To this end, the research tasks include multicriteria segmentation (speed, curvature, acceleration, direction change), trajectory alignment using hidden semi-Markov models (HSMM), and the subsequent construction of statistical representations (ProMP, GMM/GMR, DMP). The proposed methodology begins with smoothing the raw data and identifying key points via topological simplification and non-maximum suppression; an HSMM then ensures consistent segmentation of multiple demonstrations into characteristic segments. Experiments confirm the effectiveness of the approach, demonstrating low reconstruction error alongside improved data compression and preservation of key actions. The novelty and practical significance of the study lie in its potential industrial applications (such as welding and painting) and in prospective extensions of the method to more dynamic, non-stationary scenarios that require adaptive and statistically grounded trajectory planning.
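
As a rough illustration of the key-point extraction step described in the abstract, the sketch below smooths a raw demonstration with a Savitzky-Golay filter, computes speed, curvature, acceleration, and direction-change criteria, and selects salient points using a prominence threshold (used here as a simple stand-in for topological persistence simplification) followed by non-maximum suppression. This is a minimal sketch under assumed data shapes, with illustrative function and parameter names (extract_key_points, persist_thresh, nms_radius); it is not the authors' implementation.

import numpy as np
from scipy.signal import savgol_filter, find_peaks

def extract_key_points(traj, dt=0.01, window=21, poly=3,
                       persist_thresh=0.2, nms_radius=15):
    """traj: (T, 3) array of end-effector positions sampled with step dt."""
    # 1) Savitzky-Golay smoothing of each coordinate (Savitzky & Golay, 1964).
    smooth = savgol_filter(traj, window_length=window, polyorder=poly, axis=0)

    # 2) Kinematic criteria from finite differences.
    vel = np.gradient(smooth, dt, axis=0)
    acc = np.gradient(vel, dt, axis=0)
    speed = np.linalg.norm(vel, axis=1)
    acc_mag = np.linalg.norm(acc, axis=1)
    # Curvature of a space curve: |v x a| / |v|^3.
    curvature = np.linalg.norm(np.cross(vel, acc), axis=1) / np.maximum(speed, 1e-8) ** 3
    # Direction change between consecutive velocity directions (radians).
    unit = vel / np.maximum(speed[:, None], 1e-8)
    dir_change = np.zeros(len(traj))
    dir_change[1:] = np.arccos(np.clip(np.sum(unit[1:] * unit[:-1], axis=1), -1.0, 1.0))

    # 3) Combine the criteria into a single normalized saliency score;
    #    low speed is treated as salient (pauses often mark key actions).
    def norm(x):
        return (x - x.min()) / (np.ptp(x) + 1e-8)
    score = norm(curvature) + norm(acc_mag) + norm(dir_change) + norm(-speed)

    # 4) Keep local maxima with sufficient prominence, then apply non-maximum
    #    suppression so selected key points are at least nms_radius samples apart.
    peaks, props = find_peaks(score, prominence=persist_thresh)
    kept = []
    for idx in peaks[np.argsort(props["prominences"])[::-1]]:
        if all(abs(idx - k) > nms_radius for k in kept):
            kept.append(int(idx))
    return smooth, np.array(sorted(kept))

# Example: a noisy synthetic demonstration with a sharp turn at t = 0.5.
t = np.linspace(0, 1, 500)
demo = np.stack([t, np.abs(t - 0.5), 0.1 * np.sin(6 * np.pi * t)], axis=1)
demo += 0.003 * np.random.randn(*demo.shape)
smooth, keys = extract_key_points(demo, dt=t[1] - t[0])
print("key-point indices:", keys)

In the full pipeline described by the authors, key points found this way would feed the HSMM-based alignment across demonstrations; the saliency weighting above is only one plausible way to combine the four criteria.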

About the authors

Tianci Gao

Bauman Moscow State Technical University

Author for correspondence.
Email: Gaotianci0088@gmail.com
ORCID iD: 0009-0003-1359-2180
SPIN-code: 4254-9225
Scopus Author ID: 59144647300

Postgraduate Student of the Department of System Analysis, Control Science, and Information Processing

Russian Federation, Moscow

Dmitry D. Dmitriev

Bauman Moscow State Technical University

Email: dddbmstu@gmail.com
SPIN-code: 2264-1653

Cand. Sci. (Eng.), Associate Professor of the Department of System Analysis, Control Science, and Information Processing

Russian Federation, Moscow

Konstantin A. Neusypin

Bauman Moscow State Technical University

Email: neysipin@mail.ru
ORCID iD: 0000-0001-6703-6735
SPIN-code: 2860-1736
Scopus Author ID: 6602995907

Dr. Sci. (Eng.), Professor of the Department of System Analysis, Control Science, and Information Processing

Russian Federation, Moscow

References

  1. Savitzky A., Golay M.J.E. Smoothing and differentiation of data by simplified least squares methods. Analytical Chemistry. 1964. Vol. 36. No. 8. Pp. 1627–1639. doi: 10.1021/ac60214a047.
  2. Cohen-Steiner D., Edelsbrunner H., Harer J. Stability of persistence diagrams. In: Proceedings of the Twenty-First Annual Symposium on Computational Geometry. ACM, 2005. Pp. 263–271. doi: 10.1145/1064092.1064133.
  3. Liu C., Ren B., Fu D., Li M. A GNSS composite interference recognition method based on YOLOv5. In: IEEE 6th International Conference on Civil Aviation Safety and Information Technology (ICCASIT). IEEE, 2024. Pp. 1157–1162. doi: 10.1109/ICCASIT62299.2024.10828065.
  4. Liu T., Zhu K., Zeng L. Diagnosis and prognosis of degradation process via hidden semi-Markov model. IEEE/ASME Transactions on Mechatronics. 2018. Vol. 23. No. 3. Pp. 1456–1466. doi: 10.1109/TMECH.2018.2823320.
  5. Osa T., Pajarinen J., Neumann G. et al. An algorithmic perspective on imitation learning. Foundations and Trends® in Robotics. 2018. Vol. 7. No. 1–2. Pp. 1–179. doi: 10.1561/2300000053.
  6. Calinon S. Gaussians on Riemannian manifolds: Applications for robot learning and adaptive control. IEEE Robotics & Automation Magazine. 2020. Vol. 27. No. 2. Pp. 33–45. doi: 10.1109/MRA.2020.2980548.
  7. Xie J., Yan H., Wang J., Li J., Chen B. Unsupervised approach for multi-modality telerobotic trajectory segmentation. IEEE Internet of Things Journal. 2024. doi: 10.1109/JIOT.2024.3412134.
  8. Yu L., Bai S. A modified dynamic movement primitive algorithm for adaptive gait control of a lower limb exoskeleton. IEEE Transactions on Human-Machine Systems. 2024. doi: 10.1109/THMS.2024.3458905.
  9. Kulak T., Girgin H., Odobez J.M. et al. Active learning of Bayesian probabilistic movement primitives. IEEE Robotics and Automation Letters. 2021. Vol. 6. No. 2. Pp. 2163–2170. doi: 10.1109/LRA.2021.3060414.
  10. Sung H.G. Gaussian mixture regression and classification. PhD dissertation. Rice University, 2004.
  11. Mandlekar A., Zhu Y., Garg A. et al. Roboturk: A crowdsourcing platform for robotic skill learning through imitation. In: Conference on Robot Learning. PMLR, 2018. Pp. 879–893.
  12. Paraschos A., Daniel C., Peters J.R. et al. Probabilistic movement primitives. In: Advances in Neural Information Processing Systems. 2013. P. 26.
  13. Vemuri N., Thaneeru N. Enhancing human-robot collaboration in Industry 4.0 with AI-driven HRI. Power System Technology. 2023. Vol. 47. No. 4. Pp. 341–358. doi: 10.52783/pst.196.
  14. Bishop C.M., Nasrabadi N.M. Pattern recognition and machine learning. New York: Springer, 2006.
  15. Zhang T., Mo H. Reinforcement learning for robot research: A comprehensive review and open issues. International Journal of Advanced Robotic Systems. 2021. Vol. 18. No. 3. doi: 10.1177/17298814211007305.
  16. Li G., Jin Z., Volpp M. et al. ProDMP: A unified perspective on dynamic and probabilistic movement primitives. IEEE Robotics and Automation Letters. 2023. Vol. 8. No. 4. Pp. 2325–2332. doi: 10.1109/LRA.2023.3248443.
  17. Wong C.C., Vong C.M. Persistent homology-based graph convolution network for fine-grained 3D shape segmentation. In: Proceedings of the IEEE/CVF International Conference on Computer Vision. 2021. Pp. 7098–7107. doi: 10.1109/ACCESS.2022.3218653.
  18. Lu Y., Jiang B., Liu N. et al. CrossPrune: Cooperative pruning for camera – LiDAR fused perception models of autonomous driving. Knowledge-Based Systems. 2024. Vol. 289. Art. 111522. doi: 10.1016/j.knosys.2024.111522.
  19. Neubeck A., Van Gool L. Efficient non-maximum suppression. In: 18th International Conference on Pattern Recognition (ICPR’06). IEEE, 2006. Vol. 3. Pp. 850–855. doi: 10.1109/ICPR.2006.479.
  20. Gervet T., Xian Z., Gkanatsios N. et al. Act3D: 3D feature field transformers for multi-task robotic manipulation. In: 7th Annual Conference on Robot Learning, 2023.

Supplementary files

Fig. 1. General structure of the proposed method
Fig. 2. Original signal and detected local extrema
Fig. 3. Dependence of the number of extrema on the persistency threshold
Fig. 4. Comparison of extrema before and after simplification
Fig. 5. Approximate distribution of several demonstration trajectories in 3D space (a) and results of extracting several features from one demonstration trajectory (b)
Fig. 6. Result of key point extraction on a single trajectory (solid line represents the original trajectory, and the points indicate the selected key points). Projections on the XY, XZ, and YZ planes are shown, clearly highlighting the key nodes where speed or direction changes
Fig. 7. Comparison of the original and reconstructed trajectories
Fig. 8. Results of the aligned averaging of several demonstrations using HSMM and ProMP (thin curves represent the original trajectories, while the thick curve shows the average trajectory formed from the common key points and distributions). Critical moments of action are captured, as shown in the zoomed-in fragment of the figure