[1] YIN C C, ZHANG Q J. Review of research on robot programming by learning from demonstration[J]. Journal of Frontiers of Computer Science and Technology, 2020, 14(8): 1275-1287.
[2] ZHANG H X, LYU X Y, LENG W C, et al. Recent advances on vision-based robot learning by demonstration[J]. Recent Patents on Mechanical Engineering, 2018, 11(4): 269-284.
[3] OSA T, ESFAHANI A M G, STOLKIN R, et al. Guiding trajectory optimization by demonstrated distributions[J]. IEEE Robotics and Automation Letters, 2017, 2(2): 819-826.
[4] LI X. Research on adaptive improvement strategy of GMM/GMR toward learning from demonstration (LfD)[D]. Shenyang: Northeastern University, 2019.
[5] YANG D W, LYU Q, LIAO G, et al. Learning from demonstration: dynamical movement primitives based reusable suturing skill modelling method[C]//Proceedings of the 2018 Chinese Automation Congress, 2018.
[6] KORDIA A H, MELO F S. An end-to-end approach for learning and generating complex robot motions from demonstration[C]//Proceedings of the 16th IEEE International Conference on Control, Automation, Robotics and Vision, 2020.
[7] MA Y, XIE Y, ZHU W, et al. An efficient robot precision assembly skill learning framework based on several demonstrations[J]. IEEE Transactions on Automation Science and Engineering, 2022: 1-13.
[8] ETEKE C, KEBUDE D, AKGUN B. Reward learning from very few demonstrations[J]. IEEE Transactions on Robotics, 2021, 37(3): 893-904.
[9] WACHTER M, SCHULZ S, ASFOUR T, et al. Action sequence reproduction based on automatic segmentation and object-action complexes[C]//Proceedings of the 2013 13th IEEE-RAS International Conference on Humanoid Robots, 2013.
[10] BALAKUNTALA M V, VENKATESH V L N, BINDU J P, et al. Extending policy from one-shot learning through coaching[C]//Proceedings of the 2019 28th IEEE International Conference on Robot and Human Interactive Communication, 2019: 1-7.
[11] ZHAO L. Learning hybrid force-position skills from demonstration for robotic assembly tasks[D]. Shenyang: Northeastern University, 2020.
[12] WANG C Y. Learning from demonstration for humanoid robot arm based on Kinect[D]. Harbin: Harbin Institute of Technology, 2017.
[13] SUN M B. Research on similarity measurement method of time series data based on dynamic time warping[D]. Chongqing: Chongqing University of Posts and Telecommunications, 2020.
[14] YANG J, KANG N. Research on the dynamic time warping (DTW) algorithm[J]. Science and Technology & Innovation, 2016(4): 11-12.
[15] XIA H S. Research on similarity measurement method based on dynamic time warping in time series data[D]. Chongqing: Chongqing University of Posts and Telecommunications, 2021.
[16] CHEN J, LAU H Y K, XU W J, et al. Towards transferring skills to flexible surgical robots with programming by demonstration and reinforcement learning[C]//Proceedings of the 2016 8th International Conference on Advanced Computational Intelligence, 2016.
[17] QIAO S J, JIN K, HAN N, et al. Trajectory prediction algorithm based on Gaussian mixture model[J]. Journal of Software, 2015, 26(5): 1048-1063.
[18] ZHANG H X, LYU X Y, LENG W C, et al. Recent advances on vision-based robot learning by demonstration[J]. Recent Patents on Mechanical Engineering, 2018, 11(4): 269-284.
[19] LI S W. Target detection and license plate recognition system based on YOLOv5 algorithm[J]. Electronic Technology & Software Engineering, 2022(1): 138-141.
[20] DUAN A, LI L, YANG X. Image recognition and classification algorithm based on AlexNet[J]. Journal of Tianjin University of Technology and Education, 2022, 32(1): 63-66.