A Polynomial-Fitting-Based Method for Classifying Model Catwalk Actions
  • English title: Classification of catwalk based on polynomial fitting
  • Authors: 童基均; 刘宇; 常晓龙; 张瑾
  • English authors: TONG Jijun; LIU Yu; CHANG Xiaolong; ZHANG Jin (School of Information Science and Technology; School of Fashion Design & Engineering, Zhejiang Sci-Tech University)
  • Keywords: joint point detection; Procrustes analysis; polynomial fitting; data dimension reduction; action evaluation
  • Journal: Journal of Zhejiang Sci-Tech University (Natural Sciences Edition) (浙江理工大学学报(自然科学版), code ZJSG)
  • Affiliations: School of Information Science and Technology, Zhejiang Sci-Tech University; School of Fashion Design & Engineering, Zhejiang Sci-Tech University
  • Online publication date: 2018-12-01
  • Year: 2019
  • Volume: 41
  • Issue: 2
  • Pages: 88-94 (7 pages)
  • CN: 33-1338/TS
  • Funding: Zhejiang Provincial Key R&D Program (2015C03023); "521 Talent Cultivation Program" of Zhejiang Sci-Tech University
  • Language: Chinese
  • Citation ID: ZJSG201902010
Abstract
To evaluate model catwalk actions automatically and accurately, this paper proposes an action classification method based on polynomial fitting. First, human joint points are detected with a method based on Part Affinity Fields, and the detected joints are calibrated by Procrustes analysis to eliminate differences in camera viewpoint and individual body shape. The joints are then divided into three parts — spine, upper limbs, and lower limbs — and the joint trajectories of each part are fitted with polynomials in both the horizontal and vertical directions. Next, dimensionality reduction is applied to the resulting polynomial coefficients, and the reduced coefficients serve as features for action evaluation, with an SVM classifier performing the final classification of catwalk actions. Experimental results show that the method achieves an evaluation accuracy of 71.9%, preliminarily realizing qualitative evaluation of human actions, and it offers a workable solution for assessing model catwalk performance.
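The pipeline in the abstract — Procrustes alignment of detected joints, polynomial fitting of each joint's horizontal and vertical trajectory, dimensionality reduction of the coefficients, and SVM classification — can be sketched as follows. This is a minimal illustration on synthetic data, not the authors' implementation: real joint sequences would come from a Part Affinity Fields detector such as OpenPose, PCA stands in for the paper's reduction step, and all function names, constants, and parameters here are illustrative assumptions.

```python
# Sketch of the abstract's pipeline on synthetic 2D joint sequences.
import numpy as np
from scipy.spatial import procrustes
from sklearn.decomposition import PCA
from sklearn.svm import SVC

rng = np.random.default_rng(0)
N_FRAMES, N_JOINTS, DEG = 30, 18, 3  # frames per clip, joints, polynomial degree

def trajectory_features(seq, ref_pose, deg=DEG):
    """Procrustes-align every frame to a reference pose, then fit a
    polynomial to each joint's horizontal and vertical trajectory and
    return the concatenated coefficients as one feature vector."""
    aligned = np.stack([procrustes(ref_pose, frame)[1] for frame in seq])
    t = np.linspace(0.0, 1.0, len(seq))
    coeffs = []
    for j in range(aligned.shape[1]):
        coeffs.append(np.polyfit(t, aligned[:, j, 0], deg))  # horizontal x(t)
        coeffs.append(np.polyfit(t, aligned[:, j, 1], deg))  # vertical y(t)
    return np.concatenate(coeffs)

# Two synthetic "action classes": joints drift along class-specific directions.
ref = rng.standard_normal((N_JOINTS, 2))
class_dirs = [rng.standard_normal((N_JOINTS, 2)) for _ in range(2)]

def make_clip(label):
    t = np.linspace(0.0, 1.0, N_FRAMES)[:, None, None]
    noise = 0.05 * rng.standard_normal((N_FRAMES, N_JOINTS, 2))
    return ref + 0.5 * t * class_dirs[label] + noise

y = np.array([i % 2 for i in range(40)])
X = np.stack([trajectory_features(make_clip(lbl), ref) for lbl in y])

# Reduce the coefficient vectors with PCA, then classify with an SVM.
pca = PCA(n_components=10).fit(X[:30])
feats = pca.transform(X)
clf = SVC(kernel="rbf").fit(feats[:30], y[:30])
acc = clf.score(feats[30:], y[30:])
print(f"held-out accuracy: {acc:.2f}")
```

Note that the paper fits the spine, upper-limb, and lower-limb joint groups separately; the sketch fits all joints uniformly for brevity.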
References
[1] Hofmann M, Gavrila D M. Multi-view 3D human pose estimation in complex environment[J]. International Journal of Computer Vision, 2012, 96(1): 103-124.
[2] Chan C K, Loh W P, Rahim I A. Human motion classification using 2D stick-model matching regression coefficients[J]. Applied Mathematics and Computation, 2016, 283(12): 70-89.
[3] Tong L, Hou Z, Peng L, et al. Human motion pattern recognition based on multi-channel sEMG time-series analysis[J]. Acta Automatica Sinica, 2014, 40(5): 810-821.
[4] Wolf A, Senesh M. Motion estimation using a statistical solid dynamic method[M]//Kecskeméthy A, Müller A. Computational Kinematics. Springer, Berlin, Heidelberg, 2009: 109-116.
[5] Liu L, Yang P, Liu Z. Human gait recognition using multi-kernel relevance vector machine[J]. Journal of Zhejiang University (Engineering Science), 2017, 51(3): 562-571.
[6] Felzenszwalb P F, Girshick R B, McAllester D, et al. Object detection with discriminatively trained part-based models[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2010, 32(9): 1627-1645.
[7] Fischler M A, Elschlager R A. The representation and matching of pictorial structures[J]. IEEE Transactions on Computers, 1973, C-22(1): 67-92.
[8] Dubout C, Fleuret F. Exact acceleration of linear object detectors[C]//European Conference on Computer Vision. Springer, Berlin, Heidelberg, 2012: 301-311.
[9] Pantic M, Rothkrantz L J M. Automatic analysis of facial expressions: The state of the art[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2000, 22(12): 1424-1445.
[10] Ouyang W, Chu X, Wang X. Multi-source deep learning for human pose estimation[C]//IEEE Conference on Computer Vision and Pattern Recognition. IEEE, 2014: 2337-2344.
[11] Chen X, Yuille A. Articulated pose estimation by a graphical model with image dependent pairwise relations[EB/OL]. (2014-11-04)[2018-06-11]. https://arxiv.org/abs/1407.3399.
[12] Jain A. Articulated people detection and pose estimation: Reshaping the future[C]//IEEE Conference on Computer Vision and Pattern Recognition. IEEE, 2012: 3178-3185.
[13] Gkioxari G, Hariharan B, Girshick R, et al. Using k-poselets for detecting people and localizing their keypoints[C]//IEEE Conference on Computer Vision and Pattern Recognition. IEEE Computer Society, 2014: 3582-3589.
[14] Cao Z, Simon T, Wei S E, et al. Realtime multi-person 2D pose estimation using Part Affinity Fields[C]//IEEE Conference on Computer Vision and Pattern Recognition. IEEE, 2017: 1302-1310.
[15] Mao H, Zhang F, Feng H. Research on evaluation method of flight actions based on singular value decomposition[J]. Computer Engineering and Applications, 2008, 44(32): 240-242.
[16] Ren S, Cao X, Wei Y, et al. Face alignment at 3000 fps via regressing local binary features[C]//IEEE Conference on Computer Vision and Pattern Recognition. IEEE, 2014: 1685-1692.
[17] Du K. Some studies on Procrustes problems[D]. Hangzhou: Zhejiang University, 2005: 7-22.
[18] Wei S E, Ramakrishna V, Kanade T, et al. Convolutional pose machines[C]//IEEE Conference on Computer Vision and Pattern Recognition. IEEE, 2016: 4724-4732.
[19] Güdükbay U, Demir İ, Dedeoğlu Y. Motion capture and human pose reconstruction from a single-view video sequence[J]. Digital Signal Processing, 2013, 23(5): 1441-1450.
[20] Meng Q, Wang P, Liu Z. Research on precise measurement technology for high-speed turnouts based on the least squares method[J]. Geomatics & Spatial Information Technology, 2018, 41(2): 186-189.
[21] Yuan W, Qu X, Ke L, et al. Palmprint recognition method based on principal component analysis reconstruction error[J]. Acta Optica Sinica, 2008, 28(10): 1903-1909.
