Convolutional Neural Networks for Human Activity Recognition Using Multi-location Wearable Sensors
  • Title (English): Convolutional Neural Networks for Human Activity Recognition Using Multi-location Wearable Sensors
  • Authors: 邓诗卓; 王波涛; 杨传贵; 王国仁
  • Authors (English): DENG Shi-Zhuo; WANG Bo-Tao; YANG Chuan-Gui; WANG Guo-Ren (School of Computer Science and Engineering, Northeastern University; School of Computer Science and Technology, Beijing Institute of Technology)
  • Keywords (Chinese): 人体活动识别; 卷积神经网络; 穿戴式传感器; 特征提取; 动作图片
  • Keywords (English): human activity recognition; convolutional neural network; wearable sensor; feature extraction; activity image
  • Journal (Chinese): 软件学报
  • Journal (English): Journal of Software
  • Journal code: RJXB
  • Affiliations: School of Computer Science and Engineering, Northeastern University; School of Computer Science and Technology, Beijing Institute of Technology
  • Publication date: 2019-03-15
  • Year: 2019
  • Volume: 30
  • Issue: 03
  • Pages: 228-247 (20 pages)
  • Record ID: RJXB201903014
  • CN: 11-2560/TP
  • Funding: National Natural Science Foundation of China (61872072, U1401256, 61173030, 61732003)
  • Language: Chinese
Abstract
With the development of artificial intelligence and the spread of wearable sensor devices, sensor-based human activity recognition (HAR) has attracted wide attention and has great application value. Extracting highly discriminative features is the key factor in improving HAR accuracy. Exploiting the ability of convolutional neural networks (CNN) to extract good features from raw data without domain knowledge, and addressing the fact that existing sensor-based HAR methods ignore the spatial dependence among multi-location data on a single axis of tri-axial sensors, this paper proposes two activity-image construction methods, T-2D and M-2D, which build per-axis activity images from multi-location single-axis sensor data and an activity image from the non-tri-axial sensors. It then proposes the convolutional network models T-2DCNN and M-2DCNN, which extract the spatio-temporal dependence of the three single-axis activity images and the temporal dependence of the non-tri-axial sensors, and concatenate the convolved features into high-level features for classification. To optimize the network structure and reduce the number of trainable parameters in the convolutional layers, weight-shared convolutional network models are further proposed. In comparative experiments with existing work on public datasets under default parameter settings, the proposed methods improve the F_1 value by up to 6.68% and 1.09% on OPPORTUNITY and SKODA, respectively; the effectiveness of the models is further verified with respect to the number of sensors and per-class recognition accuracy; and the weight-shared models reduce the number of trainable parameters while maintaining recognition performance.
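The per-axis activity-image construction described above can be pictured with a minimal NumPy sketch; the column ordering of the sensor window and the function name below are illustrative assumptions rather than the exact T-2D/M-2D layout from the paper:

    import numpy as np

    def single_axis_activity_images(window, num_locations):
        """Build one 2D activity image per axis from a window of multi-location
        tri-axial accelerometer data.
        window: shape (T, num_locations * 3), columns assumed ordered as
        (loc0_x, loc0_y, loc0_z, loc1_x, ...).
        Returns three (num_locations, T) images: rows are body locations that
        share the same axis, columns are time steps."""
        T = window.shape[0]
        data = window.reshape(T, num_locations, 3)        # (time, location, axis)
        return [data[:, :, axis].T for axis in range(3)]  # one image per axis

    # toy usage: 4 sensor locations, 64-sample window
    window = np.random.randn(64, 4 * 3)
    images = single_axis_activity_images(window, num_locations=4)
    print([img.shape for img in images])  # [(4, 64), (4, 64), (4, 64)]

Stacking locations of the same axis as image rows is what lets a 2D convolution capture the spatial dependence across body positions that 1D per-channel kernels miss.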
        Wearable sensor-based human activity recognition (HAR) plays a significant role in current smart applications, driven by advances in artificial intelligence and the popularity of wearable sensors. Salient and discriminative features improve the performance of HAR. To capture the local dependence over time and space on the same axis of multi-location sensor data with convolutional neural networks (CNN), which existing methods with 1D and 2D kernels ignore, this study proposes two methods, T-2D and M-2D. They construct three activity images, one from each axis of the multi-location 3D accelerometers, and one activity image from the other sensors. The CNN networks T-2DCNN and M-2DCNN, built on T-2D and M-2D respectively, fuse the four activity-image features for the classifier. To reduce the number of CNN weights, the weight-shared networks TS-2DCNN and MS-2DCNN are proposed. Under the default experimental settings on public datasets, the proposed methods outperform existing methods, with the F_1 value increased by up to 6.68% and 1.09% on OPPORTUNITY and SKODA, respectively. Both the naïve and the weight-shared models achieve better F_1 values in most cases across different numbers of sensors and across per-class comparisons.
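A compact PyTorch sketch of the weight-sharing idea follows: one convolutional branch is reused for the three single-axis activity images, a second branch handles the image built from the other sensors, and the extracted features are concatenated before the classifier. The layer sizes, image dimensions, and class count are assumptions for illustration, not the reported TS-2DCNN/MS-2DCNN architecture:

    import torch
    import torch.nn as nn

    class SharedBranchHAR(nn.Module):
        """Illustrative weight-shared multi-branch CNN for HAR activity images."""
        def __init__(self, num_classes):
            super().__init__()
            # one branch reused for the x-, y- and z-axis images (shared weights)
            self.axis_branch = nn.Sequential(
                nn.Conv2d(1, 16, kernel_size=(3, 5), padding=(1, 2)), nn.ReLU(),
                nn.MaxPool2d((1, 2)),
                nn.Conv2d(16, 32, kernel_size=(3, 5), padding=(1, 2)), nn.ReLU(),
                nn.AdaptiveAvgPool2d((1, 1)),
            )
            # separate branch for the non-tri-axial sensor image
            self.other_branch = nn.Sequential(
                nn.Conv2d(1, 32, kernel_size=(3, 5), padding=(1, 2)), nn.ReLU(),
                nn.AdaptiveAvgPool2d((1, 1)),
            )
            self.classifier = nn.Linear(32 * 4, num_classes)

        def forward(self, axis_images, other_image):
            # axis_images: (batch, 3, locations, T); other_image: (batch, 1, channels, T)
            feats = [self.axis_branch(axis_images[:, i:i + 1]).flatten(1) for i in range(3)]
            feats.append(self.other_branch(other_image).flatten(1))
            return self.classifier(torch.cat(feats, dim=1))

    # toy usage
    model = SharedBranchHAR(num_classes=18)
    x_axes = torch.randn(2, 3, 4, 64)   # batch of 2, three 4x64 single-axis images
    x_other = torch.randn(2, 1, 8, 64)  # one 8x64 image for the other sensors
    print(model(x_axes, x_other).shape)  # torch.Size([2, 18])

Because the axis branch is shared, its convolutional weights are trained once and applied to all three axis images, which is how the weight-shared variants cut the number of trainable parameters while keeping the same feature-fusion structure.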
