Perception and Feedback Mechanisms and Key Technologies for Multimodal Human-Robot Interaction with Service Robots
Abstract
With the growth of the aging population and rising care costs, care service robots and rehabilitation robots for assisting the elderly and the disabled have attracted the attention of many researchers. Among the many key technologies of service robots, information perception and feedback and the ability to interact with people are particularly important for achieving natural, harmonious communication between robot and user. The information perception and human-robot interaction capabilities of current service robots remain preliminary and need further improvement.
Addressing the characteristics of interaction between service robots and users such as the elderly and the disabled, this dissertation systematically studies human-robot information perception and feedback mechanisms as well as multimodal human-robot interaction methods, and implements a multimodal natural human-robot interaction system with multiple information perception and feedback functions.
The dissertation first studies natural interaction modalities such as the visual/eye-movement channel, the speech/auditory channel, and mechanical touch channels, and proposes several primary-auxiliary collaborative work modes for these modalities. Through this collaboration, the workload on any single modality is reduced, and the reliability and efficiency of interaction are improved. To improve the adaptability of the eye-gaze modality to changes in ambient illumination and in the user's head position, an illumination-adaptive image acquisition system and a threshold model for image binarization are established, and a compensation strategy and control method for the effects of head movement are proposed, improving the system's tolerance of head-position changes.
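As context for the illumination-adaptive binarization mentioned above, the sketch below shows one standard way such a threshold can be re-estimated on every captured frame: maximum-entropy (Kapur-style) histogram thresholding in Python. It is a minimal stand-in, assuming a uint8 grayscale frame; the dissertation's actual threshold model is not reproduced on this page.

```python
import numpy as np

def entropy_threshold(gray: np.ndarray) -> int:
    """Kapur-style maximum-entropy threshold for a uint8 grayscale frame."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    best_t, best_h = 0, -np.inf
    for t in range(255):
        w0, w1 = p[: t + 1].sum(), p[t + 1 :].sum()
        if w0 == 0.0 or w1 == 0.0:
            continue  # skip degenerate splits with an empty class
        q0 = p[: t + 1][p[: t + 1] > 0] / w0   # foreground/background
        q1 = p[t + 1 :][p[t + 1 :] > 0] / w1   # class distributions
        h = -(q0 * np.log(q0)).sum() - (q1 * np.log(q1)).sum()
        if h > best_h:
            best_t, best_h = t, h
    return best_t

# Re-estimating the threshold per frame lets the binarization track
# gradual changes in ambient illumination.
frame = np.full((120, 160), 70, dtype=np.uint8)   # dim background
frame[40:80, 60:100] = 180                        # bright region (e.g. a glint)
t = entropy_threshold(frame)
binary = frame > t                                # binary eye-feature mask
```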
An agent-based architecture for natural and harmonious human-robot interaction is then proposed. A user model and a human-robot interaction protocol are established, and, following the protocol format, a task management method based on the task life cycle is presented; frame-based information fusion is used to integrate, at the semantic level, the interaction semantics expressed through the different modalities. User telepresence in the "human-interface-machine (robot)" configuration is studied, and an architecture for multi-mode information perception and feedback, together with a method for fusing the feedback semantics of multiple modalities, is proposed, enhancing the user's perception of system feedback and the naturalness and harmony of interaction.
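As an illustration of frame-based fusion at the semantic level, the following Python sketch merges partial interpretations from two modalities into one task frame. The slot names, modality labels, and completion rule are assumptions made for this sketch, not the dissertation's actual frame schema.

```python
from dataclasses import dataclass, field
from typing import Optional, Tuple

@dataclass
class TaskFrame:
    """One interaction task; slot names here are illustrative only."""
    action: Optional[str] = None                   # e.g. "fetch"
    target: Optional[str] = None                   # e.g. "cup"
    location: Optional[Tuple[int, int]] = None     # e.g. gaze point on screen
    filled_by: dict = field(default_factory=dict)  # slot -> contributing modality

    def fill(self, slot: str, value, modality: str) -> None:
        # First writer wins; a fuller system would resolve conflicts by
        # modality confidence and recency, per the interaction protocol.
        if getattr(self, slot) is None:
            setattr(self, slot, value)
            self.filled_by[slot] = modality

    def complete(self) -> bool:
        return None not in (self.action, self.target, self.location)

frame = TaskFrame()
frame.fill("action", "fetch", "speech")         # from the utterance "fetch the cup"
frame.fill("target", "cup", "speech")
frame.fill("location", (312, 208), "eye-gaze")  # user fixates the cup icon
if frame.complete():
    print("dispatch:", frame.action, frame.target, "at", frame.location)
```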
Finally, in an experimental "human-interface-machine (robot)" system composed of a service robot and its user, cognitive-psychology experiments on the interaction interface identify interfaces suited to the individual characteristics of different users (the elderly, the disabled, and others). Ergonomics experiments on the performance parameters and work modes of several interaction modalities determine the ranges of these parameters and verify the effectiveness of the collaborative interaction method using multiple natural modalities.
