Structured Light Vision Measurement Technology for the Involute Tooth Profile of Spur Gears
Abstract
Structured light vision is based on physical phenomena in optics and electronics: it acquires and processes images of the intersection between the projected structured light and the surface of the measured object in order to reconstruct the object's three-dimensional geometry, and it is used mainly for three-dimensional measurement of complex or free-form surfaces. Applying structured light vision to part dimension measurement in the machinery industry offers the advantages of non-contact operation, high speed, and a high degree of automation, so its applications in this field are attracting increasing attention.
    Gears are key transmission components in the machinery industry, and spur gears are especially widely used. As the quality requirements placed on gears rise, gear measurement faces increasingly strict demands. Advances in measurement accuracy in turn drive improvements in manufacturing technology, so research on more advanced gear measurement techniques plays an important role in raising the overall level of gear manufacturing.
    When a gear has a large tooth profile error, the working profile is no longer a correct involute; the motion of the meshing point then no longer satisfies the fundamental law of gearing, the instantaneous transmission ratio varies, and the smoothness of the gear drive suffers. Measuring tooth profile error is therefore particularly important among the gear parameters to be measured. This thesis builds a structured light vision measurement system whose main components are a CCD camera, an optical lens, a semiconductor laser, and a computer and, exploiting the geometric characteristics of spur gears, proposes a method for measuring the involute tooth profile error of gears based on structured light vision.
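For reference (an editorial addition, not part of the original abstract), the nominal tooth profile against which measured points are compared is the involute of the base circle; in Cartesian parametric form, with base circle radius r_b and roll angle θ,

```latex
x(\theta) = r_b\,(\cos\theta + \theta\sin\theta), \qquad
y(\theta) = r_b\,(\sin\theta - \theta\cos\theta), \qquad
r_b = \tfrac{1}{2}\, m\, z \cos\alpha ,
```

where m is the module, z the number of teeth, and α the pressure angle. The base circle radius recovered from the measured points can thus be checked against this nominal value.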
    First, the structured light vision model is analyzed. It consists of two parts. For the imaging model of the CCD camera, a nonlinear coordinate transformation is established under the pinhole imaging assumption, and the model parameters are determined with the classical two-step calibration method. For the parametric model of the structured light plane, building on an analysis of conventional calibration methods, an improved calibration method based on global parameters is proposed; it optimizes the parameter variables of the whole system and effectively improves the calibration accuracy of the structured light vision system. Calibration experiments show that the best results are obtained with six images of a coplanar target and a light stripe width of about five pixels.
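As an illustration only (not code from the thesis, and with hypothetical calibration values), the reconstruction step performed by such a calibrated system amounts to intersecting the viewing ray of a stripe pixel with the calibrated light plane:

```python
import numpy as np

def reconstruct_point(u, v, K, plane):
    """Intersect the camera ray through pixel (u, v) with the structured light plane.

    K     : 3x3 pinhole intrinsic matrix (lens distortion assumed already corrected)
    plane : (a, b, c, d) such that a*X + b*Y + c*Z + d = 0 in camera coordinates
    Returns the 3-D point (X, Y, Z) in camera coordinates.
    """
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])   # viewing-ray direction
    a, b, c, d = plane
    denom = a * ray[0] + b * ray[1] + c * ray[2]
    if abs(denom) < 1e-12:
        raise ValueError("viewing ray is parallel to the light plane")
    s = -d / denom                                   # depth scale along the ray
    return s * ray

# Example with made-up calibration values (for illustration only)
K = np.array([[2400.0,    0.0, 640.0],
              [   0.0, 2400.0, 512.0],
              [   0.0,    0.0,   1.0]])
plane = (0.02, -0.95, 0.31, -180.0)
print(reconstruct_point(700.5, 520.25, K, plane))
```

Camera calibration determines K (and the distortion terms), while light plane calibration determines the plane coefficients (a, b, c, d).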
    Second, a measurement model of the involute tooth profile is established from the geometric characteristics of the measured spur gear. Because the structured light plane is generally not perpendicular to the gear's rotation axis during measurement, a pseudo light plane defined by the direction of the rotation axis is introduced, and the target points reconstructed by the structured light vision system are projected onto it, so that an involute is formed in the cross-section perpendicular to the rotation axis. To simplify the computation of the profile error, a local coordinate system is set up on the pseudo light plane, the three-dimensional coordinates of the measured points are converted to two-dimensional coordinates in that plane, and in this two-dimensional Cartesian system the base circle radius and the involute profile error are computed by the coordinate method.
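A minimal sketch, under assumed conventions and with hypothetical variable names (not the thesis implementation), of the two geometric steps described above: projecting the reconstructed points onto a plane whose normal is the gear axis direction, and expressing them in a local two-dimensional frame on that plane:

```python
import numpy as np

def project_to_plane_2d(points, origin, axis):
    """Project 3-D points onto the plane through `origin` whose normal is `axis`
    (the gear rotation axis direction) and return their local 2-D coordinates."""
    n = axis / np.linalg.norm(axis)
    rel = points - origin
    proj = rel - np.outer(rel @ n, n)        # orthogonal projection onto the plane
    # Build an orthonormal in-plane basis (e1, e2)
    helper = np.array([1.0, 0.0, 0.0])
    if abs(n @ helper) > 0.9:                # avoid a near-parallel helper vector
        helper = np.array([0.0, 1.0, 0.0])
    e1 = np.cross(n, helper)
    e1 /= np.linalg.norm(e1)
    e2 = np.cross(n, e1)
    return np.column_stack((proj @ e1, proj @ e2))

# Illustrative reconstructed points and an assumed axis direction
pts = np.array([[10.2, 3.1, 250.0], [10.8, 2.9, 251.2], [11.5, 2.6, 252.5]])
uv = project_to_plane_2d(pts,
                         origin=np.array([0.0, 0.0, 250.0]),
                         axis=np.array([0.05, 0.02, 1.0]))
print(uv)
```

It is in such a planar 2-D frame that the abstract's coordinate method computes the base circle radius and the profile error.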
    To improve the measurement resolution of the structured light vision system, the center of the light stripe must be located in the image with sub-pixel accuracy. Several sub-pixel stripe center detection methods are analyzed, and the choice of optimal parameter values for the classical Steger detector is discussed. To address the shortcomings of conventional evaluation methods, an accuracy evaluation method based on gauge block dimensions is proposed; it takes the actual dimensional measurement error as the evaluation criterion and better matches practical measurement requirements.
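For orientation only, a compact sketch of the Hessian-based sub-pixel line extraction that Steger's detector performs (Gaussian derivatives, principal curvature direction, second-order sub-pixel offset); the σ and threshold values below are illustrative, not the optimal settings investigated in the thesis:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def steger_centers(img, sigma=2.0, ridge_thresh=1.0):
    """Sub-pixel centers (x, y) of a bright light stripe in a grayscale image."""
    img = img.astype(float)
    # Gaussian partial derivatives (axis 0 = y / rows, axis 1 = x / columns)
    gy  = gaussian_filter(img, sigma, order=(1, 0))
    gx  = gaussian_filter(img, sigma, order=(0, 1))
    gyy = gaussian_filter(img, sigma, order=(2, 0))
    gxx = gaussian_filter(img, sigma, order=(0, 2))
    gxy = gaussian_filter(img, sigma, order=(1, 1))

    centers = []
    rows, cols = img.shape
    for y in range(1, rows - 1):
        for x in range(1, cols - 1):
            H = np.array([[gxx[y, x], gxy[y, x]],
                          [gxy[y, x], gyy[y, x]]])
            vals, vecs = np.linalg.eigh(H)
            k = int(np.argmax(np.abs(vals)))
            if vals[k] > -ridge_thresh:          # bright ridge: curvature across it is strongly negative
                continue
            nx, ny = vecs[:, k]                  # unit direction normal to the stripe
            denom = gxx[y, x]*nx*nx + 2.0*gxy[y, x]*nx*ny + gyy[y, x]*ny*ny
            if denom == 0.0:
                continue
            t = -(gx[y, x]*nx + gy[y, x]*ny) / denom
            if abs(t*nx) <= 0.5 and abs(t*ny) <= 0.5:   # maximum falls inside this pixel
                centers.append((x + t*nx, y + t*ny))
    return np.array(centers)
```

The extracted sub-pixel stripe centers are the image inputs to the triangulation step sketched earlier.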
    Finally, measurement experiments verify the accuracy of the proposed structured light vision method for measuring the involute tooth profile error of spur gears. The experiments consist of two parts: measurement of the base circle error and measurement of the involute profile error. The results show that the installation position of the measurement platform, the number of gear teeth, and the angle between the structured light plane and the gear cross-section have no significant effect on measurement accuracy, while a larger gear module yields higher measurement accuracy.
    This work is of practical significance for advancing the engineering application of structured light vision measurement technology.
