外文翻译-一个有关移动机器人定位的视觉传感器模型.doc (Foreign-literature translation: A Visual Sensor Model for Mobile Robot Localisation)

Resource ID: 22683 · Size: 849.50 KB · Pages: 28 · Format: DOC
Graduation Project (Thesis) — Foreign Literature Translation
Department:    Major:    Student name:    Class:    Student ID:
Source: Machine Vision and Applications
Supervisor's comments:    Supervisor's signature:    Date:

A Visual-Sensor Model for Mobile Robot Localisation
Matthias Fichtner, Axel Großmann
Artificial Intelligence Institute, Department of Computer Science, Technische Universität Dresden
Technical Report WV-03-03/CL-2003-02

Abstract

We present a probabilistic sensor model for camera-pose estimation in hallways and cluttered office environments. The model is based on the comparison of features obtained from a given 3D geometrical model of the environment with features present in the camera image. The techniques involved are simpler than state-of-the-art photogrammetric approaches. This allows the model to be used in probabilistic robot localisation methods. Moreover, it is very well suited for sensor fusion. The sensor model has been used with Monte Carlo localisation to track the position of a mobile robot in a hallway navigation task. Empirical results are presented for this application.

1 Introduction

The problem of accurate localisation is fundamental to mobile robotics. To solve complex tasks successfully, an autonomous mobile robot has to estimate its current pose correctly and reliably. The choice of the localisation method generally depends on the kind and number of sensors, the prior knowledge about the operating environment, and the computing resources available. Recently, vision-based navigation techniques have become increasingly popular [3]. Among the techniques for indoor robots, we can distinguish methods that were developed in the field of photogrammetry and computer vision, and methods that have their origin in AI robotics.

An important technical contribution to the development of vision-based navigation techniques was the work by [10] on the recognition of 3D objects from unknown viewpoints in single images using scale-invariant features. Later, this technique was extended to global localisation and simultaneous map building [11].

The FINALE system [8] performed position tracking by using a geometrical model of the environment and a statistical model of uncertainty in the robot's pose given the commanded motion. The robot's position is represented by a Gaussian distribution and updated by Kalman filtering. The search for corresponding features in camera image and world model is optimised by projecting the pose uncertainty into the camera image.

Monte Carlo localisation (MCL) based on the condensation algorithm has been applied successfully to tour-guide robots [1]. This vision-based Bayesian filtering technique uses a sampling-based density representation. In contrast to FINALE, it can represent multi-modal probability distributions. Given a visual map of the ceiling, it localises the robot globally using a scalar brightness measure. [4] presented a vision-based MCL approach that combines visual distance features and visual landmarks in a RoboCup application. As their approach depends on artificial landmarks, it is not applicable in office environments.

The aim of our work is to develop a probabilistic sensor model for camera-pose estimation. Given a 3D geometrical map of the environment, we want to find an approximate measure of the probability that the current camera image has been obtained at a certain place in the robot's operating environment. We use this sensor model with MCL to track the position of a mobile robot navigating in a hallway. Possibly, it can also be used for localisation in cluttered office environments and for shape-based object detection.

On the one hand, we combine photogrammetric techniques for map-based feature projection with the flexibility and robustness of MCL, such as the capability to deal with localisation ambiguities. On the other hand, the feature matching operation should be sufficiently fast to allow sensor fusion. In addition to the visual input, we want to use the distance readings obtained from sonars and laser to improve localisation accuracy.

The paper is organised as follows. In Section 2, we discuss previous work. In Section 3, we describe the components of the visual sensor model. In Section 4, we present experimental results for position tracking using MCL. We conclude in Section 5.

2 Related Work

In classical approaches to model-based pose determination, we can distinguish two interrelated problems. The correspondence problem is concerned with finding pairs of corresponding model and image features. Before this mapping takes place, the model features are generated from the world model using a given camera pose. Features are said to match if they are located close to each other. The pose problem, in contrast, consists of finding the 3D camera coordinates with respect to the origin of the world model, given the pairs of corresponding features [2]. Apparently, the one problem requires the other to be solved before …
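The MCL scheme used in the paper follows the standard predict-weight-resample cycle of a particle filter. The sketch below illustrates that cycle only; `motion_model` and `sensor_likelihood` are hypothetical placeholder callables, not the paper's actual odometry or visual-sensor models.

```python
import random

def mcl_step(particles, control, measurement, motion_model, sensor_likelihood):
    """One Monte Carlo localisation update: predict, weight, resample.

    particles         -- list of pose hypotheses, e.g. (x, y, theta) tuples
    motion_model      -- callable(pose, control) -> new pose (hypothetical
                         stand-in for the robot's odometry model)
    sensor_likelihood -- callable(measurement, pose) -> p(z | pose)
                         (hypothetical stand-in for the visual sensor model)
    """
    # 1. Prediction: propagate every particle through the motion model.
    predicted = [motion_model(p, control) for p in particles]

    # 2. Correction: weight each particle by the sensor likelihood.
    weights = [sensor_likelihood(measurement, p) for p in predicted]
    if sum(weights) == 0.0:
        # Degenerate case (all weights zero): fall back to uniform weights.
        weights = [1.0] * len(predicted)

    # 3. Resampling: draw a new particle set proportionally to the weights
    #    (random.choices normalises the weights internally).
    return random.choices(predicted, weights=weights, k=len(particles))
```

Because the posterior is carried by the particle set itself, the representation stays multi-modal for free, which is the advantage over FINALE's single Gaussian noted above.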

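FINALE's Gaussian position tracking can be illustrated with a one-dimensional Kalman filter cycle. This is a generic textbook sketch under simplified scalar dynamics, not FINALE's actual update equations.

```python
def kalman_predict(mean, var, motion, motion_var):
    """Propagate a Gaussian belief N(mean, var) through a commanded motion;
    the motion noise motion_var inflates the uncertainty."""
    return mean + motion, var + motion_var

def kalman_update(mean, var, z, meas_var):
    """Fuse the belief with a measurement z of variance meas_var."""
    k = var / (var + meas_var)          # Kalman gain
    new_mean = mean + k * (z - mean)    # shift belief toward the measurement
    new_var = (1.0 - k) * var           # uncertainty always shrinks
    return new_mean, new_var
```

The uni-modal Gaussian makes each update cheap, but a single mean cannot represent the localisation ambiguities that MCL handles naturally.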

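The correspondence criterion in Section 2 — model and image features match if they lie close to each other — can be sketched as thresholded nearest-neighbour matching. Representing features as bare (x, y) image coordinates is an illustrative assumption; the paper's feature representation may differ.

```python
import math

def match_features(model_features, image_features, max_dist):
    """Greedily pair each projected model feature with the closest unmatched
    image feature, accepting a pair only when the two points lie within
    max_dist of each other. Unpaired features are simply left out."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    unmatched = list(image_features)
    pairs = []
    for mf in model_features:
        if not unmatched:
            break
        nearest = min(unmatched, key=lambda f: dist(mf, f))
        if dist(mf, nearest) <= max_dist:
            pairs.append((mf, nearest))
            unmatched.remove(nearest)
    return pairs
```

Solving the pose problem would then take these pairs as input — which is exactly the mutual dependency between the two problems that the excerpt breaks off describing.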