根據極幾何與視差之適應性匹配增強本體移動估測 = Adaptive Feature Tracking Based on Epipolar Geometry and Disparity for Ego-Motion Estimation
Master's Program, Department of Computer Science and Information Engineering, National University of Kaohsiung

 

  • Title: 根據極幾何與視差之適應性匹配增強本體移動估測 = Adaptive Feature Tracking Based on Epipolar Geometry and Disparity for Ego-Motion Estimation
  • Record type: Bibliographic - language material, printed : monograph
    Parallel title: Adaptive Feature Tracking Based on Epipolar Geometry and Disparity for Ego-Motion Estimation
    Author: 張家泓
    Other corporate author: National University of Kaohsiung (國立高雄大學)
    Place of publication: [Kaohsiung]
    Publisher: the author
    Year of publication: 2014 [ROC 103]
    Physical description: 72 pages : illustrations, tables ; 30 cm
    Subject: 本體移動估測 (ego-motion estimation)
    Subject: Ego-Motion Estimation
    Electronic resource: http://handle.ncl.edu.tw/11296/ndltd/50015032792699057817
    Note: Bibliography: pages 55-58
    Note: Released to the public on December 16, 2014
    Abstract: With the rapid growth of 3D technologies, many data acquisition devices can capture environmental information but lack motion sensing, so the collected data are confined to a single location or cannot be linked with data captured elsewhere. Stereo-vision-based ego-motion estimation lets a moving device determine its own motion and combine information gathered at different times and positions into a more complete result. This research uses the binocular stereo data provided by The KITTI Vision Benchmark Suite. From the rectified images captured by the calibrated left and right cameras, the Speeded Up Robust Features (SURF) method is applied to detect feature points and generate descriptors; Semi-Global Matching (SGM) is then performed to ensure the quality and quantity of stereo matches, and the camera parameters are used to compute the 3D coordinates of the matched feature points. In the feature tracking stage, feature points are first matched temporally; the 3D coordinates, 2D projections, and camera parameters are then combined through the Efficient Perspective-n-Point Camera Pose Estimation (EPnP) algorithm to estimate the camera motion. Finally, a two-stage Sparse Bundle Adjustment (SBA) performs fast global refinement to avoid local optima. Because the accuracy of motion estimation depends directly on the correctness of feature tracking, this research focuses on feature tracking and, based on epipolar geometry and stereo matching disparity, proposes an adaptive method that constrains the angle and length of matches, substantially improving matching accuracy and thereby the ego-motion estimate. Experiments on real-world sequences containing few moving objects show an average translation error below 2 cm (2 percent) and a rotation error within 0.008 degrees per meter of travel.
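    The thesis itself is not reproduced in this record; the following is a minimal illustrative sketch, not the author's implementation, of how the pipeline named in the abstract (SURF detection, SGM stereo matching, back-projection to 3D, temporal matching, EPnP pose estimation) can be assembled with OpenCV. The file names, intrinsics, baseline, and thresholds are placeholder assumptions, and the bundle adjustment and adaptive angle/length constraints described in the thesis are omitted.

    # Sketch of a stereo ego-motion pipeline similar to the one described above.
    # Not the author's code: file names, calibration values, and thresholds are assumptions.
    import numpy as np
    import cv2

    # Rectified stereo pair at time t and the left image at time t+1 (placeholder paths).
    left_t  = cv2.imread("left_000000.png",  cv2.IMREAD_GRAYSCALE)
    right_t = cv2.imread("right_000000.png", cv2.IMREAD_GRAYSCALE)
    left_t1 = cv2.imread("left_000001.png",  cv2.IMREAD_GRAYSCALE)

    # Example KITTI-like calibration: focal length f, principal point (cx, cy), baseline b (m).
    f, cx, cy, b = 718.856, 607.19, 185.22, 0.537
    K = np.array([[f, 0, cx], [0, f, cy], [0, 0, 1]], dtype=np.float64)

    # 1) SURF keypoints and descriptors (requires the opencv-contrib build).
    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
    kp_t,  des_t  = surf.detectAndCompute(left_t,  None)
    kp_t1, des_t1 = surf.detectAndCompute(left_t1, None)

    # 2) Semi-global matching gives a disparity map for the stereo pair at time t.
    sgm = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128, blockSize=9)
    disparity = sgm.compute(left_t, right_t).astype(np.float32) / 16.0

    # 3) Temporal matching t -> t+1, then back-projection of each matched keypoint to 3D:
    #    Z = f*b/d, X = (u - cx)*Z/f, Y = (v - cy)*Z/f.
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    pts3d, pts2d = [], []
    for m in matcher.match(des_t, des_t1):
        u, v = kp_t[m.queryIdx].pt
        d = disparity[int(v), int(u)]
        if d <= 1.0:                      # skip invalid or near-zero disparities
            continue
        Z = f * b / d
        pts3d.append([(u - cx) * Z / f, (v - cy) * Z / f, Z])
        pts2d.append(kp_t1[m.trainIdx].pt)

    # 4) EPnP inside RANSAC estimates the camera motion between t and t+1;
    #    a bundle-adjustment refinement stage would follow in the full pipeline.
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        np.array(pts3d, dtype=np.float32), np.array(pts2d, dtype=np.float32),
        K, None, flags=cv2.SOLVEPNP_EPNP)
    print("rotation (Rodrigues):", rvec.ravel(), "translation:", tvec.ravel())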
Holdings (2 items)

Barcode        Location                           Loan policy    Material  Call number                        Status                Holds
310002493065   Theses & Dissertations Area (2F)   Not for loan   Thesis    TH 008M/0019 464103 1133 2014      Normal use, on shelf  0
310002493073   Theses & Dissertations Area (2F)   Not for loan   Thesis    TH 008M/0019 464103 1133 2014 c.2  Normal use, on shelf  0