採用MFI改良本體移動估測以提升三維重建準確度之方法 = Using MFI to Improve Ego-motion Estimation towards more accurate 3D reconstruction
國立高雄大學資訊工程學系碩士班 (Master's Program, Department of Computer Science and Information Engineering, National University of Kaohsiung)

 

  • 採用MFI改良本體移動估測以提升三維重建準確度之方法 = Using MFI to Improve Ego-motion Estimation towards more accurate 3D reconstruction
  • Record type: Bibliographic - language material, printed : monograph
    Parallel title: Using MFI to Improve Ego-motion Estimation towards more accurate 3D reconstruction
    Author: 陳宗毅
    Other corporate author: National University of Kaohsiung
    Place of publication: [Kaohsiung City]
    Publisher: the author;
    Year of publication: 2015 (ROC 104)
    Physical description: 95 p. : ill., tables ; 30 cm
    Subject: 三維重建 (3D reconstruction)
    Subject: 3-D Reconstruction
    Electronic resource: http://handle.ncl.edu.tw/11296/ndltd/79971014201515114756
    Note: To be made publicly available on March 31, 2016
    Note: Bibliography: p. 77-81
    Abstract: 3D reconstruction techniques and systems have become increasingly common with the advancement of the relevant hardware. Among the various measurement devices, the LiDAR (Light Detection and Ranging) scanner is one of the most versatile and effective tools for acquiring 3D data on a large scale: it captures wide-area 3D measurements quickly and precisely, but on its own it can acquire neither color information nor its own motion. To make large-scale LiDAR data usable, we propose an integrated 3D reconstruction system consisting of a LiDAR device and a binocular stereo vision camera pair. The integrated system registers the camera images with the LiDAR range data to obtain color-mapped 3D point clouds, and the cameras also provide ego-motion estimation for the moving platform, so that data acquired at different times and positions can be merged in a complementary manner towards more accurate and detailed 3D models.

    In this work we implement the system with two video cameras and a LiDAR, all mounted on an electric vehicle. The two cameras are calibrated, with focal length, exposure, and the other internal parameters held constant. The acquired left and right images are rectified, and SURF (Speeded-Up Robust Features) is used for feature detection. Spatial feature matching is performed with semi-global matching (SGM) to ensure both the quantity and the quality of the matched feature points, and the camera parameters are then combined with the matches to compute the features' 3D coordinates. In the feature-tracking stage, temporal feature matching is performed across the image sequence, and the rotation and translation at each time step are computed from the tracked features. To reduce the path drift caused by erroneous matches during tracking, we adopt the Multi-frame Feature Integration (MFI) method proposed by H. Badino: the originally matched feature points are compared against the integrated feature points and corrected, so that mismatches do not affect the subsequent estimates. The vehicle's path is finally obtained by accumulating the frame-to-frame transformations.

    To obtain a texture-mapped 3D model of the environment, the rigid transformation between the LiDAR and the cameras must also be determined. The external parameters of this rigid transformation are first solved in a linear manner, and the Levenberg-Marquardt algorithm is then used to optimize them; the LiDAR points are projected onto the camera images to verify the accuracy of the resulting external parameters. Finally, the estimated path is used to align the point clouds, and the camera images are applied as textures, yielding a large-scale, texture-mapped 3D model.
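The visual-odometry pipeline outlined above (SURF detection, SGM-based spatial matching, triangulation from the stereo geometry, temporal matching, and frame-to-frame motion estimation) can be sketched in a few lines with OpenCV. The sketch below is not the thesis implementation: the intrinsics and baseline are hypothetical placeholders, and ORB is substituted for SURF, which requires the opencv-contrib build (cv2.xfeatures2d.SURF_create).

# Minimal sketch of the stereo ego-motion pipeline described in the abstract.
# Assumes rectified grayscale stereo pairs; calibration values are placeholders.
import numpy as np
import cv2

fx, fy, cx, cy = 700.0, 700.0, 320.0, 240.0        # hypothetical intrinsics
baseline = 0.12                                     # hypothetical stereo baseline (m)
K = np.array([[fx, 0.0, cx], [0.0, fy, cy], [0.0, 0.0, 1.0]])

detector = cv2.ORB_create(nfeatures=2000)           # stand-in for SURF
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
sgbm = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128, blockSize=5)

def stereo_features(left, right):
    """Detect features in the left image and lift them to 3D via SGM disparity."""
    keypoints, descriptors = detector.detectAndCompute(left, None)
    disparity = sgbm.compute(left, right).astype(np.float32) / 16.0
    points3d, kept_desc, kept_kps = [], [], []
    for kp, desc in zip(keypoints, descriptors):
        u, v = int(round(kp.pt[0])), int(round(kp.pt[1]))
        if not (0 <= v < disparity.shape[0] and 0 <= u < disparity.shape[1]):
            continue
        d = disparity[v, u]
        if d <= 0.0:
            continue                                # no reliable spatial match
        z = fx * baseline / d                       # depth from disparity
        points3d.append([(u - cx) * z / fx, (v - cy) * z / fy, z])
        kept_desc.append(desc)
        kept_kps.append(kp)
    return np.float32(points3d), np.uint8(kept_desc), kept_kps

def frame_to_frame_motion(pts3d_prev, desc_prev, kps_curr, desc_curr):
    """Temporal matching followed by PnP yields the rotation R and translation t."""
    matches = matcher.match(desc_prev, desc_curr)
    obj = np.float32([pts3d_prev[m.queryIdx] for m in matches])
    img = np.float32([kps_curr[m.trainIdx].pt for m in matches])
    _, rvec, tvec, _ = cv2.solvePnPRansac(obj, img, K, None)
    R, _ = cv2.Rodrigues(rvec)
    return R, tvec                                  # accumulate these to recover the path

The refinement of the LiDAR-to-camera external parameters mentioned in the abstract (a linear initial estimate followed by Levenberg-Marquardt optimization) could, under the same caveats, be expressed as a non-linear least-squares problem over the reprojection error. The correspondences lidar_pts (Nx3) and image_pts (Nx2) below are hypothetical, and this is only one possible formulation, not the author's code.

# Minimal sketch of Levenberg-Marquardt refinement of LiDAR-camera extrinsics.
import numpy as np
import cv2
from scipy.optimize import least_squares

def reprojection_residuals(params, lidar_pts, image_pts, K):
    """params = [rx, ry, rz, tx, ty, tz] (Rodrigues rotation + translation)."""
    rvec, tvec = params[:3], params[3:]
    projected, _ = cv2.projectPoints(lidar_pts, rvec, tvec, K, None)
    return (projected.reshape(-1, 2) - image_pts).ravel()

def refine_extrinsics(rvec0, tvec0, lidar_pts, image_pts, K):
    """Refine a linear initial estimate of the external parameters (method='lm')."""
    x0 = np.hstack([np.ravel(rvec0), np.ravel(tvec0)])
    result = least_squares(reprojection_residuals, x0, method="lm",
                           args=(lidar_pts, image_pts, K))
    return result.x[:3], result.x[3:]               # refined rvec and tvec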
Holdings
 
310002592957   Theses & Dissertations Area (2F)   Not for loan   Thesis   TH 008M/0019 464103 7530 2016         General use (Normal)   On shelf   0
310002592965   Theses & Dissertations Area (2F)   Not for loan   Thesis   TH 008M/0019 464103 7530 2016   c.2   General use (Normal)   On shelf   0