Colmap algorithm pipeline:

Colmap installation

Install Colmap in Ubuntu Docker

Data collection

Multi-view 3D reconstruction (MVS) data acquisition


Colmap GUI operation

Sparse reconstruction

Sparse reconstruction uses incremental SfM (Structure from Motion).

For background on SfM, see the GitHub project openMVG/openMVG: "open Multiple View Geometry library. Basis for 3D computer vision and Structure from Motion."

1. Preparation

  1. Create a project directory TestScan
  2. Create an images directory inside it and put the original images there
  3. Run colmap gui and click File - New Project to open the Project window
  4. Under Database, click New and create a TestScan.db file in TestScan; this database stores the image paths, features, matches and other data
  5. Under Images, click Select and choose the directory containing the original images of the scene
  6. Finally, click Save

The initial directory structure is:

.
|-- TestScan.db
`-- images
    |-- 00000000.jpg
    |-- 00000001.jpg
    |-- ...
    `-- 00000048.jpg

2. Feature extraction

This step extracts the local features that are later used to search for corresponding points between views (it can be loosely understood as preparing for global feature matching). Click Processing - Feature Extraction

  • Select Pinhole as the camera model
  • Check Parameters from EXIF: the camera intrinsics are initialized from the EXIF data of the images (images from most cameras carry EXIF metadata; a quick way to check yours is sketched below)
  • Keep the default values for the remaining parameters

Then click Extract to run feature extraction
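
If you are not sure whether your own images carry the focal-length metadata that Parameters from EXIF relies on, it can be checked quickly from Python. The snippet below is only a sketch: it assumes the Pillow package is installed and uses an example image path.

# Sketch: check whether an image carries EXIF focal-length metadata.
# Assumes Pillow (pip install Pillow); the image path is an example.
from PIL import Image, ExifTags

exif = Image.open("images/00000000.jpg").getexif()
exif_ifd = exif.get_ifd(0x8769)  # Exif sub-IFD, where FocalLength is usually stored
for tag_id, value in exif_ifd.items():
    name = ExifTags.TAGS.get(tag_id, tag_id)
    if "Focal" in str(name):
        print(name, value)  # e.g. FocalLength, FocalLengthIn35mmFilm

If nothing is printed, the EXIF focal length is missing and COLMAP falls back to a default focal-length guess.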

3. Feature matching

Click Processing - Feature Matching, keep all parameters at their default values, and then click Run to perform feature matching

At the end of this step, the scene graph and the match matrix are generated automatically (a graph whose nodes are the different views and whose edge weights are the numbers of corresponding features shared between views).

The output of these two steps can be seen in the Log panel on the right
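
What the two steps wrote into the database can also be sanity-checked directly. The snippet below is only a sketch: it uses Python's built-in sqlite3 module together with the table names of the COLMAP database (images, keypoints, two_view_geometries), and the database path is an example.

# Sketch: count what feature extraction and matching stored in the database.
import sqlite3

db = sqlite3.connect("TestScan.db")  # example path
num_images = db.execute("SELECT COUNT(*) FROM images").fetchone()[0]
num_keypoints = db.execute("SELECT SUM(rows) FROM keypoints").fetchone()[0]
num_pairs = db.execute(
    "SELECT COUNT(*) FROM two_view_geometries WHERE rows > 0").fetchone()[0]
print(num_images, "images,", num_keypoints, "keypoints,",
      num_pairs, "verified image pairs")
db.close()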

4. Incremental modeling

Click Reconstruction - Start Reconstruction to run the one-click incremental, iterative reconstruction

The aim of this step is to compute the camera parameters of each view, build the sparse point cloud of the scene, and determine the visibility relations between the views and the point cloud. The result is the sparse point cloud of the scene together with the camera pose of every view

Take the log for registering image #39 (the 49th view to be registered) as an example

  • The new image sees 576 of the 991 existing sparse points
  • First, the pose of the newly registered view is refined (Pose refinement report)
  • Then bundle adjustment (BA) is run: 149 observations are merged into the overall sparse point cloud and 32 observations are filtered out
  • Then retriangulation is performed
  • Finally, iterative global BA refines the poses of the already registered cameras and the coordinates of the sparse 3D points
==============================================================================
Registering image #39 (49)
==============================================================================
=> Image sees 576 / 991 points

Pose refinement report
----------------------
    Residuals : 1132
   Parameters : 8
   Iterations : 7
         Time : 0.0134351 [s]
 Initial cost : 0.535158 [px]
   Final cost : 0.462099 [px]
  Termination : Convergence

=> Continued observations: 540
=> Added observations: 73

Bundle adjustment report
------------------------
    Residuals : 24684
   Parameters : 2030
   Iterations : 21
         Time : 0.501096 [s]
 Initial cost : 0.374389 [px]
   Final cost : 0.367663 [px]
  Termination : Convergence

=> Merged observations: 149
=> Completed observations: 27
=> Filtered observations: 32
=> Changed observations: 0.016853

Bundle adjustment report
------------------------
    Residuals : 24690
   Parameters : 2000
   Iterations : 3
         Time : 0.0764892 [s]
 Initial cost : 0.430376 [px]
   Final cost : 0.427614 [px]
  Termination : Convergence

=> Merged observations: 10
=> Completed observations: 1
=> Filtered observations: 0
=> Changed observations: 0.000891

==============================================================================
Retriangulation
==============================================================================
=> Completed observations: 9
=> Merged observations: 186
=> Retriangulated observations: 0

Depth map estimation and optimization

In COLMAP, matching cost construction, cost aggregation, depth estimation and depth-map optimization are encapsulated together. Solving with the GEM model can be divided into four steps: matching cost construction -> cost aggregation -> depth estimation -> depth map optimization

The principle is omitted here for now; see "Multi-view geometry 3D reconstruction in practice series: COLMAP" in the resources below

1. Image distortion removal

Click Reconstruction - Dense Reconstruction, click Select in the dense reconstruction window to choose the workspace folder, and then click Undistortion to remove the image distortion

⚠️ Note: do not select the project root directory as the workspace, or the COLMAP GUI will crash with an error when copying the images. Undistortion can also only be run once; running it a second time fails as well, because the output path already exists

Lens distortion leads to large errors in the disparity estimated near image edges. Therefore, the images are undistorted before the matching cost is constructed under the joint constraints of photometric consistency and geometric consistency for depth-map estimation

The DTU dataset and the Pinhole model configured earlier already imply that there is no distortion. If you use a self-collected dataset, change the camera model to one with distortion parameters

2. Depth estimation

Click Stereo in the dense reconstruction window to estimate the scene depth. After depth estimation, both photometric and geometric depth maps and normal maps are produced

This step is slow and resource-intensive

Then click the entries marked by the red box (in the original screenshot) to view the depth map and normal map under photometric and geometric consistency

COLMAP uses photometric consistency to jointly estimate the depth and normal of each view, and uses geometric consistency to optimize the depth maps
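
The depth and normal maps are written into the dense workspace under stereo/depth_maps and stereo/normal_maps as .bin files in COLMAP's own format: an ASCII "width&height&channels&" header followed by raw float32 data. The reader below is a sketch adapted from the read_array helper in COLMAP's scripts/python/read_write_dense.py; it assumes numpy and matplotlib are installed, and the file path is an example.

# Sketch: read and display one of COLMAP's .bin depth maps.
import numpy as np
import matplotlib.pyplot as plt

def read_colmap_array(path):
    with open(path, "rb") as f:
        # header is ASCII "width&height&channels&", then raw float32 values
        width, height, channels = np.genfromtxt(
            f, delimiter="&", max_rows=1, usecols=(0, 1, 2), dtype=int)
        f.seek(0)
        num_delim = 0
        while num_delim < 3:          # skip past the three '&' of the header
            if f.read(1) == b"&":
                num_delim += 1
        data = np.fromfile(f, np.float32)
    data = data.reshape((width, height, channels), order="F")
    return np.transpose(data, (1, 0, 2)).squeeze()

depth = read_colmap_array("dense/stereo/depth_maps/00000000.jpg.geometric.bin")
plt.imshow(depth, cmap="viridis")
plt.colorbar(label="depth")
plt.show()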


Dense reconstruction

Click Fusion to perform dense reconstruction based on depth-map fusion

After reconstruction, PLY model files (e.g. fused.ply) are generated in the dense directory

Visualization

Meshlab

Install MeshLab for display

sudo snap install meshlab

[Error: MeshLab could not open the PLY file]

Problem analysis: opening the PLY file in a text editor, the header looked completely garbled

After searching through a lot of material without finding a solution, I finally asked a senior labmate for a Python script to display the PLY file, implemented mainly with the Open3D library 😭. The lesson here: taking a suggestion from your advisor or senior labmates can save a whole afternoon of debugging. Leave a comment if you need the script
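
A minimal version of such a viewer is sketched below, assuming the open3d package is installed (pip install open3d); the file path is an example.

# Sketch: display a PLY point cloud with Open3D.
import open3d as o3d

pcd = o3d.io.read_point_cloud("dense/fused.ply")  # example path
print(pcd)                                        # e.g. "PointCloud with N points."
o3d.visualization.draw_geometries([pcd])          # opens an interactive viewer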

!!! Solution: after a month of trial and error, I finally found a fix

sudo snap install --devmode meshlab

Recent MeshLab packages are distributed via snap, whose confinement model restricts an application's access to directories and files; installing MeshLab with --devmode lifts that restriction

Colmap GUI

Alternatively, use the official COLMAP GUI for visualization: click File - Import model from..., select the generated fused.ply, and the fused point cloud is displayed. However, meshed-poisson.ply cannot be opened this way, so MeshLab is needed again
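
If MeshLab is not at hand, the Open3D route from above also opens the Poisson mesh. Again only a sketch, with an example path:

# Sketch: display the Poisson mesh with Open3D.
import open3d as o3d

mesh = o3d.io.read_triangle_mesh("dense/meshed-poisson.ply")  # example path
mesh.compute_vertex_normals()        # needed for shaded rendering
o3d.visualization.draw_geometries([mesh])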

Intermediate data analysis — matching matrix

Click Extras - Match Matrix to export the match matrix of the current scene

The camera's motion pattern can be read off the match matrix. If the camera samples around the object, the match matrix shows band-like stripes, and the more parallel the stripes, the more strictly the camera motion was controlled
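
For analysis outside the GUI, a similar match matrix can be rebuilt from database.db. The snippet below is only a sketch: it assumes numpy and matplotlib are installed, relies on the pair_id encoding used by the COLMAP database (pair_id = image_id1 * 2147483647 + image_id2), and assumes the image ids are the contiguous 1..N assigned at import time; the database path is an example.

# Sketch: rebuild and plot the match matrix from the COLMAP database.
import sqlite3
import numpy as np
import matplotlib.pyplot as plt

MAX_IMAGE_ID = 2**31 - 1  # constant used in the database's pair_id encoding

db = sqlite3.connect("TestScan.db")  # example path
num_images = db.execute("SELECT COUNT(*) FROM images").fetchone()[0]
matrix = np.zeros((num_images, num_images))

# two_view_geometries stores the geometrically verified matches per image pair
for pair_id, rows in db.execute("SELECT pair_id, rows FROM two_view_geometries"):
    image_id2 = pair_id % MAX_IMAGE_ID
    image_id1 = (pair_id - image_id2) // MAX_IMAGE_ID
    matrix[image_id1 - 1, image_id2 - 1] = matrix[image_id2 - 1, image_id1 - 1] = rows or 0

plt.imshow(matrix, cmap="hot")
plt.title("verified matches per image pair")
plt.colorbar()
plt.show()
db.close()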

Judging from others' comparison figures and conclusions, how carefully the data collection is controlled up front has a clear impact on how well the results can be reproduced


Colmap Command line operation

Prepare a project directory that contains an images subdirectory with the input images

  1. Feature extraction
colmap feature_extractor \
   --database_path ./database.db \
   --image_path ./images

The extracted feature points are saved in database.db.

  2. Feature matching

colmap exhaustive_matcher \
   --database_path ./database.db
  3. Sparse reconstruction
mkdir sparse
colmap mapper \
    --database_path ./database.db \
    --image_path ./images \
    --output_path ./sparse

Output: the sparse folder, with the following directory structure:

└── sparse
    └── 0
        ├── cameras.bin    # camera intrinsics
        ├── images.bin     # camera poses
        ├── points3D.bin   # sparse 3D points
        └── project.ini
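
The three .bin files can be inspected programmatically as well. The snippet below is only a sketch: it assumes the pycolmap package is installed (pip install pycolmap), and attribute names may differ slightly between pycolmap versions.

# Sketch: load the sparse model and print what it contains.
import pycolmap

rec = pycolmap.Reconstruction("./sparse/0")
print(rec.summary())                      # cameras / images / points statistics
print(len(rec.cameras), "cameras,", len(rec.images), "images,",
      len(rec.points3D), "3D points")
for image_id, image in rec.images.items():
    print(image_id, image.name)           # registered views by name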
  4. Image undistortion
mkdir dense
colmap image_undistorter \
    --image_path ./images \
    --input_path ./sparse/0 \
    --output_path ./dense \
    --output_type COLMAP

Output: the dense folder, with the following directory structure:

└── dense
    ├── images
    │   ├── 0.jpg
    │   ├── ...
    │   └── 48.jpg
    ├── run-colmap-geometric.sh
    ├── run-colmap-photometric.sh
    ├── sparse
    │   ├── cameras.bin
    │   ├── images.bin
    │   └── points3D.bin
    └── stereo
        ├── consistency_graphs
        ├── depth_maps
        ├── fusion.cfg
        ├── normal_maps
        └── patch-match.cfg
  5. Dense reconstruction
colmap patch_match_stereo \
    --workspace_path ./dense \
    --workspace_format COLMAP \
    --PatchMatchStereo.geom_consistency true    

Output: the dense/stereo folder, containing the estimated depth map and normal map for each image:

└── dense
    ├── images
    │   ├── 0.jpg
    │   ├── ...
    │   └── 48.jpg
    ├── run-colmap-geometric.sh
    ├── run-colmap-photometric.sh
    ├── sparse
    │   ├── cameras.bin
    │   ├── images.bin
    │   └── points3D.bin
    └── stereo
        ├── consistency_graphs
        ├── depth_maps
        │   ├── 0.jpg.geometric.bin
        │   ├── 0.jpg.photometric.bin
        │   ├── ...
        │   ├── 48.jpg.geometric.bin
        │   └── 48.jpg.photometric.bin
        ├── fusion.cfg
        ├── normal_maps
        │   ├── 0.jpg.geometric.bin
        │   ├── 0.jpg.photometric.bin
        │   ├── ...
        │   ├── 48.jpg.geometric.bin
        │   └── 48.jpg.photometric.bin
        └── patch-match.cfg
  6. Fusion
colmap stereo_fusion \
    --workspace_path ./dense \
    --workspace_format COLMAP \
    --input_type geometric \
    --output_path ./dense/result.ply

Output: result.ply point cloud model file

Resources

  • Multi-view geometry 3D reconstruction in practice series: COLMAP
  • 3D reconstruction: COLMAP installation, usage and parameter description (translated from the official documentation), CSDN blog
  • 3D reconstruction: COLMAP installation and use, HUST, cnblogs (Blog Garden)