Shichen Peng

Published on 2023-07-13

LiDAR-Fusion Chapter 2 - Generate Painted Dataset

In the original KITTI dataset, the 3D point clouds and the images are stored separately. In our project, we need to use them together to detect objects. The PointPainting algorithm is designed for exactly this workflow: it uses a DeepLabV3Plus network to segment the desired objects in the images and PointPillars to detect objects in the point cloud. To that end, we first need to generate a fused ("painted") version of the dataset for the later chapters. Before starting, activate the Python environment you set up in the last chapter.
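The core idea of painting can be summarized in a few lines: every LiDAR point is projected into the image plane, and the per-class segmentation scores of the pixel it lands on are appended to the point's features. Below is a minimal sketch of this decoration step; the function and variable names are illustrative, not the repository's actual API, and the combined LiDAR-to-image projection matrix is assumed to be precomputed from the KITTI calibration files:

import numpy as np

def paint_points(points, seg_scores, lidar_to_image):
    """Append per-class segmentation scores to each LiDAR point.

    points:         (N, 4) array of x, y, z, reflectance
    seg_scores:     (H, W, C) per-pixel class scores from the segmentation network
    lidar_to_image: (3, 4) combined projection matrix from the KITTI calibration
    """
    # Project the 3D points into homogeneous image coordinates
    xyz1 = np.hstack([points[:, :3], np.ones((points.shape[0], 1))])
    uvw = xyz1 @ lidar_to_image.T
    u = (uvw[:, 0] / uvw[:, 2]).astype(int)
    v = (uvw[:, 1] / uvw[:, 2]).astype(int)

    # Keep only points in front of the camera that fall inside the image
    h, w, _ = seg_scores.shape
    valid = (uvw[:, 2] > 0) & (u >= 0) & (u < w) & (v >= 0) & (v < h)

    # Concatenate the class scores of the corresponding pixel to each point
    return np.hstack([points[valid], seg_scores[v[valid], u[valid]]])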

Prepare Project Files

Clone PointPainting Repository

This repository is a fork that contains some modifications for easier usage.

git clone git@github.com:psc1453/PointPainting.git

Download Pre-Trained DeepLabV3Plus Model

To generate the painted dataset, we need to segment the target objects from the images and paint them onto the LiDAR point cloud. A DeepLabV3Plus model will be used for this task. Since image segmentation is a mature task in computer vision, we do not need to train this model ourselves; a pre-trained model is sufficient.

To download the pre-trained model, go to the painting directory under the project root, and run the command below to automatically download the model file:

bash get_deeplabv3plus_model.sh

It will download the model file and save it in painting/mmseg/checkpoints/.

You may encounter some issues in this step due to network restrictions in some countries or regions. If the script cannot download the file, you can fetch it manually from BaiduNetdisk: https://pan.baidu.com/s/18qTpEAYezvSodv7zZX1ghg?pwd=k56x

Once downloaded, put it into painting/mmseg/checkpoints/. DO NOT MODIFY THE FILE NAME!!!
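Before moving on, you can sanity-check that the checkpoint landed in the right place. A minimal check, assuming the file uses the usual .pth extension (run it from the project root):

from pathlib import Path

ckpt_dir = Path("painting/mmseg/checkpoints")
checkpoints = sorted(ckpt_dir.glob("*.pth")) if ckpt_dir.is_dir() else []
if checkpoints:
    for ckpt in checkpoints:
        # Print each checkpoint with its size as a rough integrity hint
        print(f"{ckpt.name}: {ckpt.stat().st_size / 1e6:.1f} MB")
else:
    print(f"No checkpoint found in {ckpt_dir}")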

Download KITTI Dataset

Get KITTI Dataset

KITTI 3D Object Detection Evaluation 2017 is a dataset consisting of 7,481 training images and 7,518 test images, together with the corresponding point clouds, comprising a total of 80,256 labeled objects collected by car-mounted sensors for autonomous-driving research. The download links are listed below.

For Single View

  1. Download left color images of object data set (12 GB)

  2. Download the 3 temporally preceding frames (left color) (36 GB) (Not compulsory)

  3. Download Velodyne point clouds, if you want to use laser information (29 GB)

  4. Download camera calibration matrices of object data set (16 MB)

  5. Download training labels of object data set (5 MB)

  6. Download object development kit (1 MB) (including 3D object detection and bird’s eye view evaluation code)

For Stereo View (If You Want to Do Further Research)

Although these files are mainly for further research, this project relies on the right color images, so please download the first item anyway.

  1. Download right color images of object data set (12 GB)

  2. Download the 3 temporally preceding frames (right color) (36 GB) (Not compulsory)

Supplemental Files

Here I provide some supplemental files for testing OpenPCDet; you can find them in my GitHub repository. Follow the guide there to use them.

Full Directory Structure

After downloading and unzipping, you need to re-arrange the directory structure like this:

KITTI
├── ImageSets
│   ├── test.txt
│   ├── train.txt
│   ├── trainval.txt
│   └── val.txt
├── testing
│   ├── calib
│   ├── image_2
│   └── velodyne
└── training
    ├── calib
    ├── image_2
    ├── label_2
    ├── planes
    └── velodyne
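If you want to double-check the layout programmatically before going further, a small script such as the one below can flag anything missing. The paths are taken from the tree above; the dataset root ./KITTI is an assumption, so adjust it to wherever you unzipped the files:

from pathlib import Path

KITTI_ROOT = Path("KITTI")  # adjust to your actual dataset location

expected = [
    "ImageSets/test.txt", "ImageSets/train.txt",
    "ImageSets/trainval.txt", "ImageSets/val.txt",
    "testing/calib", "testing/image_2", "testing/velodyne",
    "training/calib", "training/image_2", "training/label_2",
    "training/planes", "training/velodyne",
]

missing = [p for p in expected if not (KITTI_ROOT / p).exists()]
print("Structure OK" if not missing else f"Missing: {missing}")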

Copy Training Dataset to Project Directory

In this chapter, we only need the training set. Copy the whole training directory you downloaded into the PointPainting project's detector/data/kitti/. You can use the terminal or a GUI, whichever you prefer.
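If you would rather script the copy than do it by hand, something like the following works. Both paths are assumptions based on the layout above; adjust them to where your dataset and the cloned repository actually live:

import shutil
from pathlib import Path

src = Path("KITTI/training")                   # where you unzipped the dataset
dst = Path("PointPainting/detector/data/kitti/training")

# Create the parent directories and copy the whole training set
dst.parent.mkdir(parents=True, exist_ok=True)
shutil.copytree(src, dst, dirs_exist_ok=True)  # requires Python >= 3.8
print("Copied", src, "->", dst)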

Generate Painted KITTI Dataset

In this step, we will paint the image segmentation results onto the LiDAR data to generate the fused dataset for training the PointPainting model.

Configure Painting Settings

You can first modify the configuration in painting/painting.py according to your needs:

TRAINING_PATH = "../detector/data/kitti/training/"  # The location of the training set
TWO_CAMERAS = True  # Whether you are using stereo cameras (set to False for the ZTE dataset)
SEG_NET_OPTIONS = ["deeplabv3", "deeplabv3plus", "hma"]  # Supported models for image segmentation
SEG_NET = 1  # Select model
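Note that SEG_NET is an index into SEG_NET_OPTIONS, so SEG_NET = 1 selects SEG_NET_OPTIONS[1], i.e. deeplabv3plus, which matches the pre-trained model downloaded earlier. TWO_CAMERAS = True assumes you have downloaded the right color images as well.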

Install Required Packages

The script painting.py relies on two additional packages that we have not installed so far. Install them by:

pip install mmcv-full terminaltables

The latest version of the mmcv package is 2.x. However, because of its massive API changes compared with the 1.x series that this project uses, we will not install it. I will release updates once I have migrated the project to the new API. Note that mmcv-full refers to the 1.x series while mmcv refers to 2.x, which is a little confusing.
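After installation, you can quickly confirm that a 1.x build was picked up. A minimal check (note that mmcv-full installs under the module name mmcv):

import mmcv

# painting.py targets the 1.x API; the 2.x series is published under the name "mmcv"
assert mmcv.__version__.startswith("1."), f"Unexpected mmcv version: {mmcv.__version__}"
print("mmcv", mmcv.__version__, "OK")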

Generate Dataset

Then, run the script from the painting directory:

python painting.py

It will generate the painted KITTI dataset in detector/data/kitti/training/painted_lidar/.
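To spot-check the result, you can load one of the generated files. The snippet below assumes the painted points are stored as NumPy .npy arrays with the original x, y, z, reflectance columns followed by the appended class scores; this layout is an assumption, so check the repository if your files differ:

import numpy as np
from pathlib import Path

painted_dir = Path("detector/data/kitti/training/painted_lidar")
sample = next(painted_dir.glob("*.npy"))  # inspect any generated file

points = np.load(sample)
# Under the assumed layout, the shape should be (N, 4 + num_classes)
print(sample.name, points.shape)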
