LiDAR-Fusion Chapter 5 - Track the Motion with AB3DMOT
This chapter takes the PointPillars detection results on the ZTE dataset and tracks the motion of every object with AB3DMOT.
Prepare Necessary Files
The original AB3DMOT project has some significant issues for our use case. Commits from before the 2022 Spring Festival run as the README describes, but the algorithm was not yet complete, so they cannot generate full results. Later commits complete the algorithm, but they require extra data beyond the LiDAR point clouds, such as the camera angle.
Clone the Repository
To address this, I created a modified version of the project that runs correctly without the extra data. You can clone my repository with:
git clone https://github.com/psc1453/AB3DMOT.git
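If you have not set up the environment for the project yet, the following is a minimal sketch. It assumes the fork keeps the upstream Python dependencies in a requirements.txt file; adjust it if your copy differs:
cd AB3DMOT
pip install -r requirements.txt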
Import the Detection Result
You need to rename all three result files to 0000.txt and arrange them in the structure shown below:
ZTE
└── detection
    ├── PointPainting_Car_test
    │   └── 0000.txt
    ├── PointPainting_Cyclist_test
    │   └── 0000.txt
    └── PointPainting_Pedestrian_test
        └── 0000.txt
Note that you need to name all the files and directories EXACTLY as shown in the example!
Then, put the whole ZTE directory under the data directory of the AB3DMOT project.
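If you prefer to do this from the command line, here is a sketch of the equivalent shell commands. The input file names car.txt, cyclist.txt, and pedestrian.txt are assumptions; substitute the actual names of your three PointPillars result files:
# create the required directory layout
mkdir -p ZTE/detection/PointPainting_Car_test ZTE/detection/PointPainting_Cyclist_test ZTE/detection/PointPainting_Pedestrian_test
# copy each result file to its category directory under the required name
cp car.txt ZTE/detection/PointPainting_Car_test/0000.txt
cp cyclist.txt ZTE/detection/PointPainting_Cyclist_test/0000.txt
cp pedestrian.txt ZTE/detection/PointPainting_Pedestrian_test/0000.txt
# move the whole ZTE directory into the data directory of the AB3DMOT project
mv ZTE AB3DMOT/data/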
Track the Motion
Since the project has already been modified appropriately, you can run main.py directly, and the tracking results will appear in the results directory. The files under data_N contain the index of every object in sample N. The files under trk_withid_N contain the tracking result of every object in sample N. We have only one sample in each category, named 0000.txt, so N=0 here.
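For reference, below is a minimal sketch of running the tracker and locating its output. The exact sub-directory layout under results depends on the configuration inside the fork, so treat the find command as a convenient way to locate the generated files rather than a fixed path:
cd AB3DMOT
python3 main.py
# locate the generated tracking files under results (data_0 and trk_withid_0)
find results -name 0000.txt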
Now you have finished all the steps.