lidar_dynamic_objects_detection

Category: Radar systems
Development tool: Python
File size: 4128KB
Downloads: 1
Upload date: 2020-12-19 11:31:41
Uploader: sh-1993
Description: LiDAR dynamic object detection (lidar_dynamic_objects_detection)

File list:
.vscode (0, 2020-12-19)
.vscode\settings.json (45, 2020-12-19)
LICENSE.md (1054, 2020-12-19)
detection_3d (0, 2020-12-19)
detection_3d\__init__.py (0, 2020-12-19)
detection_3d\create_dataset_lists.py (2856, 2020-12-19)
detection_3d\data_preprocessing (0, 2020-12-19)
detection_3d\data_preprocessing\pandaset_tools (0, 2020-12-19)
detection_3d\data_preprocessing\pandaset_tools\helpers.py (4748, 2020-12-19)
detection_3d\data_preprocessing\pandaset_tools\preprocess_data.py (5156, 2020-12-19)
detection_3d\data_preprocessing\pandaset_tools\transform.py (2846, 2020-12-19)
detection_3d\data_preprocessing\pandaset_tools\visualize_data.py (5356, 2020-12-19)
detection_3d\detection_dataset.py (6010, 2020-12-19)
detection_3d\losses.py (3129, 2020-12-19)
detection_3d\metrics.py (2496, 2020-12-19)
detection_3d\model.py (11559, 2020-12-19)
detection_3d\parameters.py (4040, 2020-12-19)
detection_3d\tools (0, 2020-12-19)
detection_3d\tools\augmentation_tools.py (2125, 2020-12-19)
detection_3d\tools\detection_helpers.py (7991, 2020-12-19)
detection_3d\tools\file_io.py (4657, 2020-12-19)
detection_3d\tools\statics.py (1180, 2020-12-19)
detection_3d\tools\summary_helpers.py (5021, 2020-12-19)
detection_3d\tools\training_helpers.py (3155, 2020-12-19)
detection_3d\tools\visualization_tools.py (8972, 2020-12-19)
detection_3d\train.py (6149, 2020-12-19)
detection_3d\validation_inferece.py (4904, 2020-12-19)
pictures (0, 2020-12-19)
pictures\box_parametrization.png (488416, 2020-12-19)
pictures\result.png (1513343, 2020-12-19)
pictures\topview.png (2213933, 2020-12-19)
setup.py (689, 2020-12-19)

# Dynamic objects detection in LiDAR [![MIT License](https://img.shields.io/badge/License-MIT-green.svg)](https://github.com/Dtananaev/lidar_dynamic_objects_detection/blob/master/LICENSE.md)

## The result of the network (click on the image below)

[![result](https://github.com/Dtananaev/lidar_dynamic_objects_detection/blob/master/pictures/result.png)](https://youtu.be/f_HZg9Cq-h4)

The network weights can be downloaded here: [weights](https://drive.google.com/file/d/1m8N5m2WXATgFNw88BRqEbUieiyV7p3S0/view?usp=sharing).

## Installation

For Ubuntu 18.04, install the necessary dependencies:
```
sudo apt update
sudo apt install python3-dev python3-pip python3-venv
```
Create a virtual environment and activate it:
```
python3 -m venv --system-site-packages ./venv
source ./venv/bin/activate
```
Upgrade pip tools:
```
pip install --upgrade pip
```
Install TensorFlow 2.0 (for more details, see the TensorFlow install tutorial: [tensorflow](https://www.tensorflow.org/install/pip)):
```
pip install --upgrade tensorflow-gpu
```
Clone this repository and then install it:
```
cd lidar_dynamic_objects_detection
pip install -r requirements.txt
pip install -e .
```
This should install all the necessary packages into your environment.

## The method

The lidar point cloud is represented as a top-view image where each pixel corresponds to a 12.5 x 12.5 cm grid cell. For each grid cell we project a random point from the cell and take its height and intensity.
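This top-view rasterization can be sketched roughly as follows. This is an illustrative NumPy sketch, not the repository's actual preprocessing code; the image size, channel layout, and the choice of which point "wins" a cell are assumptions.

```python
import numpy as np

def rasterize_top_view(points, grid_size=0.125, width=512, height=512):
    """Rasterize a lidar point cloud into a top-view image.

    points: (N, 4) array of (x, y, z, intensity).
    Each pixel covers a grid_size x grid_size (12.5 x 12.5 cm) cell and
    stores the height and intensity of one point falling into that cell.
    """
    image = np.zeros((height, width, 2), dtype=np.float32)  # channels: height, intensity
    cols = (points[:, 0] / grid_size).astype(int)
    rows = (points[:, 1] / grid_size).astype(int)
    valid = (cols >= 0) & (cols < width) & (rows >= 0) & (rows < height)
    # Later points overwrite earlier ones, i.e. an arbitrary point per cell.
    image[rows[valid], cols[valid], 0] = points[valid, 2]
    image[rows[valid], cols[valid], 1] = points[valid, 3]
    return image
```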

We directly regress the 3D boxes: for each pixel of the image the network predicts a confidence between 0 and 1, seven box parameters (dx_centroid, dy_centroid, z_centroid, width, height, dx_front, dy_front), and class scores.
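A per-pixel prediction with these seven parameters might be decoded into a box like this. This is a sketch under assumptions: it treats the dx/dy values as metric offsets from the pixel position and the front point as the midpoint of the box's front face (so the centroid-to-front vector gives yaw and half the length); the repository's exact parametrization is shown in `pictures/box_parametrization.png`.

```python
import math

def decode_box(row, col, params, grid_size=0.125):
    """Decode one pixel's 7-parameter prediction into a 3D box (illustrative sketch)."""
    dx_c, dy_c, z_c, width, height, dx_f, dy_f = params
    # Centroid and front-face midpoint, offset from the pixel position (assumed).
    x_c = col * grid_size + dx_c
    y_c = row * grid_size + dy_c
    x_f = col * grid_size + dx_f
    y_f = row * grid_size + dy_f
    # The centroid-to-front vector encodes orientation and half the box length.
    yaw = math.atan2(y_f - y_c, x_f - x_c)
    length = 2.0 * math.hypot(x_f - x_c, y_f - y_c)
    return {"centroid": (x_c, y_c, z_c), "size": (length, width, height), "yaw": yaw}
```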

We apply binary cross-entropy for the confidence loss, an L1 loss for all box parameter regression, and a softmax loss for class prediction. The confidence map is computed from the ground-truth boxes: the cell closest to each box centroid is assigned confidence 1.0 (green in the image above), all other cells 0. The confidence loss is applied to all pixels; the other losses are applied only to pixels whose ground-truth confidence is 1.0.

## The dataset preparation

We work with the Pandaset dataset, which can be downloaded here: [Pandaset](https://pandaset.org/). Download and unpack all the data into a dataset folder (e.g. `~/dataset`). The dataset should have the following folder structure:
```bash
dataset
├── 001                      # The sequence number
│   ├── annotations          # Bounding boxes and semseg annotations
│   │   ├── cuboids
│   │   │   ├── 00.pkl.gz
│   │   │   └── ...
│   │   └── semseg
│   │       ├── 00.pkl.gz
│   │       └── ...
│   ├── camera               # camera images
│   │   ├── back_camera
│   │   │   ├── 00.jpg
│   │   │   └── ...
│   │   ├── front_camera
│   │   └── ...
│   ├── lidar                # lidar data
│   │   ├── 00.pkl.gz
│   │   └── ...
│   └── meta
│       ├── gps.json
│       └── timestamps.json
├── 002
└── ...
```
Preprocess the dataset with the following commands:
```
cd lidar_dynamic_objects_detection/detection_3d/data_preprocessing/pandaset_tools
python preprocess_data.py --dataset_dir
```
Create the dataset lists:
```
cd lidar_dynamic_objects_detection/detection_3d/
python create_dataset_lists.py --dataset_dir
```
This should create ```train.datatxt``` and ```val.datatxt``` in your dataset folder. Finally, set the dataset directory in ```parameters.py```.
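The three loss terms described under "The method" above can be sketched as follows. This is a NumPy sketch of the described scheme, not the repository's TensorFlow code (`detection_3d/losses.py`); shapes and naming are assumptions.

```python
import numpy as np

def detection_losses(conf_pred, conf_gt, box_pred, box_gt, cls_logits, cls_gt, eps=1e-7):
    """Confidence (BCE), box (L1), and class (softmax cross-entropy) losses.

    conf_pred, conf_gt: (H, W) confidence maps; conf_gt is 1.0 only at the
    cell closest to a ground-truth box centroid, 0 elsewhere.
    box_pred, box_gt: (H, W, 7) box parameters.
    cls_logits: (H, W, C) class scores; cls_gt: (H, W) integer class ids.
    """
    # Binary cross-entropy over ALL pixels.
    p = np.clip(conf_pred, eps, 1.0 - eps)
    conf_loss = -np.mean(conf_gt * np.log(p) + (1 - conf_gt) * np.log(1 - p))

    # Box and class losses only at cells with ground-truth confidence 1.0.
    mask = conf_gt == 1.0
    box_loss = np.abs(box_pred[mask] - box_gt[mask]).mean() if mask.any() else 0.0

    if mask.any():
        logits = cls_logits[mask]
        logits = logits - logits.max(axis=1, keepdims=True)  # numerical stability
        log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
        cls_loss = -log_probs[np.arange(mask.sum()), cls_gt[mask]].mean()
    else:
        cls_loss = 0.0
    return conf_loss, box_loss, cls_loss
```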
## Train

To train the network:
```
python train.py
```
To resume training:
```
python train.py --resume
```
Training can be monitored in TensorBoard:
```
tensorboard --logdir=log
```

## Inference on the validation dataset

To run inference on the validation dataset:
```
python validation_inference.py --dataset_file /val.datatxt --output_dir --model_dir
```
The result of the inference is 3D boxes, together with the boxes visualized on the top-view image. The visualized top-view image (top) is concatenated with the ground-truth top-view image (bottom).
