Velodyne Lidar Point Cloud Clustering Algorithm

  • Uploader: s2_125863
  • File size: 2 MB
  • Format: zip
  • Uploaded: 2022-05-14 08:16
Takes Velodyne Lidar data as input and clusters the point cloud. Built on a Qt GUI; the algorithm runs in real time, segments well, and works with 16-, 32-, and 64-beam lidar data.
depth_clustering.zip
Description
# Depth Clustering #

[![Build Status][travis-img]][travis-link] [![Codacy Badge][codacy-img]][codacy-link] [![Coverage Status][coveralls-img]][coveralls-link]

This is a fast and robust algorithm to segment point clouds taken with a Velodyne sensor into objects. It works with all available Velodyne sensors, i.e. the 16-, 32-, and 64-beam ones.

Check out a video that shows all objects which have a bounding box with a volume of less than 10 cubic meters:

[![Segmentation illustration](https://img.youtube.com/vi/UXHX9kFGXfg/0.jpg)](https://www.youtube.com/watch?v=UXHX9kFGXfg "Segmentation")

## Prerequisites ##

I recommend using a virtual environment in your catkin workspace (`<catkin_ws>` in this readme) and will assume that you have it set up throughout this readme. Please update your commands accordingly if needed. I will be using `pipenv`, which you can install with `pip`.

### Set up workspace and catkin ###

Regardless of your system, you will need to do the following steps:

```bash
cd <catkin_ws>            # navigate to the workspace
pipenv shell --fancy      # start a virtual environment
pip install catkin-tools  # install catkin-tools for building
mkdir src                 # create src dir if you don't have it already

# Now you just need to clone the repo:
git clone https://github.com/PRBonn/depth_clustering src/depth_clustering
```

### System requirements ###

You will need OpenCV, QGLViewer, FreeGLUT, Qt4 or Qt5, and optionally PCL and/or ROS.
The following sections contain an installation command for various Ubuntu systems (click folds to expand):

<details>
<summary>Ubuntu 14.04</summary>

#### Install these packages: ####

```bash
sudo apt install libopencv-dev libqglviewer-dev freeglut3-dev libqt4-dev
```

</details>

<details>
<summary>Ubuntu 16.04</summary>

#### Install these packages: ####

```bash
sudo apt install libopencv-dev libqglviewer-dev freeglut3-dev libqt5-dev
```

</details>

<details>
<summary>Ubuntu 18.04</summary>

#### Install these packages: ####

```bash
sudo apt install libopencv-dev libqglviewer-dev-qt5 freeglut3-dev qtbase5-dev
```

</details>

### Optional requirements ###

If you want to use PCL clouds and/or use ROS for data acquisition, you can install the following:

- (optional) PCL - needed for saving clouds to disk
- (optional) ROS - needed for subscribing to topics

## How to build? ##

This is a catkin package, so we assume that the code is in a catkin workspace and that CMake knows about the existence of catkin. This should already be taken care of if you followed the instructions [here](#set-up-workspace-and-catkin). Then you can build it from the project folder:

```bash
mkdir build
cd build
cmake ..
make -j4
ctest -VV  # run unit tests, optional
```

It can also be built with `catkin_tools` if the code is inside a catkin workspace:

```bash
catkin build depth_clustering
```

P.S. In case you don't use `catkin build`, you [should][catkin_tools_docs] reconsider your decision.

## How to run? ##

See [examples](examples/). There are ROS nodes as well as standalone binaries. Examples include showing axis-oriented bounding boxes around found objects (these start with the `show_objects_` prefix) as well as a node to save all segments to disk. The examples should be easy to tweak for your needs.
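## How does it work? ##

The segmentation criterion comes from the cited IROS 2016 paper: for two neighboring range-image pixels with depths `d1 >= d2`, separated by the sensor's angular step `alpha`, the angle `beta = atan2(d2 * sin(alpha), d1 - d2 * cos(alpha))` is compared against a threshold; a large `beta` means the two points likely lie on the same object. A minimal Python sketch of just this test (this is not the project's C++ code; the example depths and the 10-degree threshold are illustrative):

```python
import math

def same_segment(d1, d2, alpha, theta_deg=10.0):
    """Angle-based criterion (Bogoslavskyi & Stachniss, IROS 2016):
    d1 and d2 are depths of two neighboring range-image pixels and
    alpha is the angular step between them (radians). The pixels are
    grouped into the same object if the angle beta exceeds theta."""
    d1, d2 = max(d1, d2), min(d1, d2)  # convention: d1 is the larger depth
    beta = math.atan2(d2 * math.sin(alpha), d1 - d2 * math.cos(alpha))
    return beta > math.radians(theta_deg)

# Nearly equal depths of neighboring pixels -> flat surface, same object
print(same_segment(10.0, 9.9, math.radians(0.4)))  # True
# A large depth jump -> beta is small, likely an object boundary
print(same_segment(10.0, 4.0, math.radians(0.4)))  # False
```

The full algorithm applies this test while flood-filling connected components over the range image, which is what makes it fast enough for real-time use.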
## Run on real world data ##

Go to the folder with the binaries:

```
cd <path_to_project>/build/devel/lib/depth_clustering
```

#### Frank Moosmann's "Velodyne SLAM" Dataset ####

Get the data:

```
mkdir data/
wget http://www.mrt.kit.edu/z/publ/download/velodyneslam/data/scenario1.zip -O data/moosmann.zip
unzip data/moosmann.zip -d data/
rm data/moosmann.zip
```

Run a binary to show detected objects:

```
./show_objects_moosmann --path data/scenario1/
```

Alternatively, you can run the data from the Qt GUI (as in the video):

```
./qt_gui_app
```

Once the GUI is shown, click the <kbd>OpenFolder</kbd> button and choose the folder where you have unpacked the `png` files, e.g. `data/scenario1/`. Navigate the viewer with the arrows and controls seen on screen.

#### Other data ####

There are also examples of how to run the processing on KITTI data and on ROS input. Follow the `--help` output of each of the examples for more details. You can also load the data from the GUI. Make sure you are loading files with the correct extension (`*.txt` and `*.bin` for KITTI, `*.png` for Moosmann's data).

## Documentation ##

You should be able to generate the Doxygen documentation by running:

```
cd doc/
doxygen Doxyfile.conf
```

## Related publications ##

Please cite the related papers if you use this code:

```
@InProceedings{bogoslavskyi16iros,
  title     = {Fast Range Image-Based Segmentation of Sparse 3D Laser Scans for Online Operation},
  author    = {I. Bogoslavskyi and C. Stachniss},
  booktitle = {Proc. of The International Conference on Intelligent Robots and Systems (IROS)},
  year      = {2016},
  url       = {http://www.ipb.uni-bonn.de/pdfs/bogoslavskyi16iros.pdf}
}
```

```
@Article{bogoslavskyi17pfg,
  title   = {Efficient Online Segmentation for Sparse 3D Laser Scans},
  author  = {I. Bogoslavskyi and C. Stachniss},
  journal = {PFG -- Journal of Photogrammetry, Remote Sensing and Geoinformation Science},
  year    = {2017},
  pages   = {1--12},
  url     = {https://link.springer.com/article/10.1007%2Fs41064-016-0003-y},
}
```

[travis-img]: https://img.shields.io/travis/PRBonn/depth_clustering/master.svg?style=for-the-badge
[travis-link]: https://travis-ci.org/PRBonn/depth_clustering
[coveralls-img]: https://img.shields.io/coveralls/github/PRBonn/depth_clustering/master.svg?style=for-the-badge
[coveralls-link]: https://coveralls.io/github/PRBonn/depth_clustering
[codacy-img]: https://img.shields.io/codacy/grade/338a7f3c5b9c4323b1de266900ca20ff.svg?style=for-the-badge
[codacy-link]: https://www.codacy.com/project/zabugr/depth_clustering/dashboard?utm_source=github.com&amp;utm_medium=referral&amp;utm_content=PRBonn/depth_clustering&amp;utm_campaign=Badge_Grade_Dashboard
[build-status-img]: https://gitlab.ipb.uni-bonn.de/igor/depth_clustering/badges/master/build.svg
[coverage-img]: https://gitlab.ipb.uni-bonn.de/igor/depth_clustering/badges/master/coverage.svg
[commits-link]: https://gitlab.ipb.uni-bonn.de/igor/depth_clustering/commits/master
[catkin_tools_docs]: https://catkin-tools.readthedocs.io/en/latest/installing.html
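## A note on KITTI `*.bin` files ##

The KITTI `*.bin` scans mentioned above are raw little-endian float32 streams of `(x, y, z, reflectance)` tuples, one tuple per point. A minimal stdlib-only loader sketch (the function name `load_kitti_bin` is ours, not part of this project):

```python
import struct

def load_kitti_bin(path):
    """Read a KITTI velodyne scan: a raw little-endian float32 stream
    of (x, y, z, reflectance) tuples, one tuple per point."""
    with open(path, "rb") as f:
        raw = f.read()
    n = len(raw) // 16  # 4 floats x 4 bytes per point
    return [struct.unpack_from("<4f", raw, i * 16) for i in range(n)]
```

In practice you would hand the resulting points to the standalone binaries or convert them to the cloud type the examples expect.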