pcl_495: a Point Cloud Library / ROS application using nodelets

pcl_495-master.zip
  • pcl_495-master
    • images
      • point_cloud_pass_through_all_and_voxel_grid.png (61KB)
      • rgbd_points_colored_openni2.png (279.1KB)
      • cpu_load_with_nodelets.png (22.6KB)
      • cpu_load_no_nodelets.png (21.7KB)
      • ransac_ex.png (23.7KB)
      • rgbd_points.png (24.3KB)
      • point_cloud_pass_through_x.png (180.5KB)
      • point_cloud_floor_segmentation.png (40.3KB)
      • pcl_conversions.png (20.4KB)
      • ex_pcl_doc.png (32.8KB)
      • point_cloud_pass_through_x_y_z.png (59.5KB)
      • rgbd_points_colored.png (153KB)
    • viz_nodelets.rviz (4.4KB)
    • launch
      • pcl_project.launch (278B)
      • pcl_project_w_nodelets.launch (1.9KB)
    • src
      • pcl_node.cpp (6.3KB)
      • pcl_node_eucl.cpp (2KB)
      • pcl_node_w_nodelets.cpp (5.2KB)
    • CMakeLists.txt (5.3KB)
    • README.md (21.1KB)
    • viz.rviz (4.4KB)
    • package.xml (2.6KB)
Content Description
Project Description
------------------------
Using an RGB-D sensor and the Point Cloud Library, we will separate the ground plane from the rest of the scene. If we complete all of our main project goals, we will implement a version of this that uses nodelets. If we get a nodelet version of our project running, we will use euclidean clustering to find objects in the scene and potentially calculate cool things like the velocity of a rolling cylinder.

Project Goals
------------------------
1. Test different RGB-D sensors
2. Test different RGB-D sensor drivers (openni vs openni2)
3. Visualize the point cloud produced by the RGB-D sensors in rviz
4. Using the sensor and the PCL (Point Cloud Library), separate the floor (or any plane parallel to the camera's frame) from the rest of the scene

Additional goals:

1. Implement a version of this that uses ROS nodelets
2. Find objects in the scene
3. Perform additional calculations on an object (like finding the center of the object and tracking it over time)

Testing the Hardware
-----------------------
### Choice of Microsoft XBOX 360 Kinect or Asus Xtion PRO LIVE

From our perspective, the main differences between the Microsoft Kinect and the Asus Xtion PRO Live are the size, form factor, and power supply. The Kinect is larger and needs to plug into an outlet to be powered; the Asus is smaller and is powered over USB. There are a few other small differences between the two sensors (some can be found [here](http://wiki.ipisoft.com/Depth_Sensors_Comparison) and [here](http://answers.ros.org/question/12900/microsoft-kinect-or-asus-xtion/)).

Our experience using each of the two devices is summarized below:

### Asus Xtion PRO Live

* Plug it into a USB port
* Run `roslaunch openni_launch openni.launch`
  * Works on Alex's computer; does not work on Ritwik's or Ji-hoon's
* Run `roslaunch openni2_launch openni2.launch`
  * Works on everyone's computer
* Run rviz
  * Set the global options fixed frame to "camera_link"
  * Add a PointCloud2 display to rviz
  * Set the PointCloud2 topic to "/camera/depth/points"

### Microsoft Kinect

* Plug it into a USB port (and the power cord)
* Run `roslaunch openni_launch openni.launch`
  * Initially this caused a problem: we were told that no device was found. We are using ROS Indigo and Ubuntu 14.04, but we found [this](http://answers.ros.org/question/60562/ubuntu-12042-and-openni_launch-not-detecting-kinect-after-update/) help page.
  * And yep, that fixed our problem! It seems the previously installed openni driver was at fault; running the script from the top-rated answer on the aforementioned help page solved it.
* Re-run `roslaunch openni_launch openni.launch`
* Run rviz
  * Set the global options fixed frame to "camera_link"
  * Add a PointCloud2 display to rviz
  * Set the PointCloud2 topic to "/camera/depth/points"

Using either sensor, you can see the camera image in a different viewer by running the following command:

```
rosrun image_view image_view image:=/camera/depth/image
```

When you first start up the sensors, the resulting point clouds are not registered. To register the point clouds you can follow these steps:

* Run `rosrun rqt_reconfigure rqt_reconfigure`
* Under the camera tab, click 'driver'
* On the driver menu, check the box next to 'depth_registration'
* Close rqt_reconfigure
* Go back to rviz
* Change the PointCloud2 topic to '/camera/depth_registered/points' (the small check node sketched below is a quick way to confirm that registered clouds are actually arriving)

We tested the Kinect and Xtion simultaneously on two separate computers. It was immediately obvious that there is interference between the two sensors. The Xtion seemed particularly sensitive within any proximity to the Kinect sensor. According to the above sources, the Xtion should be resilient to interference from another Xtion, but it is clearly susceptible to interference from a Kinect.
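Before moving on to PCL, it is worth confirming that registered clouds are actually being published on the new topic. A minimal check node along the following lines (our own illustration, not part of the pcl_495 sources) simply prints the size of each incoming cloud:

```
#include <ros/ros.h>
#include <sensor_msgs/PointCloud2.h>

// Illustrative helper, not part of the pcl_495 repository: report the size of
// each registered cloud so we know the driver is publishing what we expect.
void cloudCallback(const sensor_msgs::PointCloud2ConstPtr& msg)
{
  ROS_INFO("Registered cloud: %u x %u points in frame '%s'",
           msg->width, msg->height, msg->header.frame_id.c_str());
}

int main(int argc, char** argv)
{
  ros::init(argc, argv, "cloud_check");
  ros::NodeHandle nh;
  ros::Subscriber sub =
      nh.subscribe("/camera/depth_registered/points", 1, cloudCallback);
  ros::spin();
  return 0;
}
```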
Summary of differences between Openni and Openni2 drivers
-----------------------
We noticed that which devices work with openni1 versus openni2 differs. The way we have it set up so far:

* Out of the box, openni1 works with the Asus on Alex's computer
* It did not work for Ritwik or Ji-hoon out of the box (they do have the same computer); they had to use openni2 instead, which worked out of the box
* Running the [Avin2](http://answers.ros.org/question/60562/ubuntu-12042-and-openni_launch-not-detecting-kinect-after-update/) fix linked above makes openni1 work with the Kinect, but breaks the Asus driver. We can then use Jarvis's [solution](http://answers.ros.org/question/109411/asus-xtion-problems-with-ubuntu-1204-running-ros-fuerte/#109831) to make the Asus work with openni1 again, or we can simply use openni2

Installing and setting up the PCL (Point Cloud Library)
-----------------------
> The Point Cloud Library (PCL) is a standalone, large scale, open project for 2D/3D image and point cloud processing. [PCL Website](http://pointclouds.org/)

Essentially, this library makes it easy to work with the depth data provided by RGB-D sensors. It represents the depth data as a cloud of points in (x, y, z) space and implements a number of useful algorithms to filter and otherwise process that data. It is also convenient that PCL can be used in ROS, with ROS-specific data types. Less convenient is the available documentation: development is ongoing, and changes often happen quickly and without sufficient documentation. Hopefully this write-up explains some of it more clearly and helps you get started using PCL in your ROS application.

**Below is a detailed account of our experience getting PCL set up in ROS.**

For some reason Alex already had it installed on his system, but Ritwik did not. If you do not have it installed on your computer, run:

```
sudo apt-get install ros-indigo-pcl-*
```

This will include the 'pcl-ros', 'pcl-msgs', and 'pcl-conversions' packages.

When developing our ROS node, we initialized our package.xml file to depend on: geometry_msgs, roscpp, rospy, std_msgs, sensor_msgs, pcl_ros, pcl_msgs, pcl_conversions.

We later found out that pcl_conversions is either mostly or completely deprecated. The main purpose of this package is to convert between ROS-specific PCL messages (which can be sent on topics just like standard ROS messages) and PCL objects. However, PCL is moving towards being able to handle the ROS PCL messages just as if they were PCL objects. We actually ended up converting between the two object types at certain times and not converting at other times. It is our belief that you do not need to convert between the two types if you are using ROS Indigo and the newest version of PCL (1.7).

Back to the task at hand. To get PCL up and running in our ROS application, we started by following the online [tutorial](http://wiki.ros.org/pcl/Tutorials). The first thing to note is that the documentation for PCL in Indigo is unavailable, so we decided to follow the Hydro documentation. We copied the example file, gave the node a name, and re-ran catkin_make. The build succeeded, so the conversion from Hydro to Indigo seems to have gone smoothly so far!
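For reference, the tutorial's example boils down to a node that subscribes to an input cloud, runs it through a PCL filter, and republishes the result. A sketch along those lines (the voxel-grid variant; the leaf size is illustrative, and the exact file we copied may differ slightly):

```
#include <ros/ros.h>
#include <sensor_msgs/PointCloud2.h>
#include <pcl_conversions/pcl_conversions.h>
#include <pcl/filters/voxel_grid.h>

ros::Publisher pub;

void cloudCallback(const sensor_msgs::PointCloud2ConstPtr& input)
{
  // Convert the ROS message into PCL's binary point cloud type.
  pcl::PCLPointCloud2::Ptr cloud(new pcl::PCLPointCloud2());
  pcl_conversions::toPCL(*input, *cloud);

  // Downsample with a voxel grid filter (1 cm leaves, purely illustrative).
  pcl::PCLPointCloud2 filtered;
  pcl::VoxelGrid<pcl::PCLPointCloud2> vox;
  vox.setInputCloud(cloud);
  vox.setLeafSize(0.01f, 0.01f, 0.01f);
  vox.filter(filtered);

  // Convert back and republish for rviz.
  sensor_msgs::PointCloud2 output;
  pcl_conversions::fromPCL(filtered, output);
  pub.publish(output);
}

int main(int argc, char** argv)
{
  ros::init(argc, argv, "pcl_example");
  ros::NodeHandle nh;
  ros::Subscriber sub = nh.subscribe("input", 1, cloudCallback);
  pub = nh.advertise<sensor_msgs::PointCloud2>("output", 1);
  ros::spin();
  return 0;
}
```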
Our next step was to test some basic functionality. We wanted to see the example code run, so we needed to run:

* roscore
* openni2
* rviz
* our node

all together. When running this, we needed to remap the output of openni2 to our /input topic. You can see the resulting point cloud in the following pictures:

![Point Cloud](images/rgbd_points.png)
![Point Cloud](images/rgbd_points_colored.png)

After a few trials, the process of running all of these files together was frustrating, so we decided to write a launch file. We created a launch directory and a launch file, 'pcl_project.launch'. The launch file includes 'openni2', launches our own node, and launches 'rviz'. We tested it, and it worked. We then updated our launch file to launch openni2 with the argument depth_registration=true, which automatically depth-registers the values in the point cloud.
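The floor-separation goal itself comes down to fitting a plane to the cloud with RANSAC and keeping the points on either side of it. As a hedged sketch of that step, assuming pcl::PointXYZ clouds and illustrative parameter values (the repository's pcl_node.cpp may do this differently):

```
#include <pcl/point_types.h>
#include <pcl/point_cloud.h>
#include <pcl/ModelCoefficients.h>
#include <pcl/PointIndices.h>
#include <pcl/segmentation/sac_segmentation.h>
#include <pcl/filters/extract_indices.h>

// Sketch: separate the dominant plane (the floor) from the rest of a cloud
// with RANSAC. Parameter values are illustrative, not the repository's.
void removeFloor(const pcl::PointCloud<pcl::PointXYZ>::Ptr& cloud,
                 pcl::PointCloud<pcl::PointXYZ>& objects)
{
  pcl::ModelCoefficients coefficients;
  pcl::PointIndices::Ptr inliers(new pcl::PointIndices);

  // Fit a plane model to the cloud with RANSAC.
  pcl::SACSegmentation<pcl::PointXYZ> seg;
  seg.setOptimizeCoefficients(true);
  seg.setModelType(pcl::SACMODEL_PLANE);
  seg.setMethodType(pcl::SAC_RANSAC);
  seg.setDistanceThreshold(0.02);  // 2 cm tolerance around the plane
  seg.setInputCloud(cloud);
  seg.segment(*inliers, coefficients);

  // Keep everything that is NOT part of the plane.
  pcl::ExtractIndices<pcl::PointXYZ> extract;
  extract.setInputCloud(cloud);
  extract.setIndices(inliers);
  extract.setNegative(true);
  extract.filter(objects);
}
```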