Kinect 3D Video Capture Project

Category: Windows programming
Development tool: C++
File size: 369KB
Downloads: 2
Upload date: 2017-12-13 01:12:32
Uploader: MinhLe
Description: Kinect 3D Video Capture Project version 3.2, Copyright (c) 2010-2016 Oliver Kreylos

File list:
AlignPoints.cpp (17470, 2016-10-15)
AlignPoints2.cpp (23342, 2016-10-15)
BuildRoot (0, 2016-10-15)
BuildRoot\Packages.Kinect (1979, 2016-10-15)
COPYING (17995, 2016-10-15)
CalibrateCameras.cpp (6526, 2016-10-15)
CalibrateDepth.cpp (5487, 2016-10-15)
CalibrationCheckTool.cpp (6960, 2016-10-15)
CalibrationCheckTool.h (3038, 2016-10-15)
ColorCompressionTest.cpp (3123, 2016-10-15)
CompressDepthFile.cpp (4075, 2016-10-15)
DepthCompressionTest.cpp (4416, 2016-10-15)
DepthCorrectionTool.cpp (9473, 2016-10-15)
DepthCorrectionTool.h (2928, 2016-10-15)
FindBlobs.h (2731, 2016-10-15)
FindBlobs.icpp (5004, 2016-10-15)
GridTool.cpp (29457, 2016-10-15)
GridTool.h (4401, 2016-10-15)
HISTORY (8837, 2016-10-15)
IFFChunkWriter.h (4739, 2016-10-15)
Kinect (0, 2016-10-15)
Kinect\Camera.cpp (51790, 2016-10-15)
Kinect\Camera.h (12561, 2016-10-15)
Kinect\CameraRealSense.cpp (25486, 2016-10-15)
Kinect\CameraRealSense.h (5655, 2016-10-15)
Kinect\CameraV2.cpp (11956, 2016-10-15)
Kinect\CameraV2.h (3114, 2016-10-15)
Kinect\ColorFrameReader.cpp (6227, 2016-10-15)
Kinect\ColorFrameReader.h (1972, 2016-10-15)
Kinect\ColorFrameWriter.cpp (4519, 2016-10-15)
Kinect\ColorFrameWriter.h (2218, 2016-10-15)
Kinect\Config.h (1374, 2016-10-15)
Kinect\CornerExtractor.cpp (28593, 2016-10-15)
Kinect\CornerExtractor.h (8253, 2016-10-15)
Kinect\DepthFrameReader.cpp (5545, 2016-10-15)
Kinect\DepthFrameReader.h (3276, 2016-10-15)
Kinect\DepthFrameWriter.cpp (11904, 2016-10-15)
Kinect\DepthFrameWriter.h (3233, 2016-10-15)
Kinect\DirectFrameSource.cpp (20008, 2016-10-15)
... ...

========================================================================
README for Kinect 3D Video Capture Project version 3.2
Copyright (c) 2010-2016 Oliver Kreylos
========================================================================

Requirements
============

This software requires the Vrui VR toolkit, version 4.2 build 001 or
newer. It also requires libusb version 1.0, and that Vrui was configured
with support for libusb prior to its installation. To properly work with
multiple Kinect-for-Xbox version 1473 and/or Kinect-for-Windows devices,
the libusb library needs to provide USB bus topology query functions
such as those provided by the libusbx fork.

This software can optionally create Kinect 3D video streaming plug-ins
for Vrui's collaboration infrastructure (version 2.7 or newer). The
presence of Vrui's collaboration infrastructure will be detected
automatically during the build process.

Installation Guide
==================

It is recommended to download or move the source packages for Vrui and
the Kinect 3D Video Capture Project into a src directory underneath the
user's home directory. Otherwise, references to ~/src in the following
instructions need to be changed.

0. Install Vrui from ~/src/Vrui-<version>-<build number> (see Vrui
   README file).

0.5. (Optional) Install Vrui's collaboration infrastructure from
     ~/src/CollaborationInfrastructure-<version> (see its README file).

1. Change into the ~/src directory and unpack the Kinect 3D Video
   Capture Project tarball:
   > cd ~/src
   > tar xfz <download path>/Kinect-<version>.tar.gz
   - or -
   > tar xf <download path>/Kinect-<version>.tar

2. Change into the Kinect 3D Video Capture Project's base directory:
   > cd Kinect-<version>

3. If the Vrui version installed in step 0 was not 4.2, or Vrui's
   installation directory was changed from the default of /usr/local,
   adapt the makefile using a text editor. Change the value of
   VRUI_MAKEDIR close to the beginning of the file as follows:
   VRUI_MAKEDIR := <Vrui install dir>/share/make
   where <Vrui install dir> is the installation directory chosen in
   step 0. Use $(HOME) to refer to the user's home directory instead
   of ~.

4. Build the Kinect 3D Video Capture Project:
   > make

5. Install the Kinect 3D Video Capture Project:
   > make install

6. (Optional, Linux-only) Install a udev rule file to give access to
   all Kinect devices to the user currently logged in to the computer's
   physical console. Read below for details.
   > sudo make installudevrules
   This will ask for the user's password to copy the rule file,
   69-Kinect.rules, into the udev rule directory /etc/udev/rules.d.

Use
===

1. Run ./bin/KinectUtil list to list all Kinect devices connected to
   the host computer.

2a. Run ./bin/KinectUtil getCalib <index> to download factory
    calibration data from the <index>-th Kinect device on the host's
    USB bus (with the first device having index 0) and create an
    intrinsic camera calibration file for that device in the Kinect
    package's configuration file directory.

-- or --

2b. Run ./bin/RawKinectViewer <index> to intrinsically calibrate the
    <index>-th Kinect device on the host's USB bus (with the first
    device having index 0), using a semi-transparent checkerboard
    calibration target (see next section for details).

This generates a binary IntrinsicParameters-<serial number>.dat file in
<install dir>/etc/Kinect-<version>, where <serial number> is the unique
factory-assigned serial number of the Kinect device as displayed in
step 1. Step 2 (a or b) only has to be performed once for each Kinect
device, unless the resulting calibration proves unsatisfactory. Kinect
devices typically retain their intrinsic calibration over time.

3. Run ./bin/KinectViewer -c <index>, where <index> is the zero-based
   index of the Kinect device to use if there are multiple ones.

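The USB bus topology query mentioned under Requirements can be exercised
outside the Kinect package. The following is a minimal stand-alone C++
sketch, not code from this project, that uses libusb-1.0 to enumerate
USB devices and print their bus numbers and port paths, similar in
spirit to what KinectUtil list needs in order to tell multiple identical
Kinect devices apart. Filtering by Microsoft's USB vendor ID 0x045e, the
eight-entry port buffer, and the suggested build command are
illustrative assumptions.

/* Stand-alone sketch: list USB devices with Microsoft's vendor ID and
   print their bus/port topology. Build with e.g.:
   g++ listusb.cpp -lusb-1.0 */
#include <cstdio>
#include <libusb-1.0/libusb.h>

int main(void)
{
    libusb_context* ctx = 0;
    if(libusb_init(&ctx) != 0)
        return 1;

    libusb_device** devices = 0;
    ssize_t numDevices = libusb_get_device_list(ctx, &devices);
    for(ssize_t i = 0; i < numDevices; ++i)
    {
        libusb_device_descriptor desc;
        if(libusb_get_device_descriptor(devices[i], &desc) != 0)
            continue;
        if(desc.idVendor != 0x045e) /* Microsoft's USB vendor ID (rough filter) */
            continue;

        /* Print bus number and port path (requires libusb >= 1.0.16 or libusbx): */
        std::printf("Device %04x:%04x on bus %u, port path:",
                    (unsigned int)desc.idVendor, (unsigned int)desc.idProduct,
                    (unsigned int)libusb_get_bus_number(devices[i]));
        uint8_t ports[8];
        int numPorts = libusb_get_port_numbers(devices[i], ports, 8);
        for(int p = 0; p < numPorts; ++p)
            std::printf(" %u", (unsigned int)ports[p]);
        std::printf("\n");
    }

    if(numDevices >= 0)
        libusb_free_device_list(devices, 1);
    libusb_exit(ctx);
    return 0;
}
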
Installing a udev rule (Linux-only)
===================================

By default, Kinect devices can only be accessed by the root user. This
is inconvenient and a security risk, as all Kinect applications must be
run as root. The Kinect package contains a udev rule file to give full
access to any Kinect device to the user currently logged in to the
local console; see optional installation step 6 above.

After the rule file has been installed, it needs to be activated. The
recommended way is to reload the device manager's rules:

> sudo udevadm control --reload
> sudo udevadm trigger --action=change

Alternatively, the rule can be activated by unplugging the Kinect from
its USB port, waiting a few seconds, and then plugging it back in, or,
if everything else fails, by restarting the computer. After initial
installation, the rule file will be activated automatically when the
computer boots, or when a Kinect device is plugged into any USB port.

Intrinsically Calibrating a Single Kinect
=========================================

Before a Kinect can be used as a 3D camera, its two cameras need to be
calibrated. Calibration will calculate a 4x4 projection matrix to map
the depth camera's image stream into 3D space as a 3D point cloud or
triangulated surface, and a 4x4 projection matrix to map the color
camera's image stream onto that point cloud or surface. After proper
calibration, the virtual 3D objects captured by a Kinect will precisely
match the original real objects in shape and size.

The quickest way to get intrinsic calibration data for a Kinect is to
download the factory calibration data that is stored in each Kinect's
firmware. The Kinect package contains a utility for this purpose. In a
terminal, run:

> KinectUtil getCalib <index>

where <index> is the index of the Kinect whose data you want to
download. If you have only one Kinect connected to your computer, its
index is 0. KinectUtil will download the data and copy it into a file
that will later be found automatically by all Kinect applications. The
quality of factory calibration is generally not as good as what can be
achieved by custom calibration (see below), especially for Xbox 360
Kinects.

Custom intrinsic calibration has two sub-procedures. The first,
optional, one is to calculate per-pixel depth correction equations. Due
to radial lens distortion in the IR pattern projector and the IR
camera, a completely flat surface seen by the Kinect will not be
reconstructed as a flat surface, but as a bowl shape. Depth correction
will calculate a linear equation for each depth pixel that can rectify
distance values. The second sub-procedure is to calculate 4x4
homogeneous depth unprojection and color projection matrices. The
results of sub-procedure two depend on sub-procedure one, meaning that
the second has to be performed again every time the first one is
performed.

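The following is a minimal conceptual sketch, not the Kinect package's
actual code or file format, of how the two 4x4 matrices described above
are typically applied: the depth unprojection matrix maps a depth pixel
(column, row, raw depth) into 3D space via a homogeneous multiply and
divide, and the color projection matrix maps the resulting 3D point to
color image texture coordinates. Row-major matrix storage and the
interpretation of the raw depth value are assumptions for illustration.

/* Conceptual sketch: applying a 4x4 depth unprojection matrix and a
   4x4 color projection matrix. */
struct Point3
{
    double x, y, z;
};

/* Multiply a homogeneous vector (x, y, z, 1) by a row-major 4x4 matrix
   and perform the homogeneous divide: */
static Point3 transformHomogeneous(const double m[16], double x, double y, double z)
{
    double r[4];
    for(int i = 0; i < 4; ++i)
        r[i] = m[i*4+0]*x + m[i*4+1]*y + m[i*4+2]*z + m[i*4+3];
    Point3 p = {r[0]/r[3], r[1]/r[3], r[2]/r[3]};
    return p;
}

/* Unproject one depth pixel (column, row, raw depth) into 3D camera space: */
Point3 unprojectDepthPixel(const double depthProjection[16], int px, int py, double rawDepth)
{
    return transformHomogeneous(depthProjection, double(px), double(py), rawDepth);
}

/* Project a 3D point into the color image to obtain texture coordinates
   (s, t) for texturing the point cloud or surface: */
void projectToColor(const double colorProjection[16], const Point3& p, double& s, double& t)
{
    Point3 c = transformHomogeneous(colorProjection, p.x, p.y, p.z);
    s = c.x;
    t = c.y;
}
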
Per-pixel Depth Correction
--------------------------

Calculating per-pixel depth correction equations only requires a flat
and ideally vertical surface large enough to fill the Kinect's
field-of-view at larger distances (up to about 2 meters). These are the
calibration procedure's steps:

1. Start RawKinectViewer for the Kinect to be calibrated.

2. Create a "Calibrate Depth Lens" tool by binding it to two (2)
   buttons (keyboard keys or mouse buttons). Here, we will use the keys
   "1" and "2". Pressing the "1" key will capture the current depth
   image as a calibration point, and pressing the "2" key will
   calculate per-pixel depth correction equations after several depth
   images have been captured.

3. Move the Kinect such that it faces the flat surface orthogonally and
   at close distance. The best approach is to position the Kinect to be
   roughly orthogonal to the surface, and push it so close that the
   entire depth image turns black (this happens at a distance of around
   50 cm). Then slowly pull the Kinect back until about half the pixels
   have valid depth values. Then carefully rotate the Kinect until the
   valid pixels are distributed evenly across the depth image, and then
   pull back slightly further until most or all pixels have valid
   depth.

4. Press the "1" key to capture the current depth image. This will
   capture a number of frames, as indicated by the pop-up window. Do
   not move the Kinect while the pop-up window stays on the screen.

5. Move the Kinect further back, ideally until the depth image
   straddles a sharp change in depth mapping color. Carefully rotate
   the Kinect such that pixels of different colors are evenly
   distributed, and adjust the distance until there are approximately
   the same number of pixels of either color.

6. Repeat from step 4 until the distance from the Kinect to the flat
   surface is larger than the largest distance you intend to capture
   during use, or until the Kinect is approximately 2 m away from the
   surface (the maximum practical capture distance).

7. Press the "2" key to calculate per-pixel depth correction equations.
   When the calculation is complete, the correction parameters will
   automatically be written to a DepthCorrection-<serial number>.dat
   file in the Kinect package's configuration directory. The correction
   calculation may fail if there are too many depth pixels that don't
   have valid depth data, i.e., show up in black, in too many depth
   images. If an error message appears after pressing "2", capture
   additional depth images within the desired depth range as in step 4,
   and press "2" again.

8. Exit from RawKinectViewer.

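The following is a minimal conceptual sketch, not the Kinect package's
actual code or the format of the DepthCorrection-<serial number>.dat
file, of how per-pixel depth correction equations of the kind
calculated above are applied: each depth pixel gets its own linear
equation, so rectifying a depth frame is one multiply-add per pixel.
The structure and field names are illustrative assumptions.

/* Conceptual sketch: applying per-pixel linear depth correction. */
#include <cstddef>
#include <vector>

struct PixelCorrection
{
    float scale;  /* multiplicative term of the per-pixel linear equation */
    float offset; /* additive term of the per-pixel linear equation */
};

/* Rectify a raw depth frame in place, one correction equation per pixel: */
void correctDepthFrame(std::vector<float>& depths,
                       const std::vector<PixelCorrection>& corrections)
{
    std::size_t n = depths.size() < corrections.size() ? depths.size() : corrections.size();
    for(std::size_t i = 0; i < n; ++i)
        depths[i] = corrections[i].scale*depths[i] + corrections[i].offset;
}
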
Projection Matrix Calculation
-----------------------------

The new intrinsic calibration method uses a semi-transparent
checkerboard calibration target. The idea is that the target looks like
a checkerboard to both the depth and color cameras. The user presents
the calibration target to the Kinect in a sequence of different poses,
and manually matches a 2D distorted grid to the observed calibration
target in the depth and color cameras' views using the RawKinectViewer
utility. After a sufficient number of poses have been captured,
RawKinectViewer calculates the two calibration matrices and stores them
in a uniquely identified file for subsequent retrieval whenever a
Kinect device is activated.

The easiest and most precise way to build a semi-transparent
checkerboard is to print a grid pattern onto a large sheet of paper,
glue the entire sheet of paper onto an IR-transparent glass plate, cut
precisely along all grid lines using a sharp knife and a metal ruler,
and then peel off every other paper square and remove any glue residue
from the now transparent tiles. The grid should have an odd number of
tiles in both directions such that all four corner tiles can stay
opaque. Mark the center of the lower-left corner tile with a small dot
(big enough to be visible in the Kinect's color image stream). This
mark will be used to check for grid alignment during calibration. The
number of tiles, and the size of each tile, can be configured in
RawKinectViewer, but the default grid layout has 7x5 tiles of 3.5"x3.5"
each. This leads to an overall target size of 24.5"x17.5", which is
just the right size to cover the Kinect's full viewing range.

The following is the intrinsic calibration procedure for a Kinect.
There are two tutorial videos on YouTube:

* http://www.youtube.com/watch?v=Qo05LVxdlfo explains steps 0 to 9 in
  the following procedure.

* http://www.youtube.com/watch?v=VQh4joyZwx8 explains step 10 in the
  following procedure.

These are the steps:

0. Prepare the calibration target.

1. Start RawKinectViewer for the Kinect to be calibrated. If the
   calibration target does not have the default layout, specify the
   layout using the -gridSize and -tileSize command line options.

2. Create a "Draw Grids" tool by binding it to six (6) buttons
   (keyboard keys or mouse buttons). Here, we will use the keys "1",
   "W", "2", "3", "4", "5", in that order. This will create two green
   grids, one in the depth and one in the color image stream. The grid
   can be deformed by grabbing any grid intersection using the mouse
   and the "1" key, or translated by grabbing the dot in the center of
   the grid, or rotated around the center by grabbing the dot slightly
   outside the right edge of the grid, all using the "1" key.

3. Place the calibration target in front of the Kinect at the next
   position and orientation. The target should be placed in a range of
   distances, starting from the near depth cutoff to the end of the
   desired tracking range, and in a variety of orientations from
   straight-on to about 45 degree angles. Calibration requires at least
   4 calibration poses, but ideally a larger number to improve
   calibration quality.

4. Collect an average depth frame by turning on the "Average Frames"
   button in the main menu. Wait until the depth image does not change
   anymore.

5. Align both the depth and color grids to the observed grids. The dot
   in the center of the lower-left corner tile must roughly coincide
   with the mark in the lower-left corner tile of the calibration
   target to ensure that the grid is not flipped or rotated. The
   orientation of the depth and color stream grids will be very
   similar.

6. Store the current calibration grids by pressing the "2" key.

7. Turn off the "Average Frames" button in the main menu so that the
   depth image stream updates in real time again.

8. Repeat from step 3 with the next calibration pose.

9. After all calibration poses have been captured, calculate the
   intrinsic calibration by pressing the "4" key. This will print some
   diagnostic information to the terminal, and create the calibration
   file specific to the Kinect device in the Kinect package's
   configuration directory. The files are named
   IntrinsicParameters-<serial number>.dat and contain the two
   calibration matrices in binary format. They cannot be hand-edited.

10. After calibration, use KinectViewer and a 3D measurement tool to
    ensure that the size and shape of the virtual calibration target as
    reconstructed by the Kinect match the real calibration target. A
    good calibration will match to within a few percent. Keep in mind
    that measurements inside KinectViewer will be reported in
    centimeters.

There is a complementary write-up of the calibration procedure at
http://doc-ok.org/?p=289

Recording 3D Movies
===================

KinectViewer now has a built-in recorder for 3D movies from one or more
Kinect devices. When running a live view (using one or more -c and/or
-p command line arguments), the live 3D streams can be saved to a set
of files by selecting the "Save Streams" main menu entry. KinectViewer
will ask for a file name prefix, and then save the depth and color
streams of all enabled Kinect devices to files
<prefix>-<serial number>.depth and <prefix>-<serial number>.color. It
will also record synchronized audio from the default capture device
(selectable from the sound control panel) and save it to <prefix>.wav.
Previously recorded 3D movies can be played back by running
KinectViewer with one or more -f <prefix>-<serial number> arguments,
one per saved 3D stream, and an optional -s <sound file name> argument
to play back synchronized audio.

Merging 3D Facades from Multiple Kinects
========================================

To merge the 3D reconstructions of multiple Kinect cameras, first
intrinsically calibrate each one using RawKinectViewer, as described
above and shown in the videos. Then extrinsically calibrate all Kinect
devices with respect to an arbitrarily chosen world coordinate system:

1. Place the calibration target into the field of view of all Kinect
   devices to be calibrated (use RawKinectViewer on each to check).

2. For each Kinect device:

   2a. Run RawKinectViewer and fit a grid to the calibration target's
       image in the depth stream (it is not necessary to fit a grid to
       the color stream).

   2b. Save and unproject the grid to receive a list of 3D tie points
       on the console.

   2c. Copy the tie points into a file KinectPoints-<serial number>.csv.

3. Create a file TargetPoints.csv containing the 3D interior corner
   positions of the calibration target, in some arbitrary right-handed
   world coordinate system using an arbitrary unit of measurement, in
   the same order as printed by RawKinectViewer (going from left to
   right, and then bottom to top). See the example target file below
   and the small generator sketch at the end of this section.

4. For each Kinect device:

   4a. Run the Kinect device's tie points and the target points through
       AlignPoints:
       AlignPoints -OG KinectPoints-<serial number>.csv TargetPoints.csv

   4b. Observe the mismatches between purple camera points and green
       target points in AlignPoints' display. If the discrepancies are
       too large, or there is no fit at all, repeat step 2 for the
       Kinect device.

   4c. Paste the final best-fitting transformation displayed by
       AlignPoints (everything after the "Best transformation:" header)
       into an ExtrinsicParameters-<serial number>.txt file in the
       Kinect package's configuration directory (Kinect-<version> in
       the Vrui installation's etc directory).

5. Run KinectViewer on all Kinect devices. This will show all
   individual 3D video streams matching more or less seamlessly,
   depending on how much care was taken during intrinsic and extrinsic
   calibration. Under ideal circumstances, there should be no
   noticeable mismatches between individual streams.

Unlike intrinsic calibration, extrinsic calibration has to be repeated
any time any of the calibrated Kinect devices is moved, even by very
little. It is recommended to rigidly attach all Kinect devices to a
sturdy frame before attempting precise extrinsic calibration in a
production setting. If precision is paramount, it might even be
necessary to repeat calibration periodically even if nothing has
changed, to account for time-varying deformations inside the devices
themselves.

An example TargetPoints.csv file, for a 5x4 calibration grid with a
tile size of 3" (a 5x4 grid has 4x3 interior corners):

0, 0, 0
3, 0, 0
6, 0, 0
9, 0, 0
0, 3, 0
3, 3, 0
6, 3, 0
9, 3, 0
0, 6, 0
3, 6, 0
6, 6, 0
9, 6, 0

The resulting world space will use whatever units were used to define
the target points; in the example above, world space will use inches.

There is a complementary write-up of the calibration procedure at
http://doc-ok.org/?p=295

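The TargetPoints.csv file from step 3 can also be generated
programmatically. The following is a minimal stand-alone sketch, not a
tool shipped with this package, that prints the interior corner
positions of a target with a given number of tiles and tile size, going
from left to right and then from bottom to top, in the format shown
above. The command line arguments and their defaults (5x4 tiles, 3"
tile size, first interior corner at the world origin) are illustrative
assumptions chosen to reproduce the example file.

/* Stand-alone sketch: generate TargetPoints.csv entries for a
   calibration target with the given tile counts and tile size. */
#include <cstdio>
#include <cstdlib>

int main(int argc, char* argv[])
{
    int width = argc > 1 ? std::atoi(argv[1]) : 5;          /* tiles in x */
    int height = argc > 2 ? std::atoi(argv[2]) : 4;         /* tiles in y */
    double tileSize = argc > 3 ? std::atof(argv[3]) : 3.0;  /* tile size in world units */

    /* A width x height tile grid has (width-1) x (height-1) interior
       corners; place the first interior corner at the world origin as
       in the example above: */
    for(int y = 0; y < height-1; ++y)     /* bottom to top */
        for(int x = 0; x < width-1; ++x)  /* left to right */
            std::printf("%g, %g, 0\n", double(x)*tileSize, double(y)*tileSize);

    return 0;
}

Compiled as, say, maketargetpoints (a hypothetical name), running
"maketargetpoints 5 4 3 > TargetPoints.csv" reproduces the example file
above; pass your own grid layout and tile size for a different target.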
