rodrigob-doppia-ba2f818c30cb

Category: MATLAB programming
Development tool: matlab
File size: 20099 KB
Downloads: 35
Upload date: 2014-03-28 21:35:46
Uploader: xiaowei19870119
Description: Real-time object classification, detection and tracking in video images.

File list:
rodrigob-doppia-ba2f818c30cb\.hg_archival.txt (148, 2013-11-23)
rodrigob-doppia-ba2f818c30cb\.hgignore (2439, 2013-11-23)
rodrigob-doppia-ba2f818c30cb\common_settings.cmake (9986, 2013-11-23)
rodrigob-doppia-ba2f818c30cb\data\sample_test_images\bahnhof\image_00000000_0.png (450464, 2013-11-23)
rodrigob-doppia-ba2f818c30cb\data\sample_test_images\bahnhof\image_00000000_1.png (449272, 2013-11-23)
rodrigob-doppia-ba2f818c30cb\data\sample_test_images\bahnhof\image_00000001_0.png (450031, 2013-11-23)
rodrigob-doppia-ba2f818c30cb\data\sample_test_images\bahnhof\image_00000001_1.png (450001, 2013-11-23)
rodrigob-doppia-ba2f818c30cb\data\sample_test_images\bahnhof\image_00000002_0.png (449723, 2013-11-23)
rodrigob-doppia-ba2f818c30cb\data\sample_test_images\bahnhof\image_00000002_1.png (449925, 2013-11-23)
rodrigob-doppia-ba2f818c30cb\data\sample_test_images\bahnhof\image_00000003_0.png (449584, 2013-11-23)
rodrigob-doppia-ba2f818c30cb\data\sample_test_images\bahnhof\image_00000003_1.png (449038, 2013-11-23)
rodrigob-doppia-ba2f818c30cb\data\sample_test_images\bahnhof\image_00000004_0.png (449394, 2013-11-23)
rodrigob-doppia-ba2f818c30cb\data\sample_test_images\bahnhof\image_00000004_1.png (449347, 2013-11-23)
rodrigob-doppia-ba2f818c30cb\data\sample_test_images\bahnhof\image_00000005_0.png (449090, 2013-11-23)
rodrigob-doppia-ba2f818c30cb\data\sample_test_images\bahnhof\image_00000005_1.png (449212, 2013-11-23)
rodrigob-doppia-ba2f818c30cb\data\sample_test_images\bahnhof\image_00000006_0.png (448352, 2013-11-23)
rodrigob-doppia-ba2f818c30cb\data\sample_test_images\bahnhof\image_00000006_1.png (448876, 2013-11-23)
rodrigob-doppia-ba2f818c30cb\data\sample_test_images\bahnhof\image_00000007_0.png (448908, 2013-11-23)
rodrigob-doppia-ba2f818c30cb\data\sample_test_images\bahnhof\image_00000007_1.png (449514, 2013-11-23)
rodrigob-doppia-ba2f818c30cb\data\sample_test_images\bahnhof\image_00000008_0.png (447952, 2013-11-23)
rodrigob-doppia-ba2f818c30cb\data\sample_test_images\bahnhof\image_00000008_1.png (449101, 2013-11-23)
rodrigob-doppia-ba2f818c30cb\data\sample_test_images\bahnhof\image_00000009_0.png (447210, 2013-11-23)
rodrigob-doppia-ba2f818c30cb\data\sample_test_images\bahnhof\image_00000009_1.png (448087, 2013-11-23)
rodrigob-doppia-ba2f818c30cb\data\sample_test_images\bahnhof\image_00000010_0.png (446010, 2013-11-23)
rodrigob-doppia-ba2f818c30cb\data\sample_test_images\bahnhof\image_00000010_1.png (447230, 2013-11-23)
rodrigob-doppia-ba2f818c30cb\data\sample_test_images\inria\crop_000012.png (540374, 2013-11-23)
rodrigob-doppia-ba2f818c30cb\data\sample_test_images\inria\person_009.png (538372, 2013-11-23)
rodrigob-doppia-ba2f818c30cb\data\sample_test_images\inria\person_062.png (539955, 2013-11-23)
rodrigob-doppia-ba2f818c30cb\data\sample_test_images\inria\person_and_bike_027.png (537824, 2013-11-23)
rodrigob-doppia-ba2f818c30cb\data\sample_test_images\inria\person_and_bike_181.png (536613, 2013-11-23)
rodrigob-doppia-ba2f818c30cb\data\trained_models\2012_04_04_1417_trained_model_multiscales_synthetic_softcascade.proto.bin (1215770, 2013-11-23)
rodrigob-doppia-ba2f818c30cb\data\trained_models\2012_04_04_2281_trained_model_octave_0_synthetic_cascade.proto.bin.bootstrap2 (242098, 2013-11-23)
rodrigob-doppia-ba2f818c30cb\generate_protocol_buffer_files.sh (567, 2013-11-23)
rodrigob-doppia-ba2f818c30cb\libs\CImg\CHANGES.txt (72098, 2013-11-23)
rodrigob-doppia-ba2f818c30cb\libs\CImg\CImg.h (1900139, 2013-11-23)
rodrigob-doppia-ba2f818c30cb\libs\CImg\examples\CImg_demo.cpp (78272, 2013-11-23)
rodrigob-doppia-ba2f818c30cb\libs\CImg\examples\captcha.cpp (9207, 2013-11-23)
rodrigob-doppia-ba2f818c30cb\libs\CImg\examples\check_all_functions.cpp (3959, 2013-11-23)
rodrigob-doppia-ba2f818c30cb\libs\CImg\examples\curve_editor.cpp (12837, 2013-11-23)
rodrigob-doppia-ba2f818c30cb\libs\CImg\examples\dtmri_view.cpp (24823, 2013-11-23)
... ...

     ____   __   ____  ____  __   __
    (    \ /  \ (  _ \(  _ \(  ) / _\
     ) D ((  O ) ) __/ ) __/  )( /    \
    (____/ \__/ (__)  (__)  (__) \_/\_/

What is this ?
==============

The doppia repository contains the open source release of the following research publications:

* Pedestrian detection at 100 frames per second, R. Benenson, M. Mathias, R. Timofte, L. Van Gool; presented at CVPR 2012.
* Stixels estimation without depth map computation, R. Benenson, R. Timofte, L. Van Gool; presented at ICCV 2011, CVVT workshop.
* Fast stixels estimation for fast pedestrian detection, R. Benenson, M. Mathias, R. Timofte, L. Van Gool; presented at ECCV 2012, CVVT workshop.

The code is released under a _research only_ license (see license.text). The license is _not_ [OSI compatible](http://opensource.org/docs/osd) (due, at least, to term 6, since we do discriminate against "fields of endeavor").

The main authors of this work are:

* [Rodrigo Benenson](http://rodrigob.github.com) (code architecture, test time code)
* Markus Mathias (training time code)
* [Radu Timofte](http://homes.esat.kuleuven.be/~rtimofte) (contributed to the stixels estimation code)

and the code is owned by the [KU Leuven university](https://securewww.esat.kuleuven.be/psi/visics).

This C++ and CUDA code base provides the means to compute stixel estimates from stereo images, detect pedestrians in single images, and train the detector for new classes.

![Example result](http://rodrigob.github.com/figures/2012_eccv_cvvt_workshop_example_system_results_small.png "Example result")

Introduction
============

Before you start looking into the code, a few warnings need to be given:

1. The doppia open source release is code that has been _extracted_ from a much larger code base (which includes more stereo methods, optical flow, visual odometry and many other things). Because of this:
    * There is a chance that some things may break, since (more) bugs may have been introduced while extracting the code.
    * If you wonder "why is this method templated?" or "why is there an abstract class for something that has only one child?", the answer is always "because the original code base has many variants of this method".
2. The name `doppia` hints at "double". Doppia is a library for _stereo image_ processing; the assumed default input is stereo images. Monocular image input is a special case. This is visible in the code.
3. In case it is not already clear, this is **research quality code**. The code has been developed over a three-year span, mainly with a focus on easy exploration of multiple variants and some concern for computational efficiency. Readability and compactness of the code were secondary considerations. We release the code in the hope that it will enable other researchers to push forward the frontiers of human knowledge.
4. We did our best to keep a consistent code style everywhere. The main exception is the folder `doppia/src/applications/boosted_learning`, which has a looser style since it contains code imported from a previously separate code base.

Requirements
============

* Linux (in theory the code can compile and run on Windows, but practice has shown this to be a bad idea).
* Properly set up C++ and CUDA compilation environments. Only gcc 4.5 or later [is supported](https://bitbucket.org/rodrigob/doppia/issue/2/stixel_world-building-fix-of-building#comment-1842139).
* A GPU with CUDA capability 2.0 or higher (only for the objects detection code; the stixels code is CPU only), and ~200 MB of free memory (for images of ***0x480 pixels).
* All boost libraries.
* Google protocol buffers.
* OpenCv installed (2.4, but the code also works with older versions). If speed is your concern, I strongly recommend compiling OpenCv on your machine with CUDA enabled, all relevant SIMD instructions enabled, and the `-ffast-math -funroll-loops -march=native` flags.
* libjpeg, libpng.
* libSDL.
* CMake (and knowledge of how to use it).
* A fair amount of patience to get things running.

How to compile the code ?
==========================

The folder `/data` contains data that is useful for some mini-tests of the code.
The folder `/tools` contains many (python) scripts used to manipulate data, prepare datasets, generate plots and transform data.
The folder `/src` is where all the juicy pieces are; each subfolder groups code by topic.
The folder `/src/applications` is where all the code that compiles into executables is located, and it is where you will focus your attention at this stage.

This code has been compiled on multiple machines multiple times, and has run on Ubuntu and Fedora distributions. To the best of our knowledge things should compile and run smoothly; but experience tells that you may encounter some pain here. Use your experience to understand and figure out what the problem could be. If you think there is something intrinsically wrong in our code, please file a bug report (see the "FAQ" section).

Step 1: where is what ?
------------------------

Before trying to compile anything you must edit the file `common_settings.cmake` to add the configuration specific to your own machine.

Step 2: compile CPU only code
------------------------------

I suggest you first try to compile `doppia/src/applications/ground_estimation`. This is the simplest program and should make sure that your cmake + C++ pipeline is working fine.

1. To compile any program in doppia simply go to the corresponding application directory, `cd doppia/src/applications/ground_estimation`
2. Run `cmake . && make`. If you know what you are doing you can run `ccmake .` first and set things up properly. If you know that you have enough memory you can run `cmake . && make -j5` (or `-j10`) to make things go faster.
3. If things went well you should be able to run `cmake . && make -j2 && ./ground_estimation -c test.config.ini` and see fireworks !
4. For more information on how to use the application(s), read the section "How to use the code ?".

If `ground_estimation` compiled fine, then you should be able to use the same steps to compile `stixel_world`.

    cd doppia/src/applications/stixel_world
    cmake . && make && cmake . && make -j2 && OMP_NUM_THREADS=4 ./stixel_world -c fast.config.ini --gui.disable false
    cmake . && make && cmake . && make -j2 && OMP_NUM_THREADS=4 ./stixel_world -c fast_uv.config.ini --gui.disable false

Step 3: compile test time code
-------------------------------

In step 2 all compiled code was C++ only; now we will include CUDA code. Having properly done step 1 is critical.

1. `cd doppia/src/applications/objects_detection`
2. `cmake . && make -j2`
3. `cmake . && make -j2 && OMP_NUM_THREADS=4 ./objects_detection -c very_fast_over_bahnhof_cvpr2012.config.ini --gui.disable false`

When starting, ./objects_detection may spend a few seconds (~10 seconds) precomputing before launching. If things go well you should see a (very brief) video of stixels being estimated and pedestrians being detected.

You can also run

    cmake . && make -j2 && OMP_NUM_THREADS=4 ./objects_detection -c inria_pedestrians_cvpr2012.config.ini --gui.disable false

and

    cmake . && make -j2 && OMP_NUM_THREADS=4 ./objects_detection -c chnftrs_over_bahnhof.config.ini --gui.disable false

for more fun. Consult the section "How to use the code ?" for more information. If you want to use our objects detector as part of a larger system, you may be interested in also consulting "What is `objects_detection_lib` ?".
Step 4: (optional) compile objects_detection_lib
-------------------------------------------------

Same as steps 2 and 3 (if more details are needed, file a bug report; see the FAQ below).

Step 5: (optional) compile training time code
----------------------------------------------

Same as steps 2 and 3 (if more details are needed, file a bug report; see the FAQ below).

How to use the code ?
=====================

All executables inside doppia have the following behaviour:

* `./the_application --help` will list all the command line options with a short description for each one. Since the applications contain multiple algorithms and each algorithm has multiple options, in practice an application has dozens or even hundreds of options. Most options have a sane default value.
* Since there are many options, all applications have a `--configuration_file` option, which can also be called using `-c`. We have provided example configuration files (with the extension `*.config.ini`) as starting points (see the previous section for the full command lines).
* All options listed via `--help` can be set either via the command line or via the configuration file. If an option is present both in the configuration file and on the command line, the command line takes precedence. This feature is very useful for experimentation, where you can have a "base configuration" and then test different options on top of it. A typical example of this is `--gui.disable true` / `--gui.disable false` to disable or enable the user interface.

`stixel_world`
--------------

This application computes the ground plane estimate and the stixel distance. The user interface has different "modes" that can be switched by pressing the number keys on the keyboard. The configuration files provided should give a good idea of the relevant options (look for the options that are not commented out).

`objects_detection`
--------------------

This application detects objects. Again, the configuration files provided should give a good idea of the relevant options (look for the options that are not commented out).

The option `save_detections` will save files in the [protocol buffer format](https://developers.google.com/protocol-buffers/docs/overview) specified by `doppia/src/objects_detection/detections.proto`. Protocol buffer files can be read and manipulated from most programming languages. You can use scripts such as `doppia/tools/objects_detection/detections_to_caltech.py` to convert the detections into your format of predilection.
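As a rough illustration (not part of the original tools), the sketch below parses such a file from Python, assuming that Python bindings have been generated from `detections.proto` (e.g. with `protoc --python_out`, see `generate_protocol_buffer_files.sh`). The message and field names used here (`Detections`, `detections`, `score`, `bounding_box`, `min_corner`, `max_corner`) are assumptions and must be checked against `detections.proto`; likewise, if `save_detections` writes a sequence of messages rather than a single serialized message, the reading code needs to be adapted. The scripts under `doppia/tools/objects_detection/` are the authoritative reference.

    # Hypothetical sketch: inspect a file written via `save_detections`.
    # Assumes detections_pb2 was generated from
    # doppia/src/objects_detection/detections.proto;
    # the message and field names are assumptions, check the .proto file.
    import sys
    import detections_pb2

    def print_detections(file_path):
        detections = detections_pb2.Detections()
        with open(file_path, "rb") as f:
            detections.ParseFromString(f.read())
        for detection in detections.detections:   # assumed repeated field
            box = detection.bounding_box           # assumed field name
            print(detection.score,
                  box.min_corner.x, box.min_corner.y,
                  box.max_corner.x, box.max_corner.y)

    if __name__ == "__main__":
        print_detections(sys.argv[1])

Run it as `python print_detections.py your_detections_file` after generating the bindings; it simply prints one score and bounding box per detected object.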
Possibly the most relevant option is `objects_detector.method`. We provide CPU and GPU implementations of the following papers:

* P. Dollar et al. 2009, Integral Channel Features
* P. Dollar et al. 2010, The Fastest Pedestrian Detector in the West
* R. Benenson et al. 2012, Pedestrian detection at 100 frames per second

Please notice that the CPU versions are **not** identical to the GPU versions. The feature computations give slightly different results, which in turn create different performance. Currently, only the GPU version has fine-tuned feature computation. Also, we use the GPU versions more than the CPU versions, so there is a higher chance that the CPU versions are buggy.

When `objects_detector.method = cpu_channels` or `objects_detector.method = gpu_channels`, the actual method used depends on the input model. If the input is a single-scale model (such as 2012_*_trained_model_octave_0_*.proto.bin) then P. Dollar 2009 will be used. If the input is a multi-scale model, then the image features will only be computed at the required scales and the multiple models are used (see plot 3.a of Benenson et al. 2012).

When using the detector(s) on new datasets, properly setting `objects_detector.min_scale/.max_scale/.num_scales` is critical. We also recommend trying `objects_detector.ignore_soft_cascade = false`, since the soft cascade was adjusted to keep the performance on the INRIA dataset and may be sub-optimal on other datasets (without the soft cascade the detector runs ~20x slower).

For more details on the content of the trained models, please consult `doppia/src/objects_detection/detector_model.proto`; notice that not all of the options described in that file are currently handled.

When `gui.disable = false`, the user interface will present images such as the one above. The more colorful a detection window, the higher the confidence of the detection; conversely, the darker the window, the lower the detection score (the closer to `objects_detector.score_threshold`).

`boosted_learning`
--------------------

TO BE DONE. For more details see also the section "How to train the detector for a new class ?".

What is `objects_detection_lib` ?
=================================

If you simply want to detect pedestrians in a stereo or monocular dataset, we recommend using the `objects_detection` application and its output protocol buffer file. On the other hand, if your application requires detections "on the fly" or a tight integration of the detector, then `objects_detection_lib` could be for you.

Look at `doppia/src/applications/objects_detection_lib/objects_detection_lib.hpp` to see how simple the interface is, and at `TestObjectsDetectionApplication` for an example usage.

Please note that, to keep a simple interface, `objects_detection_lib` uses singleton objects. Because of this, _only a single instance can be used_; trying to run two instances in different threads will possibly have catastrophic consequences. For a deeper integration you will have to use the actual classes used to implement `objects_detection_lib` (see `objects_detection_lib.cpp` and `doppia/src/applications/objects_detection/ObjectsDetectionApplication.cpp`).

How to train the detector for a new class ?
===========================================

TO BE DONE.

(If you have compiled everything, looked at the code and the configuration files, read the papers, and still have some questions, please contact the authors; we will then update this readme.)

FAQ
============

I am using this code for my new publication, how should I cite it ?
-------------------------------------------------------------------

Please cite our CVPR 2012 paper:

    @inproceedings{Benenson2012Cvpr,
      author = {R. Benenson and M. Mathias and R. Timofte and L. {Van Gool}},
      title = {Pedestrian detection at 100 frames per second},
      booktitle = {CVPR},
      year = {2012}
    }

I found a bug, what should I do ?
---------------------------------

Please file a bug report in the [bitbucket repository](https://bitbucket.org/rodrigob/doppia). I cannot commit to answering or fixing all bugs, but we will do our best. Before filing a bug report, make sure you have tested the latest version of the source code available in the [bitbucket repository](https://bitbucket.org/rodrigob/doppia).

"atomicAdd" is undefined
------------------------

When running `cmake . && make` to compile `boosted_learning` you may encounter the following error:

    error: identifier "atomicAdd" is undefined

Then run `cmake . && make` again (sorry, I could not figure out the proper cmake fix, so I used this "run it twice" workaround).

opencv2/gpu/gpu.hpp not found
-----------------------------

OpenCv 2.4.2 defines `cv::gpu::CudaMem` but by default does not install the corresponding header files (OpenCv 2.3 did). Copy `/OpenCV-2.4.2/modules/gpu/include/opencv2/gpu/gpu.hpp` into `your_system_include/opencv2/gpu/gpu.hpp` and things will go better.

What is this nvcc-4.4.sh file? Where do I find it?
--------------------------------------------------

`nvcc-4.4.sh` is just a small workaround we use to make cuda work when you have a "too new" gcc installed. Its content is

    #!/bin/bash
    /usr/local/cuda/bin/nvcc --compiler-bindir /home/rodrigob/code/references/cuda/gcc-4.4 $@

and the gcc-4.4 folder contains

    $ ls -lh /home/rodrigob/code/references/cuda/gcc-4.4
    total 4.0K
    lrwxrwxrwx 1 rodrigob rodrigob  16 2011-08-11 15:42 g++ -> /usr/bin/g++-4.4
    lrwxrwxrwx 1 rodrigob rodrigob  16 2011-08-11 15:42 gcc -> /usr/bin/gcc-4.4
    -rwxr-xr-x 1 rodrigob rodrigob 102 2011-11-17 19:26 nvcc-4.4.sh

Using this, `nvcc` knows it should use gcc-4.4 to compile (instead of the system default, more up-to-date gcc, with which cuda 4 is not compatible). I think cuda 5 fixes this issue (but I am not sure).
