bnn-master

Category: Artificial Intelligence / Neural Networks / Deep Learning
Development tool: Python
File size: 18809KB
Downloads: 4
Upload date: 2018-09-26 18:41:34
Uploader: 123xy
Description: A highly optimized, lightweight deep learning inference framework, developed in C/C++. It is cross-platform, supports reading Caffe model files, and mainly targets convolutional neural networks. Unlike most mobile solutions on the market, our quantization and compression techniques apply not only to the model weights but also to the input feature vectors. This lets us shrink both the model file and the memory footprint while also heavily optimizing the framework's performance.

File list:
compare_label_dbs.py (1265, 2018-09-22)
counts_over_days.png (174953, 2018-09-22)
data.py (7657, 2018-09-22)
day_count_stats.py (1197, 2018-09-22)
dump_bee_crops.py (1832, 2018-09-22)
freeze_graph.sh (255, 2018-09-22)
generate_graph_pbtxt.py (1181, 2018-09-22)
label.201802_sample.db (254976, 2018-09-22)
label_db.py (3032, 2018-09-22)
label_ui.py (5810, 2018-09-22)
materialise_label_db.py (1483, 2018-09-22)
merge_dbs.py (1017, 2018-09-22)
model.py (5127, 2018-09-22)
parse_predict_out.py (22, 2018-09-22)
plot.R (566, 2018-09-22)
predict.py (3709, 2018-09-22)
predict_from_frozen.py (3299, 2018-09-22)
rasp_pi (0, 2018-09-22)
rasp_pi\capture.service (274, 2018-09-22)
rasp_pi\capture_and_send.sh (755, 2018-09-22)
rasp_pi\capture_stills.py (903, 2018-09-22)
resize.py (568, 2018-09-22)
reverse_optimise.py (3339, 2018-09-22)
rgb_labels_predictions.png (370212, 2018-09-22)
rotate_ccw.sh (133, 2018-09-22)
run_sample_training_pipeline.sh (1410, 2018-09-22)
sample.py (1718, 2018-09-22)
sample_data (0, 2018-09-22)
sample_data\labels.db (9216, 2018-09-22)
sample_data\test (0, 2018-09-22)
sample_data\test\20180206_133348.jpg (699478, 2018-09-22)
sample_data\test\20180207_115534.jpg (713542, 2018-09-22)
sample_data\test\20180213_084656.jpg (684229, 2018-09-22)
sample_data\test\20180213_122022.jpg (710139, 2018-09-22)
sample_data\test_002_001.png (1351798, 2018-09-22)
sample_data\training (0, 2018-09-22)
... ...

# BNN v2

Unet-style image translation from an image of a hive entrance to a bitmap of the locations of bee centers. Trained in a semi-supervised way on a desktop GPU and deployed to run in real time on the hive using either a [raspberry pi](https://www.raspberrypi.org/) with a [neural compute stick](https://developer.movidius.com/) or a [JeVois embedded smart camera](http://jevois.org/). See [this blog post](http://matpalm.com/blog/counting_bees/) for more info.

Here's an example of predicting bee positions on some held-out data. The majority of the training examples had ~10 bees per image.

![rgb_labels_predictions.png](rgb_labels_predictions.png)

The ability to locate each bee means you can summarise with a count. Note the spike around 4pm, when the bees at this time of year come back to base.

![counts_over_days.png](counts_over_days.png)

## usage

### gathering data

The `rasp_pi` subdirectory includes one method of collecting images on a raspberry pi.

### labelling

Start by using the `label_ui.py` tool to manually label some images and create a sqlite `label.db`. The following command starts the labelling tool for some already labelled (by me!) sample data provided with this repo.

```
./label_ui.py \
  --image-dir sample_data/training/ \
  --label-db sample_data/labels.db \
  --width 768 --height 1024
```

Hints:

* Left click to label the center of a bee.
* Right click to remove the closest label.
* Press up to toggle labels on/off; this can help in tricky cases.
* Use left/right to move between images. It's often helpful when labelling to quickly switch back and forth between images to help distinguish the background.
* Use whatever system your OS provides to zoom in; e.g. in ubuntu, super+up / down.

You can merge entries from `a.db` into `b.db` with `merge_dbs.py`.

```
./merge_dbs.py --from-db a.db --to-db b.db
```

### training

Before training we materialise a `label.db` (which is a database of x, y coordinates) into black and white bitmaps using `./materialise_label_db.py`.

```
./materialise_label_db.py \
  --label-db sample_data/labels.db \
  --directory sample_data/labels/ \
  --width 768 --height 1024
```

We can visualise the training data with `data.py`. This generates a number of `test*png` files with the input data on the left (with data augmentation) and the output labels on the right.

```
./data.py \
  --image-dir sample_data/training/ \
  --label-dir sample_data/labels/ \
  --width 768 --height 1024
```

![sample_data/test_002_001.png](sample_data/test_002_001.png)

Train with `train.py`. `--run` denotes the subdirectory for checkpoints and tensorboard logs; e.g. `--run r12` writes checkpoints under `ckpts/r12/` and logs under `tb/r12`. Use `--help` for the complete list of options, including model config, defining validation data, and stopping conditions. E.g. to train for a short time on `sample_data`, run the following. (For a more realistic result we'd want to train for many more steps on much more data.)

```
./train.py \
  --run r12 \
  --steps 300 \
  --train-steps 50 \
  --train-image-dir sample_data/training/ \
  --test-image-dir sample_data/test/ \
  --label-dir sample_data/labels/ \
  --width 768 --height 1024
```

Progress can be visualised with tensorboard (which serves at localhost:6006).

```
tensorboard --log-dir tb
```

### inference

Predictions can be run with `predict.py`. To specify the type of output, set one of the following:

* `--output-label-db` to create a label db; this can be merged with a human-labelled db using `./merge_dbs.py` for semi-supervised learning.
* `--export-pngs centroids` to export output bitmaps equivalent to those made by `./materialise_label_db.py`.
* `--export-pngs predictions` to export the explicit model output (i.e. before connected-components post-processing).

NOTE: given the above step only trains for a short period on a small dataset, we DON'T expect this to give a great result; these instructions are included more to prove the plumbing works.

```
./predict.py \
  --run r12 \
  --image-dir sample_data/unlabelled \
  --output-label-db sample_predictions.db \
  --export-pngs predictions
```

Output predictions can be compared to labelled data to calculate precision and recall. (We deem a detection correct if it is within a thresholded distance of a label.)

```
./compare_label_dbs.py --true-db ground_truth.db --predicted-db predictions.db
precision 0.936 recall 0.797 f1 0.861
```

### running on compute stick

(Note: this still doesn't work; possibly because of something in these steps, or possibly something about the tf api support of the stick. See [this forum post](https://ncsforum.movidius.com/discussion/692/incorrect-inference-results-from-a-minimal-tensorflow-model#latest) for more info.)

### some available datasets

* [Jonathan Byrne's](https://github.com/squeakus) [to-bee-or-not-to-bee](https://www.kaggle.com/jonathanbyrne/to-bee-or-not-to-bee) kaggle dataset
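To make the materialisation step above concrete, here is a minimal sketch of rendering a list of (x, y) labels into a black and white bitmap. This is illustrative only, not the code from `./materialise_label_db.py`; in particular the idea of painting a small square dot per label, and the `radius` parameter, are assumptions.

```python
def materialise_labels(coords, width, height, radius=1):
    """Render (x, y) label coordinates as white dots on a black bitmap.

    NOTE: a sketch, not the repo's materialise_label_db.py; the square
    dot of side (2*radius + 1) per label is an assumption.
    """
    bitmap = [[0] * width for _ in range(height)]  # all black
    for x, y in coords:
        # paint a small white square centred on the label,
        # clipped to the image bounds
        for dy in range(-radius, radius + 1):
            for dx in range(-radius, radius + 1):
                px, py = x + dx, y + dy
                if 0 <= px < width and 0 <= py < height:
                    bitmap[py][px] = 255
    return bitmap
```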
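The "connected components post-processing" mentioned under inference can be sketched as follows: binarise the model's per-pixel output, flood-fill each connected blob, and take each blob's centroid as one bee location. This is a hedged illustration of the technique, not the actual code in `predict.py`; the 0.5 threshold and 4-connectivity are assumptions.

```python
def centroids_from_prediction(pred, threshold=0.5):
    """Turn a 2D grid of per-pixel bee probabilities into (y, x) centroids.

    NOTE: a sketch of connected-components post-processing; the threshold
    and 4-connectivity are assumptions, not taken from predict.py.
    """
    h, w = len(pred), len(pred[0])
    seen = [[False] * w for _ in range(h)]
    centroids = []
    for y in range(h):
        for x in range(w):
            if pred[y][x] > threshold and not seen[y][x]:
                # flood fill one connected component
                stack, pixels = [(y, x)], []
                seen[y][x] = True
                while stack:
                    cy, cx = stack.pop()
                    pixels.append((cy, cx))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = cy + dy, cx + dx
                        if (0 <= ny < h and 0 <= nx < w
                                and pred[ny][nx] > threshold
                                and not seen[ny][nx]):
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                # the component's mean pixel position is its centroid
                cy = sum(p[0] for p in pixels) / len(pixels)
                cx = sum(p[1] for p in pixels) / len(pixels)
                centroids.append((cy, cx))
    return centroids
```

Since each centroid is one detected bee, `len(centroids)` is the per-image count used for the counts-over-days summary.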
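The precision/recall comparison done by `./compare_label_dbs.py` ("a detection is correct if it is within a thresholded distance of a label") can be sketched with a simple greedy matcher. This is an assumption about the matching strategy, not the repo's actual implementation, and the default threshold value is made up for illustration.

```python
import math

def precision_recall_f1(true_pts, pred_pts, threshold=10.0):
    """Greedily match predicted (x, y) points to ground-truth points.

    A prediction counts as a true positive if it lies within `threshold`
    pixels of a still-unmatched label. NOTE: a sketch; the greedy
    strategy and default threshold are assumptions.
    """
    unmatched = list(true_pts)
    tp = 0
    for px, py in pred_pts:
        best, best_d = None, threshold
        for t in unmatched:
            d = math.hypot(px - t[0], py - t[1])
            if d <= best_d:
                best, best_d = t, d
        if best is not None:
            unmatched.remove(best)  # each label matches at most once
            tp += 1
    precision = tp / len(pred_pts) if pred_pts else 0.0
    recall = tp / len(true_pts) if true_pts else 0.0
    denom = precision + recall
    f1 = 2 * precision * recall / denom if denom else 0.0
    return precision, recall, f1
```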
