  • Uploader: masterqkkx
  • Development tool: Python
  • File size: 5MB
  • File format: zip
  • Favorites: 0
  • Download cost: 1 point
  • Downloads: 4
  • Upload date: 2020-10-27 22:51
Source code accompanying the Nature Machine Intelligence 2020 paper *Neural circuit policies enabling auditable autonomy*.
keras-ncp-1.0.0.zip
  • mlech26l-keras-ncp-2ebcddb/
    • reproducibility/
      • README.md (13.8KB)
    • misc/
      • ncp_cnn.png (33.7KB)
      • sine.webp (4.9MB)
      • wirings.png (40.5KB)
    • kerasncp/
      • datasets/
        • icra2020_lidar_collision_avoidance.py (2.7KB)
        • __init__.py (671B)
      • wirings/
        • wirings.py (13.2KB)
        • __init__.py (679B)
      • ltc_cell.py (14.8KB)
      • utils.py (1.3KB)
      • __init__.py (778B)
    • LICENSE (11.1KB)
    • setup.py (1.7KB)
    • README.md (5KB)
    • .gitignore (1.8KB)
Description
# Reproducibility materials for the paper *Neural Circuit Policies Enabling Auditable Autonomy*

This page documents the code, materials, and data used to perform the experiments reported in the paper. Note that the Python code for training and evaluating the models was written over a period of more than two years, and many changes were made during that time. As a result, there are a few caveats with the code:

- The code contains a lot of *legacy code* that is no longer used (e.g., some parts of the data augmentation).
- The code is very sparsely documented.
- The code is written in TensorFlow 1.x (tested with 1.14).

For a polished, much more user-friendly TensorFlow 2.x reference implementation of NCPs, we refer to [the main project page](https://github.com/mlech26l/keras-ncp/).

## Introduction

The following describes the ```ncp_lab_notebook.tar.gz``` archive that was submitted alongside the paper for peer review. To ensure that we do not tamper with the archive after publication, the SHA-256 sum of the file is ```0f22b3d7b2986e343e1d3012f51fdf09f876cde2f1f0a6fdb850acded4fc387e```. As the full archive is around 200 GB in size, we are unable to publicly host all parts. For inquiries requesting the full archive, please send an [email](mathias.lechner@ist.ac.at).

We are able to publicly host the complete Python training code and Matlab analysis code [here (87 MB)](https://seafile.ist.ac.at/f/ca20fdb80a7d44af9817/?dl=1). Moreover, we are able to publicly host the same code together with the data generated by the active test runs that were analyzed by our Matlab scripts [here (1.5 GB)](https://seafile.ist.ac.at/f/f8faadac60794200a0ae/?dl=1). In particular, neither of the two smaller archives linked above includes the training data (passive and active) or the rosbag recordings from the real car.

Note that the Apache License 2.0 of this repository does not apply to the reproducibility materials downloadable via the links above. The copyright of the reproducibility materials belongs to the authors of the paper.

## Archive structure

Generally, this archive contains the materials to do the following three things:

1. Train various models on the *passive dataset*
2. Train various models on the *active dataset*
3. Analyze the logs of the control of the car by the active models

Not included in the archive is the code stack that runs on the car, which is used to collect the training data and to deploy the models for controlling the real car.

## System description and external libraries used

All models were trained on Ubuntu 16.04 machines using Python 3.5 with TensorFlow 1.14.0. Data analysis was performed using Python 2.7, Python 3.6, and Matlab 2019a.
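Since the ```keras-ncp-1.0.0.zip``` archive described on this page also ships the ```kerasncp``` package itself (see the file listing above), the sketch below shows how an NCP model is typically assembled with it. It is a minimal sketch assuming the 1.0-era API (```kncp.wirings.NCP``` and a top-level ```kncp.LTCCell```, matching ```kerasncp/ltc_cell.py``` in the listing); the wiring sizes and the input shape are arbitrary example values, not taken from the paper's experiments.

```python
# Minimal sketch: build a small NCP model with the kerasncp package.
# Wiring sizes and input shape are arbitrary example values.
import kerasncp as kncp
from tensorflow import keras

# Design the NCP wiring: sensory -> inter -> command -> motor neurons.
wiring = kncp.wirings.NCP(
    inter_neurons=12,               # number of inter neurons
    command_neurons=8,              # number of command neurons
    motor_neurons=1,                # number of motor (output) neurons
    sensory_fanout=4,               # outgoing synapses per sensory neuron
    inter_fanout=4,                 # outgoing synapses per inter neuron
    recurrent_command_synapses=4,   # recurrent synapses inside the command layer
    motor_fanin=6,                  # incoming synapses per motor neuron
)
ltc_cell = kncp.LTCCell(wiring)     # liquid time-constant cell (ltc_cell.py)

# Wrap the cell in a standard Keras RNN layer and train as usual.
model = keras.models.Sequential([
    keras.layers.InputLayer(input_shape=(None, 2)),   # (time, features)
    keras.layers.RNN(ltc_cell, return_sequences=True),
])
model.compile(optimizer=keras.optimizers.Adam(0.01), loss="mean_squared_error")
```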
## Directory structure description

This archive is composed of the following sub-directories:

- ```training_scripts```: Contains the code to train the passive and active models
- ```active_test_analysis```: Contains the code to analyze the logs produced by testing the models on the active steering test
- ```pretrained_active_models```: Pretrained weights of the models tested on the active steering test
- ```pretrained_passive_models```: Pretrained weights of the models tested on the passive evaluation
- ```training_data_active```: Training data for the active steering test
- ```training_data_passive```: Training data for the passive steering test
- ```active_test_recordings```: Logs produced by testing the models on the active steering test
- ```Lipschitz_analysis```: Code to reproduce the smoothness analysis of the RNN dynamics in the active steering test
- ```Neural_activity_analysis```: Code to project the neural activity of different RNNs onto the road during the active steering test
- ```PCA_analysis```: Code to compute the principal component analysis of the RNNs' internal dynamics
- ```SSIM_Crash_analysis```: Code to compute the structural similarity index of saliency maps as the input noise variance increases
- ```analysis_data```: Data for the Lipschitz, neural activity, PCA, and SSIM analyses
- ```saliency_widget```: HTML visualization to inspect the attention maps of all active test recordings

## Auditing the training data

Ideally, we would have exactly one dataset that we could use both for the passive evaluation and for training our models for the active test. However, in our scenario this is not possible, as the roads at our private active test track come from a different probability distribution than the streets observed on public roads. In particular, the roads at our private test track are narrower than standard public roads and lack any lane markers. Consequently, no model was observed to generalize well from training on data from public roads to an evaluation on the private test track.

As a result, we collected a separate training set by recording a human driver navigating the private test track. We train all models for the active test only on the data from the private test track. Our rationale behind this choice is that we have plenty of passive data (from public roads), whereas the private test track is limited in diversity and size. Therefore, training a model on both the data obtained on public roads and the data obtained on the private test track would create an imbalance towards an excess of public-road training data.

All training samples need to be cropped and rescaled before feeding them into any neural network. The data itself is located in the directories ```training_data_active``` and ```training_data_passive```, respectively, in the form of h5-files.

## Auditing the training pipeline

Generally, we want to share as much code as possible between training the active and the passive models in order to have the same conditions in both scenarios. However, due to the difference in objectives, some files in the training pipelines are not shared. In particular, for the passive evaluation we perform a 10-fold cross-testing evaluation, whereas in the active evaluation we have a single training set and test the model on the real car. Consequently, the data loading scripts and the logging scripts are the two files that are not shared between the passive and the active training pipeline.
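The data loading scripts just mentioned consume the h5-files described under *Auditing the training data*. Purely as an illustration, a minimal sketch of what loading and preprocessing one such file could look like is given below; the dataset keys (```"image"```, ```"steering"```), the crop offset, and the target resolution are hypothetical and not taken from the archive.

```python
# Hypothetical sketch of loading and preprocessing one training h5-file.
# The keys ("image", "steering"), crop offset, and target size are made up
# for illustration and are NOT the actual format used in the archive.
import h5py
import numpy as np
from PIL import Image

def load_and_preprocess(h5_path, crop_top=80, target_size=(200, 78)):
    with h5py.File(h5_path, "r") as f:
        frames = np.array(f["image"])     # hypothetical key: raw camera frames
        labels = np.array(f["steering"])  # hypothetical key: steering commands
    processed = []
    for frame in frames:
        frame = frame[crop_top:]          # crop away the sky / hood region
        frame = np.array(Image.fromarray(frame).resize(target_size))
        processed.append(frame.astype(np.float32) / 255.0)  # rescale to [0, 1]
    return np.stack(processed), labels
```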
The logging code is interleaved with the main training scripts. Therefore, we have the following partitioning:

### Active test only files

- ```active_data_provider.py```: Loads the training data for the active test
- ```train_active_test.py```: Main file to train models for the active test

### Passive test only files

- ```passive_test_data_provider.py```: Loads the training data for the passive test and performs the splitting for the cross-testing
- ```run_passive_test.py```: Main file to train models for the passive test

### Files shared between the training pipelines of the passive and the active test

- ```augmentation_utils.py```: Code to perform the shadow augmentation and the sample weighting
- ```perspective_transformation.py```: Code to crop and adjust the input images before they are processed by the models
- ```convolution_head.py```: Implementation of the convolutional layers that precede each RNN model
- ```wormflow3.py```: Implementation of the NCP model
- ```models/cnn_model.py```: Implementation of the feedforward convolutional neural network model used as a baseline
- ```models/rnn_models.py```: Implementation of the Vanilla RNN, CT-RNN, GRU, and CT-GRU
- ```models/e2e_lstm.py```: Implementation of the LSTM, plus a wrapper to make it compatible with the training pipeline
- ```models/e2e_worm_pilot.py```: Wrapper that makes the NCP model compatible with the training pipeline
- ```models/e2e_rnn.py```: Wrapper that makes various RNNs compatible with the training pipeline

## Auditing the model implementation

The NCP model, including the ODE solver and architectural design, is implemented in the file ```wormflow3.py```. Furthermore, this module contains methods to exp
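For readers auditing this module, the numpy sketch below illustrates the kind of fused semi-implicit Euler step commonly described for integrating liquid time-constant (LTC) neuron states. It is an assumption about what such an ODE solver computes, not a mirror of ```wormflow3.py```; variable names, shapes, and the exact synapse model are illustrative only.

```python
# Sketch of one fused semi-implicit Euler step for LTC-style neuron dynamics.
# Names and shapes are illustrative; this is not the wormflow3.py API.
import numpy as np

def fused_ltc_step(v, tau, w, mu, gamma, E, dt):
    """Advance all neuron potentials `v` by one time step `dt`.

    v:         (N,)   current neuron potentials
    tau:       (N,)   leak time constants
    w:         (N, N) synaptic weights, w[i, j] = synapse from neuron j to i
    mu, gamma: (N, N) offset / steepness of the sigmoidal synaptic gate
    E:         (N, N) synaptic reversal potentials
    """
    # Sigmoidal gate on each presynaptic potential, scaled by the synapse weight.
    s = w / (1.0 + np.exp(-gamma * (v[None, :] - mu)))   # shape (N, N)
    # Fused (semi-implicit) update: explicit in the gates, implicit in v.
    numerator = v + dt * np.sum(s * E, axis=1)
    denominator = 1.0 + dt * (1.0 / tau + np.sum(s, axis=1))
    return numerator / denominator
```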
    Related recommendations
    • matlabcnhelp.rar
      The Chinese-language MATLAB help files, which are hard to find; quick download.
    • MobilePolice.rar
      Mobile police: source code for a license plate recognition and localization system, already deployed in a mobile in-vehicle inspection system.
    • SVM(matlab).rar
      Source code of a classification algorithm implemented with support vector machines (SVM) [MATLAB].
    • svm.zip
      SVM source program written in MATLAB; implements support vector machines for feature classification or extraction.
    • Classification-MatLab-Toolbox.rar
      A MATLAB pattern recognition toolbox, including SVM, ICA, PCA, NN, and other pattern recognition algorithms; a valuable reference.
    • VC++人脸定位实例.rar
      A classic face recognition algorithm example, providing concrete algorithms for locating facial features and two implementation workflows.
    • QPSK_Simulink.rar
      A MATLAB/Simulink modulation and demodulation simulation system for QPSK; provides the eye diagram of the received signal and the simulated bit error rate, and includes key modules such as carrier recovery, matched filtering, and timing recovery to help in understanding a QPSK system.
    • LPRBPDemo2009KV.rar
      License plate recognition based on a neural network algorithm; recognition rate up to 95%, recognition time under 80 ms.
    • MODULATION.RAR
      This source code package provides MATLAB implementations of modulation and demodulation schemes used in communication systems, including BPSK, QPSK, OQPSK, MSK, MSK2, GMSK, QAM, and QAM16, as well as their communication-system implementations and bit error rate performance over AWGN and Rayleigh channels.
    • algorithms.rar
      Papers on ten classic algorithms, including genetic algorithms, simulated annealing, Monte Carlo methods, and more; very helpful for beginners!