NeRFool

Category: Pattern recognition (vision/speech, etc.)
Development tool: Python
File size: 0KB
Downloads: 0
Upload date: 2023-09-19 08:49:35
Uploader: sh-1993
Description: [ICML 2023] "NeRFool: Uncovering the Vulnerability of Generalizable Neural Radiance Fields against Adversarial Perturbations" by Yonggan Fu, Ye Yuan, Souvik Kundu, Shang Wu, Shunyao Zhang, Yingyan (Celine) Lin

File list:
LICENSE (1067, 2023-09-19)
config.py (13357, 2023-09-19)
configs/ (0, 2023-09-19)
configs/eval_deepvoxels.txt (438, 2023-09-19)
configs/eval_llff.txt (391, 2023-09-19)
configs/eval_nerf_synthetic.txt (446, 2023-09-19)
configs/finetune_llff.txt (728, 2023-09-19)
configs/pretrain.txt (685, 2023-09-19)
configs/pretrain_dp.txt (686, 2023-09-19)
data/ (0, 2023-09-19)
data/download_eval_data.sh (773, 2023-09-19)
env.yml (2370, 2023-09-19)
eval/ (0, 2023-09-19)
eval/__init__.py (0, 2023-09-19)
eval/__pycache__/ (0, 2023-09-19)
eval/__pycache__/geo_interp.cpython-37.pyc (1379, 2023-09-19)
eval/__pycache__/pc_grad.cpython-37.pyc (3720, 2023-09-19)
eval/eval.py (11960, 2023-09-19)
eval/eval_adv.py (51009, 2023-09-19)
eval/eval_deepvoxels.sh (429, 2023-09-19)
eval/eval_llff_all.sh (786, 2023-09-19)
eval/eval_nerf_synthetic_all.sh (864, 2023-09-19)
eval/finetune_llff.sh (1232, 2023-09-19)
eval/geo_interp.py (1331, 2023-09-19)
eval/lpips_tensorflow/ (0, 2023-09-19)
eval/lpips_tensorflow/LICENSE (1315, 2023-09-19)
eval/lpips_tensorflow/__pycache__/ (0, 2023-09-19)
eval/lpips_tensorflow/__pycache__/lpips_tf.cpython-37.pyc (3011, 2023-09-19)
eval/lpips_tensorflow/export_to_tensorflow.py (2564, 2023-09-19)
eval/lpips_tensorflow/lpips_tf.py (3338, 2023-09-19)
eval/lpips_tensorflow/requirements-dev.txt (45, 2023-09-19)
eval/lpips_tensorflow/requirements.txt (10, 2023-09-19)
eval/lpips_tensorflow/setup.py (307, 2023-09-19)
eval/lpips_tensorflow/test_network.py (1252, 2023-09-19)
eval/pc_grad.py (10645, 2023-09-19)
eval/render_llff.sh (1054, 2023-09-19)
... ...

# NeRFool: Uncovering the Vulnerability of Generalizable Neural Radiance Fields against Adversarial Perturbations

Yonggan Fu, Ye Yuan, Souvik Kundu, Shang Wu, Shunyao Zhang, Yingyan (Celine) Lin

Accepted at ICML 2023. [ [Paper](http://proceedings.mlr.press/v202/fu23g/fu23g.pdf) | [Video](https://www.youtube.com/watch?v=oC8Xi4cEGKw) | [Slide](https://drive.google.com/file/d/1PCDSLrnuf8CZ3VloqBGldq8mR2CWM22o/view?usp=drive_link) ]

## An Overview of NeRFool

- Generalizable Neural Radiance Fields (GNeRF) are among the most promising real-world solutions for novel view synthesis, thanks to their cross-scene generalization capability and thus the possibility of instant rendering on new scenes. While adversarial robustness is essential for real-world applications, little study has been devoted to understanding its implications for GNeRF. In this work, we present NeRFool, which to the best of our knowledge is the first work that sets out to understand the adversarial robustness of GNeRF. Specifically, NeRFool unveils the vulnerability patterns and important insights regarding GNeRF's adversarial robustness, and provides guidelines for defending against our proposed attacks.

## Citation

If you find our work interesting or helpful to your research, welcome to cite our paper:

```
@article{fu2023nerfool,
  title={NeRFool: Uncovering the Vulnerability of Generalizable Neural Radiance Fields against Adversarial Perturbations},
  author={Fu, Yonggan and Yuan, Ye and Kundu, Souvik and Wu, Shang and Zhang, Shunyao and Lin, Yingyan},
  journal={arXiv preprint arXiv:2306.06359},
  year={2023}
}
```

## Code Usage

### Prerequisites

* **Install the conda environment**:
  ```
  conda env create -f env.yml
  ```
* **Prepare the evaluation data**: The evaluation datasets, including LLFF, NeRF Synthetic, and DeepVoxels, are organized in the following structure:
  ```
  ├──data/
      ├──nerf_llff_data/
      ├──nerf_synthetic/
      ├──deepvoxels/
  ```
  They can be downloaded by running the following command under the `data/` directory:
  ```
  bash download_eval_data.sh
  ```
* **Prepare the pretrained model**: To evaluate the adversarial robustness of pretrained GNeRFs, you can download the official [IBRNet](https://github.com/googleinterns/IBRNet) model from [here](https://drive.google.com/uc?id=165Et85R8YnL-5NcehG0fzqsnAUN8uxUJ).
* **Update the paths** to the datasets and pretrained models in the configuration files `configs/eval_*`.

### Attacking GNeRFs using NeRFool

- Attack a specific view direction using **a view-specific attack scheme** on the LLFF dataset:
  ```
  CUDA_VISIBLE_DEVICES=0 python eval_adv.py --config ../configs/eval_llff.txt --expname test --num_source_views 4 --adv_iters 1000 --adv_lr 1 --epsilon 8 --use_adam --adam_lr 1e-3 --lr_gamma=1 --view_specific
  ```
- Generate **universal adversarial perturbations** across different views on the LLFF dataset:
  ```
  CUDA_VISIBLE_DEVICES=0 python eval_adv.py --config ../configs/eval_llff.txt --expname test --num_source_views 4 --adv_iters 1000 --adv_lr 1 --epsilon 8 --use_adam --adam_lr 1e-3 --lr_gamma=1
  ```

## Acknowledgement

This codebase is built on top of [IBRNet](https://github.com/googleinterns/IBRNet).
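To illustrate the general shape of the attack the commands above run, here is a minimal, hypothetical sketch of an epsilon-bounded (L-infinity) perturbation on the source views, optimized with Adam as the `--use_adam --adam_lr` flags suggest. It is not the repo's actual `eval_adv.py`; `render_fn` stands in for a differentiable GNeRF renderer, and the loss simply maximizes rendering error on a target view.

```python
import torch

def view_specific_attack(render_fn, src_views, target_img,
                         epsilon=8 / 255, iters=1000, adam_lr=1e-3):
    """Sketch of a view-specific attack: optimize an L-inf bounded
    perturbation `delta` on the source views so that the rendered
    target view degrades as much as possible.

    render_fn: hypothetical differentiable renderer mapping the
               (perturbed) source views to a rendered target view.
    """
    delta = torch.zeros_like(src_views, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=adam_lr)
    for _ in range(iters):
        rendered = render_fn(src_views + delta)
        # Negate MSE so that stepping "downhill" maximizes rendering error.
        loss = -torch.nn.functional.mse_loss(rendered, target_img)
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            # Project back into the L-inf ball of radius epsilon...
            delta.clamp_(-epsilon, epsilon)
            # ...and keep the perturbed views valid images in [0, 1].
            delta.copy_((src_views + delta).clamp(0, 1) - src_views)
    return delta.detach()
```

A universal perturbation (the second command, without `--view_specific`) would follow the same loop but accumulate the loss over many sampled target views so that a single `delta` transfers across view directions.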
