MDTVSFA

Category: collect
Development tool: Python
File size: 0KB
Downloads: 1
Upload date: 2022-09-16 03:40:41
Uploader: sh-1993
Description: [official] Unified Quality Assessment of In-the-Wild Videos with Mixed Datasets Training (IJCV 2021)

File list (filename, size in bytes, last modified):
CNNfeatures.py (8967, 2022-09-15)
Framework.png (243695, 2022-09-15)
LICENSE (1092, 2022-09-15)
VQAdataset.py (4283, 2022-09-15)
VQAloss.py (2668, 2022-09-15)
VQAmodel.py (5138, 2022-09-15)
VQAperformance.py (1239, 2022-09-15)
cross_dataset_evaluation.py (7079, 2022-09-15)
cross_job.sh (13086, 2022-09-15)
data/ (0, 2022-09-15)
data/CVD2014info.mat (558304, 2022-09-15)
data/KoNViD-1kinfo.mat (3232785, 2022-09-15)
data/LIVE-Qualcomminfo.mat (499558, 2022-09-15)
data/LIVE-VQCinfo.mat (1469762, 2022-09-15)
data/data_info_maker.m (2066, 2022-09-15)
data/test.mp4 (816397, 2022-09-15)
job.sh (11869, 2022-09-15)
main.py (8893, 2022-09-15)
models/ (0, 2022-09-15)
models/MDTVSFA.pt (2163133, 2022-09-15)
requirements.txt (199, 2022-09-15)
test_demo.py (2963, 2022-09-15)

# Unified Quality Assessment of In-the-Wild Videos with Mixed Datasets Training

[![License](https://img.shields.io/github/license/mashape/apistatus.svg?maxAge=2592000)](License)

## Description

MDTVSFA code for the following paper:

- Dingquan Li, Tingting Jiang, and Ming Jiang. [Unified Quality Assessment of In-the-Wild Videos with Mixed Datasets Training](https://link.springer.com/article/10.1007%2Fs11263-020-01408-w). International Journal of Computer Vision (IJCV) Special Issue on Computer Vision in the Wild, 2021. [[arxiv version]](https://arxiv.org/abs/2011.04263)

![Framework](Framework.png)

## How to?

### Install Requirements

```bash
conda create -n reproducibleresearch pip python=3.6
source activate reproducibleresearch
pip install -r requirements.txt -i https://pypi.tuna.tsinghua.edu.cn/simple
# source deactivate
```

Note: Make sure that the installed CUDA version is consistent with your PyTorch installation (a quick environment-check sketch is given after this README). If you run into any installation problems, check the detailed error messages in the corresponding `*.log` file.

### Download Datasets

Download the [KoNViD-1k](http://database.mmsp-kn.de/konvid-1k-database.html), [CVD2014](https://www.mv.helsinki.fi/home/msjnuuti/CVD2014/) ([alternative link](https://zenodo.org/record/2646315#.X6OmVC-1H3Q)), [LIVE-Qualcomm](http://live.ece.utexas.edu/research/incaptureDatabase/index.html), and [LIVE-VQC](http://live.ece.utexas.edu/research/LIVEVQC/index.html) datasets. Then, run the following `ln` commands in the root of this project.

```bash
ln -s KoNViD-1k_path KoNViD-1k            # KoNViD-1k_path is your path to the KoNViD-1k dataset
ln -s CVD2014_path CVD2014                # CVD2014_path is your path to the CVD2014 dataset
ln -s LIVE-Qualcomm_path LIVE-Qualcomm    # LIVE-Qualcomm_path is your path to the LIVE-Qualcomm dataset
ln -s LIVE-VQC_path LIVE-VQC              # LIVE-VQC_path is your path to the LIVE-VQC dataset
```

### Training and Evaluating on Multiple Datasets

```bash
# Feature extraction
CUDA_VISIBLE_DEVICES=0 python CNNfeatures.py --database=KoNViD-1k --frame_batch_size=64
CUDA_VISIBLE_DEVICES=1 python CNNfeatures.py --database=CVD2014 --frame_batch_size=32
CUDA_VISIBLE_DEVICES=0 python CNNfeatures.py --database=LIVE-Qualcomm --frame_batch_size=8
CUDA_VISIBLE_DEVICES=1 python CNNfeatures.py --database=LIVE-VQC --frame_batch_size=8

# Training and intra-dataset evaluation, for example
chmod 777 job.sh
./job.sh -g 0 -d K -d C -d L > KCL-mixed-exp-0-10-1e-4-32-40.log 2>&1 &

# Cross-dataset evaluation (after training), for example
chmod 777 cross_job.sh
./cross_job.sh -g 1 -d K -d C -d L -c N -l mixed > KCLtrained-crossN-mixed-exp-0-10.log 2>&1 &
```

### Test Demo

The model weights provided in `models/MDTVSFA.pt` are the weights saved when running the 9th split of KoNViD-1k, CVD2014, and LIVE-Qualcomm (a checkpoint-inspection sketch is given after this README).

```bash
python test_demo.py --model_path=models/MDTVSFA.pt --video_path=data/test.mp4
```

### Contact

Dingquan Li, dingquanli AT pku DOT edu DOT cn.
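
As a quick way to act on the CUDA note in the Install Requirements section, here is a minimal sketch (a hypothetical helper, not part of this repository) that reports which CUDA version the installed PyTorch wheel was built against and whether a GPU is visible before running `CNNfeatures.py`:

```python
# Hypothetical environment check, assuming a standard PyTorch install from
# requirements.txt; it only reports versions and does not modify anything.
import torch

print("torch version:", torch.__version__)
print("built for CUDA:", torch.version.cuda)        # None for CPU-only builds
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("device 0:", torch.cuda.get_device_name(0))
```

If `torch.version.cuda` disagrees with the CUDA toolkit on the machine, or no GPU is visible, the `CUDA_VISIBLE_DEVICES=...` commands above will fall back to errors or CPU execution, which is the situation the README's note warns about.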
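Before running the Test Demo, it can also help to confirm that `models/MDTVSFA.pt` downloaded intact. The sketch below assumes the file is a PyTorch checkpoint loadable with `torch.load`; the exact dictionary layout (plain `state_dict` vs. a wrapping dict) is an assumption, not something documented here:

```python
# Hypothetical sanity check for models/MDTVSFA.pt: list the stored parameter
# names and shapes so a truncated or corrupted download is easy to spot.
import torch

ckpt = torch.load("models/MDTVSFA.pt", map_location="cpu")
state = ckpt.get("state_dict", ckpt) if isinstance(ckpt, dict) else ckpt
for name, value in state.items():
    if hasattr(value, "shape"):
        print(name, tuple(value.shape))
    else:
        print(name, type(value))  # non-tensor entries, if any
```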
