itwmm-master

Category: Video capture & editing
Development tool: Python
File size: 2087KB
Downloads: 0
Upload date: 2019-11-07 18:02:47
Uploader: 软软的泡泡糖
Description: expression master net

File list:
LICENSE.txt (1536, 2018-06-29)
evaluation (0, 2018-06-29)
evaluation\__init__.py (1, 2018-06-29)
evaluation\benchmark.py (7215, 2018-06-29)
evaluation\demo.ipynb (271871, 2018-06-29)
evaluation\eos.py (1140, 2018-06-29)
evaluation\eos_landmark_settings.pkl (8110474, 2018-06-29)
evaluation\fw_on_eos_low_res_settings.pkl (2224930, 2018-06-29)
evaluation\kfeval.py (2096, 2018-06-29)
evaluation\lsfm_lms_indexes.pkl (212, 2018-06-29)
evaluation\trilist.pkl (915422, 2018-06-29)
itwmm (0, 2018-06-29)
itwmm\__init__.py (335, 2018-06-29)
itwmm\base.py (1282, 2018-06-29)
itwmm\fitting (0, 2018-06-29)
itwmm\fitting\__init__.py (129, 2018-06-29)
itwmm\fitting\algorithm.py (7076, 2018-06-29)
itwmm\fitting\base.py (1982, 2018-06-29)
itwmm\fitting\derivatives.py (9677, 2018-06-29)
itwmm\fitting\hessian.py (5208, 2018-06-29)
itwmm\fitting\initialization.py (745, 2018-06-29)
itwmm\fitting\jacobian.py (3278, 2018-06-29)
itwmm\fitting\projectout.py (1875, 2018-06-29)
itwmm\model (0, 2018-06-29)
itwmm\model\__init__.py (3637, 2018-06-29)
itwmm\model\extractimage.py (5125, 2018-06-29)
itwmm\model\math.py (3388, 2018-06-29)
itwmm\visualize.py (4473, 2018-06-29)
notebooks (0, 2018-06-29)
notebooks\1. Building an "in-the-wild" texture model.ipynb (11346, 2018-06-29)
notebooks\2. Creating an expressive 3DMM.ipynb (7644, 2018-06-29)
notebooks\3. Fitting "in-the-wild" images.ipynb (8368, 2018-06-29)
notebooks\4. Fitting "in-the-wild" videos.ipynb (9012, 2018-06-29)
setup.py (324, 2018-06-29)

# In The Wild 3D Morphable Models

This repository provides the source code for the 3D reconstruction of "in-the-wild" faces in **images** and **videos**, as described in the following papers:

> [**3D Reconstruction of "In-the-Wild" Faces in Images and Videos**](https://doi.org/10.1109/TPAMI.2018.2832138)
> J. Booth, A. Roussos, E. Ververas, E. Antonakos, S. Ploumpis, Y. Panagakis, S. Zafeiriou.
> Transactions on Pattern Analysis and Machine Intelligence (T-PAMI), accepted for publication (2018).

> [**3D Face Morphable Models "In-the-Wild"**](https://ibug.doc.ic.ac.uk/media/uploads/documents/booth2017itw3dmm.pdf)
> J. Booth, E. Antonakos, S. Ploumpis, G. Trigeorgis, Y. Panagakis, S. Zafeiriou.
> CVPR 2017.

If you use this code, **please cite the above papers**.

The following topics are covered, each with its own dedicated notebook:

1. Building an "in-the-wild" texture model
2. Creating an expressive 3DMM
3. Fitting "in-the-wild" images
4. Fitting "in-the-wild" videos

### Release Notes (28/06/2018)

- Major update: the problems of the previous preliminary version have been addressed, and the current version is fully functional.

### Prerequisites

To leverage this codebase, users need to independently source the following items to construct an "in-the-wild" 3DMM:

- A collection of "in-the-wild" images coupled with 3D fits
- A parametric facial shape model of identity and expression

To then use this model, users will need to provide data to fit on:

- "In-the-wild" images or videos with iBUG 68 landmark annotations

Examples are given for working with some common facial models (e.g. LSFM), and it shouldn't be too challenging to adapt these examples for alternative inputs. Just bear in mind that fitting parameters will need to be tuned when working with different models.
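The "parametric facial shape model of identity and expression" above is typically a linear model: a shape instance is the mean shape plus weighted combinations of identity and expression components. A minimal NumPy sketch with a toy mesh (all names and dimensions here are illustrative, not the itwmm API):

```python
import numpy as np

rng = np.random.default_rng(0)
n_vertices = 5      # toy mesh; real models (e.g. LSFM) have tens of thousands
n_id, n_exp = 3, 2  # numbers of identity / expression components

# Linear identity + expression model: s = mean + U_id @ p_id + U_exp @ p_exp
mean_shape = rng.standard_normal(3 * n_vertices)
U_id = rng.standard_normal((3 * n_vertices, n_id))    # identity basis
U_exp = rng.standard_normal((3 * n_vertices, n_exp))  # expression basis

def instance(p_id, p_exp):
    """Generate a 3D shape instance as an (n_vertices, 3) point array."""
    s = mean_shape + U_id @ p_id + U_exp @ p_exp
    return s.reshape(n_vertices, 3)

# With all parameters at zero we recover the mean shape.
points = instance(np.zeros(n_id), np.zeros(n_exp))
assert np.allclose(points.ravel(), mean_shape)
```

Fitting then amounts to recovering the identity/expression parameters (plus camera pose and texture parameters) that best explain an observed image or video frame.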
### Notes

Please note that this public version of the code differs in some respects from what is described in our aforementioned papers (T-PAMI and CVPR):

- The default parameters provided here generally work quite well, but the parameters used in our papers were fine-tuned in separate sets of experiments.
- The "in-the-wild" texture model that can be built by following Notebook 1 is a simplified version of the texture model used in our papers, because Notebook 1 uses a smaller set of images, coming only from (Zhu, Xiangyu, et al., CVPR 2016). The video fitting in Notebook 4 also uses this texture model, instead of the video-specific texture model described in our T-PAMI paper.
- The video fitting in Notebook 4 uses a simpler initialisation than the one described in our T-PAMI paper and its supplementary material: a per-frame 3D pose estimation using the mean shape of the adopted parametric facial shape model.

These differences result in a simplified version of our image and video fitting algorithms. In practice, we have observed that these simplifications do not drastically affect accuracy and still yield acceptable results.

### Installation

1. Follow the instructions to [install the Menpo Project with conda](http://www.menpo.org/installation/conda.html).
2. Whilst in the conda environment containing menpo, run `pip install git+https://github.com/menpo/itwmm`.
3. Download a [copy of the code](https://github.com/menpo/itwmm/archive/master.zip) into your Downloads folder.
4. Run `jupyter notebook` and navigate to the `notebooks` directory in the downloaded folder.
5. Explore the notebooks in order to understand how to use this codebase.

# Evaluation Code

We also provide evaluation code, under the `evaluation` folder.
This will help you evaluate and compare 3D face reconstruction methods using our 3dMDLab & 4DMaja benchmarks (available on the [iBUG website](https://ibug.doc.ic.ac.uk/resources/itwmm/)) or other similar benchmarks. Please see the demo notebook `evaluation/demo.ipynb` ("Reconstruction Evaluation Demo"), which also contains detailed comments.
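As a rough illustration of the kind of comparison such a benchmark performs, a common dense error metric is the mean per-vertex Euclidean distance between a reconstructed mesh and the ground truth, normalised by a reference distance such as the interocular distance. This is a simplified sketch under that assumption, not the exact protocol implemented in `evaluation/benchmark.py`:

```python
import numpy as np

def dense_reconstruction_error(recon, gt, normaliser):
    """Mean per-vertex Euclidean distance between a reconstructed mesh and a
    ground-truth mesh (assumed to be in dense correspondence), divided by a
    normalising distance (e.g. the ground truth's interocular distance)."""
    per_vertex = np.linalg.norm(recon - gt, axis=1)
    return per_vertex.mean() / normaliser

# Toy example: three vertices, reconstruction uniformly offset by 0.1
# along each coordinate, so every vertex is sqrt(3) * 0.1 away.
gt = np.array([[0.0, 0.0, 0.0],
               [1.0, 0.0, 0.0],
               [0.0, 1.0, 0.0]])
recon = gt + 0.1
err = dense_reconstruction_error(recon, gt, normaliser=1.0)
```

In a real benchmark the two meshes would first be rigidly aligned and put into dense correspondence before the error is computed.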
