CNN

Category: Artificial Intelligence / Neural Networks / Deep Learning
Development tool: Python
File size: 11233KB
Downloads: 24
Upload date: 2018-11-02 10:27:49
Uploader: sdauzcy
Description: Through a convolutional network, image features are extracted automatically; training yields effective weights, which are then used for image classification.

File list:
CNN\.project (374, 2018-10-10)
CNN\.pydevproject (435, 2018-10-10)
CNN\docs.md (2768, 2018-10-10)
CNN\evaluate.py (8510, 2018-10-10)
CNN\examples\content\chicago.jpg (190251, 2018-10-10)
CNN\examples\content\fox.mp4 (1554735, 2018-10-10)
CNN\examples\content\stata.jpg (435233, 2018-10-10)
CNN\examples\results\chicago_la_muse.jpg (433097, 2018-10-10)
CNN\examples\results\chicago_rain_princess.jpg (414113, 2018-10-10)
CNN\examples\results\chicago_the_scream.jpg (283993, 2018-10-10)
CNN\examples\results\chicago_udnie.jpg (308676, 2018-10-10)
CNN\examples\results\chicago_wave.jpg (390489, 2018-10-10)
CNN\examples\results\chicago_wreck.jpg (350836, 2018-10-10)
CNN\examples\results\fox_udnie.gif (4155150, 2018-10-10)
CNN\examples\results\stata_udnie.jpg (304822, 2018-10-10)
CNN\examples\results\stata_udnie_header.jpg (350952, 2018-10-10)
CNN\examples\style\la_muse.jpg (220115, 2018-10-10)
CNN\examples\style\rain_princess.jpg (286356, 2018-10-10)
CNN\examples\style\the_scream.jpg (54881, 2018-10-10)
CNN\examples\style\the_shipwreck_of_the_minotaur.jpg (826318, 2018-10-10)
CNN\examples\style\udnie.jpg (87874, 2018-10-10)
CNN\examples\style\wave.jpg (123262, 2018-10-10)
CNN\examples\thumbs\la_muse.jpg (146728, 2018-10-10)
CNN\examples\thumbs\rain_princess.jpg (143606, 2018-10-10)
CNN\examples\thumbs\the_scream.jpg (91973, 2018-10-10)
CNN\examples\thumbs\the_shipwreck_of_the_minotaur.jpg (148470, 2018-10-10)
CNN\examples\thumbs\udnie.jpg (89044, 2018-10-10)
CNN\examples\thumbs\wave.jpg (118870, 2018-10-10)
CNN\setup.sh (209, 2018-10-10)
CNN\src\optimize.py (5924, 2018-10-10)
CNN\src\transform.py (2650, 2018-10-10)
CNN\src\utils.py (975, 2018-10-10)
CNN\src\vgg.py (1993, 2018-10-10)
CNN\style.py (6161, 2018-10-10)
CNN\transform_video.py (1986, 2018-10-10)
CNN\examples\content (0, 2018-10-10)
CNN\examples\results (0, 2018-10-10)
CNN\examples\style (0, 2018-10-10)
... ...

## Fast Style Transfer in [TensorFlow](https://github.com/tensorflow/tensorflow)

Add styles from famous paintings to any photo in a fraction of a second! [You can even style videos!](#video-stylization)

It takes 100 ms on a 2015 Titan X to style the MIT Stata Center (1024x680) like Udnie, by Francis Picabia.

Our implementation is based on a combination of Gatys' [A Neural Algorithm of Artistic Style](https://arxiv.org/abs/1508.06576), Johnson's [Perceptual Losses for Real-Time Style Transfer and Super-Resolution](http://cs.stanford.edu/people/jcjohns/eccv16/), and Ulyanov's [Instance Normalization](https://arxiv.org/abs/1607.08022).

### License

Copyright (c) 2016 Logan Engstrom. Contact me for commercial use (or rather any use that is not academic research) (email: engstrom at my university's domain dot edu). Free for research use, as long as proper attribution is given and this copyright notice is retained.

## Video Stylization

Here we transformed every frame in a video, then combined the results. [Click to go to the full demo on YouTube!](https://www.youtube.com/watch?v=xVJwwWQlQ1o) The style here is Udnie, as above. See how to generate these videos [here](#stylizing-video)!

## Image Stylization

We added styles from various paintings to a photo of Chicago. Click on thumbnails to see full applied style images.
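The Gatys-style loss referenced above compares Gram matrices of deep feature maps between the generated and style images. As a rough illustration of that idea only (a hypothetical NumPy sketch on random arrays, not the repo's `src/optimize.py`, which works on VGG activations in TensorFlow):

```python
import numpy as np

def gram_matrix(features):
    """Gram matrix of a feature map, the style representation in Gatys et al.
    features: (height, width, channels) activations from one network layer."""
    h, w, c = features.shape
    f = features.reshape(h * w, c)
    return f.T @ f / (h * w * c)  # (c, c), normalized by feature-map size

def style_loss(gen_feats, style_feats):
    """Squared Frobenius distance between the two Gram matrices."""
    g_gen, g_style = gram_matrix(gen_feats), gram_matrix(style_feats)
    return np.sum((g_gen - g_style) ** 2)

# Toy stand-in for one layer's activations (8x8 spatial, 16 channels).
feats = np.random.randn(8, 8, 16)
```

Because the Gram matrix discards spatial layout and keeps only channel correlations, matching it transfers texture and color statistics rather than content.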


## Implementation Details

Our implementation uses TensorFlow to train a fast style transfer network. We use roughly the same transformation network as described in Johnson, except that batch normalization is replaced with Ulyanov's instance normalization, and the scaling/offset of the output `tanh` layer is slightly different. We use a loss function close to the one described in Gatys, using VGG19 instead of VGG16 and typically using "shallower" layers than in Johnson's implementation (e.g. we use `relu1_1` rather than `relu1_2`). Empirically, this results in larger-scale style features in transformations.

## Documentation

### Training Style Transfer Networks

Use `style.py` to train a new style transfer network. Run `python style.py` to view all the possible parameters. Training takes 4-6 hours on a Maxwell Titan X. [More detailed documentation here](docs.md#stylepy). **Before you run this, you should run `setup.sh`**. Example usage:

    python style.py --style path/to/style/img.jpg \
      --checkpoint-dir checkpoint/path \
      --test path/to/test/img.jpg \
      --test-dir path/to/test/dir \
      --content-weight 1.5e1 \
      --checkpoint-iterations 1000 \
      --batch-size 20

### Evaluating Style Transfer Networks

Use `evaluate.py` to evaluate a style transfer network. Run `python evaluate.py` to view all the possible parameters. Evaluation takes 100 ms per frame (when batch size is 1) on a Maxwell Titan X, and several seconds per frame on a CPU. [More detailed documentation here](docs.md#evaluatepy). **Models for evaluation are [located here](https://drive.google.com/drive/folders/0B9jhaT37ydSyRk9UX0wwX3BpMzQ?usp=sharing)**. Example usage:

    python evaluate.py --checkpoint path/to/style/model.ckpt \
      --in-path dir/of/test/imgs/ \
      --out-path dir/for/results/

### Stylizing Video

Use `transform_video.py` to transfer style into a video. Run `python transform_video.py` to view all the possible parameters. Requires `ffmpeg`. [More detailed documentation here](docs.md#transform_videopy). Example usage:

    python transform_video.py --in-path path/to/input/vid.mp4 \
      --checkpoint path/to/style/model.ckpt \
      --out-path out/video.mp4 \
      --device /gpu:0 \
      --batch-size 4

### Requirements

You will need the following to run the above:

- TensorFlow 0.11.0
- Python 2.7.9, Pillow 3.4.2, scipy 0.18.1, numpy 1.11.2
- If you want to train (and don't want to wait for 4 months):
  - A decent GPU
  - All the required NVIDIA software to run TF on a GPU (cuda, etc)
- ffmpeg 3.1.3 if you want to stylize video

### Citation

```
@misc{engstrom2016faststyletransfer,
  author = {Logan Engstrom},
  title = {Fast Style Transfer},
  year = {2016},
  howpublished = {\url{https://github.com/lengstrom/fast-style-transfer/}},
  note = {commit xxxxxxx}
}
```

### Attributions/Thanks

- This project could not have happened without the advice (and GPU access) given by [Anish Athalye](http://www.anishathalye.com/).
- The project also borrowed some code from Anish's [Neural Style](https://github.com/anishathalye/neural-style/).
- Some readme/docs formatting was borrowed from Justin Johnson's [Fast Neural Style](https://github.com/jcjohnson/fast-neural-style).
- The image of the Stata Center at the very beginning of the README was taken by [Juan Paulo](https://juanpaulo.me/).

### Related Work

- Michael Ramos ported this network [to use CoreML on iOS](https://medium.com/@rambossa/diy-prisma-fast-style-transfer-app-with-coreml-and-tensorflow-817c3b90dacd).
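The instance-normalization swap noted in Implementation Details normalizes each image's each channel over its own spatial dimensions (unlike batch normalization, which averages across the batch). A minimal NumPy sketch of the idea, with made-up names and a hypothetical `eps`, not the repo's `src/transform.py`:

```python
import numpy as np

def instance_norm(x, scale, shift, eps=1e-3):
    """Instance normalization (Ulyanov et al.): normalize every (image, channel)
    pair over its own spatial extent, then apply a learned scale and shift.
    x has shape (batch, height, width, channels)."""
    mu = x.mean(axis=(1, 2), keepdims=True)   # per-image, per-channel mean
    var = x.var(axis=(1, 2), keepdims=True)   # per-image, per-channel variance
    return scale * (x - mu) / np.sqrt(var + eps) + shift

# Toy example: 2 images, 4x4 spatial, 3 channels, identity scale/shift.
x = np.random.randn(2, 4, 4, 3)
y = instance_norm(x, scale=np.ones(3), shift=np.zeros(3))
```

Because the statistics are computed per image, stylization does not depend on which other images happen to share the batch, which is one reason it works well for this task.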
