DeepDarkFantasy

Category: Artificial Intelligence / Neural Networks / Deep Learning
Development tool: Haskell
File size: 2195KB
Downloads: 0
Upload date: 2018-05-10 06:46:16
Uploader: sh-1993
Description: DeepDarkFantasy, a programming language for deep learning

File list:
.travis.yml (734, 2018-05-10)
CONTRIBUTING.md (419, 2018-05-10)
DDF (0, 2018-05-10)
DDF\Bimap.hs (2005, 2018-05-10)
DDF\Bool.hs (396, 2018-05-10)
DDF\Char.hs (197, 2018-05-10)
DDF\DBI.hs (3559, 2018-05-10)
DDF\Diff.hs (7241, 2018-05-10)
DDF\DiffWrapper.hs (282, 2018-05-10)
DDF\Double.hs (1060, 2018-05-10)
DDF\Dual.hs (947, 2018-05-10)
DDF\Eval.hs (4879, 2018-05-10)
DDF\Fix.hs (320, 2018-05-10)
DDF\Float.hs (818, 2018-05-10)
DDF\FreeVector.hs (410, 2018-05-10)
DDF\IO.hs (1029, 2018-05-10)
DDF\ImpW.hs (6102, 2018-05-10)
DDF\ImportMeta.hs (579, 2018-05-10)
DDF\Int.hs (538, 2018-05-10)
DDF\Lang.hs (11432, 2018-05-10)
DDF\List.hs (492, 2018-05-10)
DDF\Map.hs (2417, 2018-05-10)
DDF\Meta (0, 2018-05-10)
DDF\Meta\Diff.hs (376, 2018-05-10)
DDF\Meta\DiffWrapper.hs (420, 2018-05-10)
DDF\Meta\Dual.hs (329, 2018-05-10)
DDF\Meta\FreeVector.hs (95, 2018-05-10)
DDF\Meta\Util.hs (1120, 2018-05-10)
DDF\Meta\VectorTF.hs (192, 2018-05-10)
DDF\Option.hs (277, 2018-05-10)
DDF\Ordering.hs (1760, 2018-05-10)
DDF\PE.hs (14324, 2018-05-10)
DDF\Prod.hs (905, 2018-05-10)
DDF\Sam (0, 2018-05-10)
DDF\Sam\Hello.lhs (2959, 2018-05-10)
DDF\Sam\Poly.lhs (3574, 2018-05-10)
DDF\Sam\Xor.lhs (5677, 2018-05-10)
DDF\Show.hs (4400, 2018-05-10)
... ...

# DEPRECATED. # DeepDarkFantasy [![Hackage](https://img.shields.io/hackage/v/DeepDarkFantasy.svg)](http://hackage.haskell.org/package/DeepDarkFantasy) [![Join the chat at https://gitter.im/ThoughtWorksInc/DeepDarkFantasy](https://badges.gitter.im/ThoughtWorksInc/DeepDarkFantasy.svg)](https://gitter.im/ThoughtWorksInc/DeepDarkFantasy?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge) [![Build Status](https://travis-ci.org/ThoughtWorksInc/DeepDarkFantasy.svg?branch=master)](https://travis-ci.org/ThoughtWorksInc/DeepDarkFantasy) [![Coverage Status](https://coveralls.io/repos/github/ThoughtWorksInc/DeepDarkFantasy/badge.svg?branch=master)](https://coveralls.io/github/ThoughtWorksInc/DeepDarkFantasy?branch=master)

### What if we combine Functional Programming and Deep Learning?

As we all know, a neural network is just a computable math expression (and hence a program).

**Can we add 'ordinary' programming constructs to a 'neural network', like branches, loops, pairs, sums, lists, and functions?** Of course, I must still be able to train the network.

**Yes! I have added all the above constructs, and I am planning to add more.** They all have their own special gradient structure to propagate loss accordingly. However, at the end of the day, what is updated is only a container of doubles (or some other representation of the reals). Having those constructs only makes networks easier to write; it does not offer fundamentally different learning capability.

----------

Can we make the language typed so we can detect errors before we train the network?

**Sort of.** I am able to type most constructs, but I am having trouble adding higher-kinded/generic types. However, they can be written as Haskell functions (macros in DDF).

----------

Can we make the language modular and extensible, so everyone can write all sorts of Chuck Norris moves into the language?

**Yes Yes Yes!** The whole language is structured in [finally tagless style](http://okmij.org/ftp/tagless-final/JFP.pdf), so it is possible to add new operations/constructors and still retain type safety. In fact, there isn't even a core language! Every feature (functions, doubles, back propagation, pretty printing) is added as an ordinary plugin, so you can extend/subset the language as you wish.

# Patchouli Go!

You should read the [blog](http://marisa.moe/2017/DLPL/) before anything else.

We have a few examples of using DDF:

[Hello world](DDF/Sam/Hello.lhs)

[Solving polynomial equations](DDF/Sam/Poly.lhs)

[Training an XOR network](DDF/Sam/Xor.lhs)

If you want to look into the code base, it is necessary to understand [Finally Tagless](http://www.cs.cornell.edu/info/projects/nuprl/PRLSeminar/PRLSeminar2011/Chung-chiehShan-FinallyTaglessPartiallyEevaluated.pdf).

# FA Q

Q: How is the speed?

A: Unoptimized. This is more a proof of concept that we can use functions in neural networks than something that will get you a good Kaggle score right off the shelf. We are working on partial evaluation.

Q: Why does this work theoretically?

A: See [DDFADC](https://github.com/MarisaKirisame/DDFADC).

Q: What does this have to do with [Yang Bo](https://github.com/Atry)'s [DeepLearning.scala](https://github.com/ThoughtWorksInc/DeepLearning.scala/)?

A: We worked on a prototype together for 2-3 months, then split apart.

Q: You seem to have a space in FAQ.

A: I like it that way.

Q: What are you currently working on?

A: I am trying to add a neural network demo.

# Thank You Sir

This is heavily inspired by [Neural Networks, Types, and Functional Programming](http://colah.github.io/posts/2015-09-NN-Types-FP/), and my colleague, [Yang Bo](https://github.com/Atry).

Also, I'd like to thank [dram](https://github.com/dramforever) for getting it to work without IncoherentInstances and for fixing it on stack, cabal & travis. And [izgzhen](https://github.com/izgzhen) helped with the initial version of partial evaluation.
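As an aside, the finally tagless idea described above can be sketched in a few lines of Haskell. This is a hypothetical miniature, not DDF's actual classes (those live in `DDF/Lang.hs` and friends): each language feature is a type class over an interpreter type `repr`, and evaluation, pretty printing, and even differentiation are just instances, so a "back propagation plugin" needs no change to the core.

```haskell
-- A minimal finally tagless sketch (hypothetical names, not DDF's API):
-- one expression language, three independent interpreters.
class Lang repr where
  lit :: Double -> repr          -- a constant
  add :: repr -> repr -> repr
  mul :: repr -> repr -> repr
  var :: repr                    -- the single variable we differentiate by

-- Interpreter 1: evaluation at a point.
newtype Eval = Eval { runEval :: Double -> Double }

instance Lang Eval where
  lit x   = Eval (const x)
  add a b = Eval (\x -> runEval a x + runEval b x)
  mul a b = Eval (\x -> runEval a x * runEval b x)
  var     = Eval id

-- Interpreter 2: pretty printing.
newtype Pretty = Pretty { runPretty :: String }

instance Lang Pretty where
  lit x   = Pretty (show x)
  add a b = Pretty ("(" ++ runPretty a ++ " + " ++ runPretty b ++ ")")
  mul a b = Pretty ("(" ++ runPretty a ++ " * " ++ runPretty b ++ ")")
  var     = Pretty "x"

-- Interpreter 3: forward-mode differentiation via dual numbers,
-- added as an "ordinary plugin" without touching the other two.
-- runDiff returns (value, derivative) at a point.
newtype Diff = Diff { runDiff :: Double -> (Double, Double) }

instance Lang Diff where
  lit x   = Diff (const (x, 0))
  add a b = Diff (\x -> let (u, du) = runDiff a x
                            (v, dv) = runDiff b x
                        in (u + v, du + dv))
  mul a b = Diff (\x -> let (u, du) = runDiff a x
                            (v, dv) = runDiff b x
                        in (u * v, du * v + u * dv))  -- product rule
  var     = Diff (\x -> (x, 1))

-- One term, three interpretations: x^2 + 3x.
term :: Lang repr => repr
term = add (mul var var) (mul (lit 3) var)
```

Here `runEval term 2` gives `10`, `runPretty term` gives `"((x * x) + (3.0 * x))"`, and `snd (runDiff term 2)` gives the derivative `7` -- and because `term` is polymorphic in `repr`, a new feature or a new interpreter is just another class or instance, which is the extensibility claim above.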
You can be the next contributor!

![I Want You](img/I_Want_You.png "Baka!")

[Image courtesy of Milk Mage](http://www.pixiv.net/member_illust.php?id=461351), [acquired here](img/I_Want_You_Image_Courtesy.png).

Please look at [This Issue](https://github.com/ThoughtWorksInc/DeepDarkFantasy/issues/174) and help solve it.
