python-dbn-master
Category: image processing
Development tool: Python
File size: 30KB
Uploaded: 2018-03-09 14:28:42 by xixi362878733
Description: handwritten digit recognition based on a DBN, written in Python
File list:
dbn (0, 2013-09-23)
dbn\__init__.py (96, 2013-09-23)
dbn\base.py (3107, 2013-09-23)
dbn\connections.py (762, 2013-09-23)
dbn\dbn.py (2311, 2013-09-23)
dbn\genchar.py (2273, 2013-09-23)
dbn\layers.py (2448, 2013-09-23)
dbn\logistic_regression.py (1821, 2013-09-23)
dbn\multi.py (1, 2013-09-23)
dbn\onehot.py (755, 2013-09-23)
dbn\rbm.py (4699, 2013-09-23)
dbn\recurrent_layer.py (940, 2013-09-23)
dbn\sparse.py (1289, 2013-09-23)
dbn\utils.py (553, 2013-09-23)
examples (0, 2013-09-23)
examples\8_autoencoder.py (432, 2013-09-23)
examples\multinomial.py (649, 2013-09-23)
examples\text_data_experiment.py (1246, 2013-09-23)
old_stuff (0, 2013-09-23)
old_stuff\assoc_words (74449, 2013-09-23)
old_stuff\cpm.py (1902, 2013-09-23)
old_stuff\rbm.py (2632, 2013-09-23)
Deep Belief Nets for Python
===========================
`python-dbn` aims to make it easy to experiment with different deep learning
architectures. Right now, its capabilities are limited to specifying deep
feed-forward networks.
An example:
```python
from dbn import DBN
from dbn.layers import OneHotSoftmax, Sigmoid

net = DBN([
    OneHotSoftmax(8),
    Sigmoid(3)
], 3)
net.fit(train_data, train_labels)
net.predict(test_data)
```
### There's still a lot left to do
- **Persistence of learnt weights**
Of highest importance
- **Multiple visible layer sets per RBM**
Multiple types of input for each RBM. Hopefully this will make it possible
to experiment with different ways of feeding data into the network.
- **Auto-encoders**
Building on to the DBN class to create an intuitive way to
build auto-encoders just by specifying the layer dimensions.
- **Documentation**
I'd like the library to be as thoroughly documented as scikit-learn.
I've learnt a ton from that library, largely because its documentation is
so complete, going right down to the theory, with just enough detail that
you can quickly understand and start implementing. My hope is that this
library achieves something similar.
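On the persistence point, one minimal sketch of how saving and restoring learnt weights might look. Note the `layers` attribute and per-layer `W`/`b` arrays are assumptions for illustration, not python-dbn's actual internals:

```python
import numpy as np

def save_weights(net, path):
    """Save each layer's weight matrix and bias vector into one .npz file.

    Assumes a hypothetical `net.layers` sequence where every layer exposes
    NumPy arrays `W` (weights) and `b` (biases).
    """
    arrays = {}
    for i, layer in enumerate(net.layers):
        arrays["W%d" % i] = layer.W
        arrays["b%d" % i] = layer.b
    np.savez(path, **arrays)

def load_weights(net, path):
    """Restore weights saved by save_weights into an identically shaped net."""
    arrays = np.load(path)
    for i, layer in enumerate(net.layers):
        layer.W = arrays["W%d" % i]
        layer.b = arrays["b%d" % i]
```

A single `.npz` archive keeps all layers together, so a saved model can't get out of sync the way per-layer files could.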
### Getting the damn theory right
I'm still really new at this, so I'd appreciate any help I can get with the
theory side of things.
Things I'm still unsure of that are already implemented:
- When do I stop training for RBMs? I've looked at Hinton's tutorial on
training RBMs, but it doesn't seem to be helping for some of the tasks
I've tested it with.
- With one-hot input to a softmax layer, should my reconstruction also turn
on just one neuron, sampled from the probability densities? Or should I
just leave the values as probabilities? I've heard (though I can't
remember from where) that leaving them as probabilities results in slower
learning due to the more distributed nature of the updates.
- Learning rate decay with RBMs: Yay or nay?
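I can't answer these here, but the two reconstruction options from the softmax question can at least be stated concretely. A sketch, where the batch shape, the `softmax` helper, and the `reconstruct` function are all illustrative assumptions rather than python-dbn's API:

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over the last axis.
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def reconstruct(activations, sample=True, rng=None):
    """Two ways to reconstruct a one-hot softmax visible layer.

    sample=True  -> turn on exactly one unit per row, drawn from the
                    softmax probabilities (stochastic, one-hot).
    sample=False -> pass the probabilities through unchanged (the
                    smoother, more distributed option).
    """
    rng = rng or np.random.default_rng()
    probs = softmax(activations)
    if not sample:
        return probs
    out = np.zeros_like(probs)
    for i, p in enumerate(probs):
        out[i, rng.choice(p.shape[0], p=p)] = 1.0
    return out
```

The sampled version keeps reconstructions on the same one-hot support as the data; the probability version carries more information per update but spreads it across units.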