2-layers-simple-ANN.rar

  • PUDN user
    About the author
  • matlab
    Development tool
  • 16KB
    File size
  • rar
    File format
  • 0
    Times favorited
  • 1 point
    Download points
  • 17
    Downloads
  • 2009-11-26 20:23
    Upload date
Implements a two-layer neural network without calling MATLAB's own Neural Network Toolbox
2-layers-simple-ANN.rar
  • Two-layer ANN (without calling MATLAB's toolbox)
  • NN2weights.m
    168B
  • NN2train.m
    1.5KB
  • General NN2.pdf
    18.8KB
  • NN2.m
    488B
Description
Simple Neural Network.

The enclosed M-files implement a very simple 2-layer neural network. The three M-files are:
  • NN2weights
  • NN2train
  • NN2

NN2weights - creates the initial random weights used in the training process

InitWeights=NN2weights(netsize)

netsize is a vector containing the number of input units and output units. If you have a network with two input units and one neuron, netsize = [2 1].

InitWeights is a structure containing four fields. InitWeights.inputs is the weights connecting the input layer to the output neurons. The weights associated with a specific neuron are stored along the rows, so if you had a network with two inputs, I1 and I2, and two outputs, O1 and O2, with connecting weights W11, W12, W21, W22 (the first number being the input node and the second number being the output node), the weight matrix would be stored as

InitWeights.inputs=[W11 W21;W12 W22]

This makes the computation of the activation function much simpler.

NN2 - implements the neural network

OutputValue=NN2(trainedweights,inputvector,af)

trainedweights - weight values that implement the desired function of the neural network
inputvector - column vector containing the input of the neural network
af - activation function to be used:
  'p' for the perceptron learning rule
  anything else for the logistic learning rule

Let's assume we have trained our network to implement the logical AND function with two inputs and we wish to use that network. The function call would be

result=NN2(tweights,[1 1]','p')

NN2train - function used to train the two-layer neural network

trainedweights=NN2train(InitWeights,TDIN,TDOUT,LR,ErTh,af)

InitWeights - obtained by using the function NN2weights(netsize)
TDIN - input training data
TDOUT - output training data
LR - learning rate of the neural network
ErTh - error threshold, the value the training algorithm tries to achieve by altering the weights
af - activation function to be used in training; 'p' for the perceptron or anything else for the logistic function

To continue with the AND function above:

TDIN = [0 0; 0 1; 1 0; 1 1]'
TDOUT = [0 0 0 1]
LR = .1
ErTh = .01
af = 'p'
trainedweights=NN2train(InitWeights,TDIN,TDOUT,LR,ErTh,af)

These trained weights can then be used to run the neural network.
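The workflow described above (create random weights, train on the AND patterns, then run the network) can be sketched outside MATLAB as well. The following is a minimal Python/NumPy sketch of the same idea, not the package's actual code: the function names mirror the M-files, but the bias field, the uniform initialization range, and the training loop (the standard perceptron/delta rule) are assumptions; of the four fields the description mentions, only a weight matrix and a bias are modeled here.

```python
import numpy as np

def nn2_weights(netsize, seed=None):
    # netsize = [n_inputs, n_outputs]; the neuron's weights sit along the rows,
    # matching the [W11 W21; W12 W22] layout described above.
    rng = np.random.default_rng(seed)
    n_in, n_out = netsize
    return {"inputs": rng.uniform(-0.5, 0.5, size=(n_out, n_in)),
            "bias": rng.uniform(-0.5, 0.5, size=(n_out, 1))}

def nn2(weights, x, af="p"):
    # Forward pass: a = W @ x + b, then threshold ('p') or logistic activation.
    a = weights["inputs"] @ x + weights["bias"]
    if af == "p":
        return (a >= 0).astype(float)
    return 1.0 / (1.0 + np.exp(-a))

def nn2_train(weights, tdin, tdout, lr, erth, af="p", max_epochs=10000):
    # Delta rule: w <- w + lr * error * input, repeated over the training
    # columns until the total absolute error drops to the threshold.
    for _ in range(max_epochs):
        total_err = 0.0
        for x, t in zip(tdin.T, tdout.T):
            x = x.reshape(-1, 1)
            t = t.reshape(-1, 1)
            e = t - nn2(weights, x, af)
            weights["inputs"] += lr * e @ x.T
            weights["bias"] += lr * e
            total_err += float(np.abs(e).sum())
        if total_err <= erth:
            break
    return weights

# Train on the logical AND function, as in the example above.
TDIN = np.array([[0, 0, 1, 1], [0, 1, 0, 1]], dtype=float)  # columns are samples
TDOUT = np.array([[0, 0, 0, 1]], dtype=float)
w = nn2_train(nn2_weights([2, 1], seed=0), TDIN, TDOUT, lr=0.1, erth=0.01, af="p")
print(nn2(w, np.array([[1], [1]]), "p"))  # prints [[1.]] once trained
```

AND is linearly separable, so the perceptron rule is guaranteed to converge here; with the logistic activation the same loop performs plain gradient-free delta-rule updates on the squared error instead of hard thresholding.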
    Related downloads
    • ANN-BP.rar
      In two parts: the first covers basic neural network concepts, the second uses a BP network to implement a function-approximation algorithm.
    • ANN-BP.rar
      For the data in "data2.m", half the data is used to design a classifier with the ANN-BP algorithm, and the other half is used to test the classifier's performance.
    • BP-ANN-matlab-ok.zip
      Approximation with a BP feedforward neural network, implemented quickly with MATLAB; good optimization results.
    • Bp-Ann.rar
      A VC++ implementation of a BP neural network, heavily commented and easy to learn from.
    • Design of ANN-BP Classifier in MATLAB.zip
      A walkthrough document on designing an ANN-BP classifier in MATLAB; worth studying.
    • BP-ANN.rar
      A BP network is a multilayer feedforward network trained by error backpropagation, and it is currently in wide use. It can learn and store a large number of input-output mappings without requiring the mathematical equations describing those mappings in advance. This document presents an application example of a BP neural network.
    • ANN-Output-Calculation-by-BP-algm.zip
      An artificial neural network is one of the recently developed soft-computing techniques, working on the basis of the human neural system. Training and testing are the two important processes in artificial neural ...
    • BPANN.rar
      A fairly simple attempt at neural networks made during my research; beginner-level, with much room for improvement. Advice from experts is welcome.
    • ANN-BP classifier design
      For the data in "data", half the data is used to design a classifier with an artificial neural network algorithm, and the other half is used to validate it.
    • Classifier design with the ANN-BP algorithm
      ANN-BP stands for the artificial neural network backpropagation algorithm. Its main idea is to propagate the output layer's error backward layer by layer, so that hidden-layer errors can be computed indirectly. The algorithm has two phases: in the first, input information is propagated from the input layer through the hidden layers, computing each unit's output; in the second, the output error ...