ART2source-Skapura
Category: Artificial Intelligence / Neural Networks / Deep Learning
Development tool: C/C++
File size: 32KB
Downloads: 95
Upload date: 2006-04-27 09:10:19
Uploader: badge000
Description: ART2 neural network source code
File list:
ACTIVATE.C (3872, 1996-01-17)
ART1.C (17979, 1996-01-17)
ART2.C (21821, 1996-01-17)
ART2TEST.DAT (23, 1996-01-17)
ARTTEST.DAT (111, 1996-01-17)
BPN.C (16796, 1996-01-17)
BUILDNET.C (4541, 1996-01-17)
CPN.C (19281, 1996-01-17)
EXEMPLAR.C (9089, 1996-01-17)
INITNET.C (2712, 1996-01-17)
LETTERS.DAT (3735, 1996-01-17)
NETSTRCT.H (2428, 1996-01-17)
PROPGATE.C (1734, 1996-01-17)
SHOWNET.C (2319, 1996-01-17)
UTILITY.C (1002, 1996-01-17)
XOR.DAT (179, 1996-01-17)
***********************************************************
Notice:
This code is copyright (C) 1995 by David M. Skapura. It
may be used as is, or modified to suit the requirements of
a specific application without permission of the author.
There are no royalty fees for use of this code, as long as
the original author is credited as part of the final work.
In exchange for this royalty free use, the user agrees
that no guarantee or warranty is given or implied.
**********************************************************
The code in the .c and .h files accompanying this text is
used to create, train, save, and restore neural network
models simulated on standard computer systems. Because we
could not predict which computers might be used to simulate
these networks, we have made the following assumptions in
all of our models:
1. All input and output from the simulators is
accomplished through data stored in files. Each simulator
has its own functions to read and write data files, as
required.
2. No attempt has been made to create a user interface for
these simulators. Rather, the simulators are all
implemented in ANSI C to improve portability, at the
expense of the user interface.
3. Where possible, all simulators share common functions,
such as those that create and connect layer structures in
computer memory.
4. Each paradigm is defined by a file named "<paradigm>.c,"
where <paradigm> is the acronym for the network model. For
example, the file "bpn.c" is the top level file for the
backpropagation network paradigm, while "art1.c" contains
the definition of the ART1 model.
5. In all paradigms, the layer is the central construct
upon which the network model is built. The code will allow
you to create and test networks of various sizes and
configurations, but you are ultimately responsible for
ensuring that the network topology is correct for your
application.
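Point 5 above can be illustrated with a minimal layer structure. This is a sketch only: the simulators' actual structures are defined in NETSTRCT.H, and the field names below are assumptions for illustration, not the author's definitions.

```c
#include <stdlib.h>

/* Illustrative layer structure: the layer owns its units' outputs
   and the weights on its incoming connections, and points back at
   the layer that feeds it. */
typedef struct layer {
    int           units;    /* number of processing units on this layer */
    double       *outputs;  /* one output value per unit                */
    double      **weights;  /* weights[i][j]: unit i's jth input weight */
    struct layer *prev;     /* layer feeding this one; NULL for input   */
} layer;

/* Allocate a layer of the given size and connect it to the layer
   before it (pass NULL for the input layer). */
layer *make_layer (int units, layer *prev)
{
    int i;
    layer *l = malloc (sizeof *l);

    l->units   = units;
    l->outputs = calloc (units, sizeof *l->outputs);
    l->prev    = prev;
    l->weights = NULL;
    if (prev != NULL) {
        l->weights = malloc (units * sizeof *l->weights);
        for (i = 0; i < units; i++)
            l->weights[i] = calloc (prev->units, sizeof *l->weights[i]);
    }
    return l;
}
```

Chaining three such layers gives the 2-4-1 topology used by the bpn.c example later in this README.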
Also, because we could not predict your comfort level in
using makefiles, the code for each network paradigm is
designed to be #included as part of the top level file,
instead of being compiled and linked separately.
Each simulator has been compiled and tested on the example
.dat files included with these code files. These example
files are relatively small and easy to understand. The
format for the .dat file is described in the file
"exemplar.c", and the code that reads these exemplars is
also contained in that file.
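As noted, the authoritative description of the .dat format, and the code that parses it, live in exemplar.c. As a rough sketch of what such a reader looks like, the function below assumes a plain whitespace-separated layout of input values followed by target values; that layout is an assumption for illustration, not the package's documented format.

```c
#include <stdio.h>

/* Read one exemplar: n_in input values followed by n_out target
   values, whitespace-separated (an assumed layout; see exemplar.c
   for the real format). Returns 1 on success, 0 at end of file. */
int read_exemplar (FILE *fp, double *in, int n_in,
                   double *out, int n_out)
{
    int i;
    for (i = 0; i < n_in; i++)
        if (fscanf (fp, "%lf", &in[i]) != 1) return 0;
    for (i = 0; i < n_out; i++)
        if (fscanf (fp, "%lf", &out[i]) != 1) return 0;
    return 1;
}
```

A training loop would call this repeatedly until it returns 0, presenting each input/target pair to the network in turn.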
Because we could not predict the specific example problems
that you might want to try, these simulators are designed
to create networks that are configured from commands that
are processed in the top level function call for the
paradigm. To use a network model to test a specific
application, modify the code in the main() function in the
paradigm file that you want to try. Define the size of the
network you will need, set the parameters for the network
appropriately, and tell the simulator where to look for the
training exemplars. After training completes, the
simulator always produces a network data file containing
the weight values for the connections in the network after
training. You can extend the function of these simulators
to allow pattern matching after training by using the
functions to create the network, then set the connection
weights using the "restore" function defined for each
network.
The code for the simulator is also designed to try to
simplify the process of developing a working network as
much as possible. Capitalized symbols in the code are
always shorthand notation to allow you to read and
understand the code, while also allowing the C compiler to
generate code as efficiently as possible. As an example,
consider the main() function in the "bpn.c" file, which is
shown below for clarification.
void main ()
{
    bpn *n;
    int *layers;

    layers = define_layers (3, 2, 4, 1);
    n = build_bpn (layers);
    connect (LAYER[0], LAYER[1], COMPLETE, RANDOM);
    connect (LAYER[1], LAYER[2], COMPLETE, RANDOM);
    set_parameters (LAYER[1], DOT_PRODUCT, SIGMOID, 1.0, 0.6, 0.3);
    set_parameters (LAYER[2], DOT_PRODUCT, SIGMOID, 1.0, 0.6, 0.3);
    train_net (n, "xor.dat", 0.001, 10000);
    free (layers);
    destroy_bpn (n);
}
This function says, in English:
Begin by creating a data structure to specify the
configuration of the network. In this case, we are creating
a 3 layer network, with 2 input units, 4 units on a single
hidden layer, and 1 output unit.
Build the neural network structure for this network, and
assign the network pointer to the variable "n".
Connect the input layer to the hidden layer, using random
weights to initialize the connections.
Connect the hidden layer to the output layer, using random
weights to initialize the connections.
Set the learning parameters for the hidden layer. In this
example, use the vector inner product function to compute
the input to the layer, use the sigmoid activation function
to produce the output from the layer, use 1.0 as the
activation modifier (tau) for the layer, 0.6 as the value
for the learning rate, and 0.3 as the momentum value.
Set the learning parameters for the output layer. Notice
that, for both layers, calling set_parameters not only
defines the activation function for the layer, but also
installs the correct derivative function for the
backpropagation computation (see activate.c and initnet.c
for more details).
Train the network, using exemplars contained in the file
"xor.dat". Continue training until the global error goes
below 0.001 or for 10,000 epochs, whichever occurs first.
Once done with training, free the data structures and
destroy the network. Notice that the network configuration
does not get saved unless the network converges to a
solution. You may want to change this functionality to
save partially trained networks.
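One way to make that change is to checkpoint the network every few epochs instead of saving only on convergence. The skeleton below is a sketch under stated assumptions: the error update is a dummy stand-in for a real training step, and save_weights() stands in for the simulators' own network-save routine.

```c
#include <stdio.h>

/* Write the weight array to a file, one value per line. This
   stands in for the simulators' own network-save routine. */
static void save_weights (const char *name, const double *w, int n)
{
    FILE *fp = fopen (name, "w");
    int i;
    if (fp == NULL) return;
    for (i = 0; i < n; i++)
        fprintf (fp, "%f\n", w[i]);
    fclose (fp);
}

/* Training skeleton that checkpoints every `interval` epochs, so a
   partially trained network survives an early stop. The error
   update is a dummy; a real simulator would run one training epoch
   here. Returns the number of epochs actually run. */
int train_with_checkpoints (double *w, int n, double tol,
                            int max_epochs, int interval)
{
    double error = 1.0;
    int epoch;

    for (epoch = 1; epoch <= max_epochs; epoch++) {
        error *= 0.9;                        /* dummy training step */
        if (epoch % interval == 0)
            save_weights ("partial.net", w, n);
        if (error < tol)
            break;
    }
    save_weights ("final.net", w, n);        /* always save at end  */
    return epoch;
}
```

If training is interrupted, "partial.net" holds the most recent checkpoint, which can be fed back through the restore path described earlier.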
Good luck, and may all of your networks converge.
David
***********************************************************