Artificial neurons (ANs) can be combined in many ways to compute more complex functions than a single AN can compute. The most fundamental way is to combine ANs into a layered, feedforward neural network (FFNN). Likewise, FFNNs can learn in many ways, but the most fundamental is supervised learning. Moreover, FFNNs may be used for many tasks, but the two most fundamental are classification and function approximation, of which classification is the easier to visualize. These ANN fundamentals — FFNNs, supervised learning, and classification — are the topics of this homework.
The goal of this assignment is to give you experience with supervised learning by backpropagation of error in FFNNs used for classification.
Consider a two-layer FFNN — that is, one with two layers of computational elements (ANs) — used for classification in a 2D space with augmented vectors. The ANs in this FFNN are all summation units (SUs) with sigmoidal activation functions, with steepness λ = 1 and learning rate η = 0.5. The target values for classes γ1 and γ2 are 0.9 and 0.1, respectively. There are three ANs in the hidden layer and one in the output layer, with weights v1,1 = −0.1, v2,1 = 0.2, v3,1 = −0.9, v1,2 = 0.6, v2,2 = 0.8, v3,2 = −0.1, v1,3 = −0.2, v2,3 = −0.2, v3,3 = −0.4, w1 = −0.2, w2 = 0.6, w3 = 0.0, and w4 = 0.1. (Note that this is the same network structure, with the same weights, as found in Homework 2.)
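To check your hand calculations, the forward pass and a single backpropagation update for this network can be sketched in code. This is a minimal sketch, not the assigned solution: it assumes v_{j,i} is the weight from input i to hidden unit j (with i = 3 and w4 the bias weights), that the augmented bias input is −1, and it uses a hypothetical input pattern (0.5, −0.3); your class conventions may differ.

```python
import numpy as np

def sigmoid(net, lam=1.0):
    # Sigmoidal activation with steepness lambda = 1
    return 1.0 / (1.0 + np.exp(-lam * net))

# Weights from the assignment: row j of V holds hidden unit j's weights
# for (z1, z2, bias); w holds (y1, y2, y3, bias) for the output unit.
V = np.array([[-0.1, 0.6, -0.2],
              [ 0.2, 0.8, -0.2],
              [-0.9, -0.1, -0.4]])
w = np.array([-0.2, 0.6, 0.0, 0.1])

eta = 0.5  # learning rate

def forward(z):
    # Augment the input with a bias term of -1 (a common convention;
    # the course may use +1 instead)
    z_aug = np.append(z, -1.0)
    y = sigmoid(V @ z_aug)            # hidden-layer outputs
    y_aug = np.append(y, -1.0)
    o = sigmoid(w @ y_aug)            # output-unit activation
    return z_aug, y_aug, o

def backprop_step(z, target):
    global V, w
    z_aug, y_aug, o = forward(z)
    # Output-unit error signal; sigmoid derivative is o(1 - o)
    delta_o = (target - o) * o * (1.0 - o)
    # Hidden-unit error signals (bias output has no incoming weights)
    y = y_aug[:-1]
    delta_h = delta_o * w[:-1] * y * (1.0 - y)
    # Gradient-descent weight updates
    w = w + eta * delta_o * y_aug
    V = V + eta * np.outer(delta_h, z_aug)
    return o

# Hypothetical pattern belonging to class gamma1 (target 0.9)
z = np.array([0.5, -0.3])
out_before = forward(z)[2]
backprop_step(z, 0.9)
out_after = forward(z)[2]
```

One update with η = 0.5 should move the output for this pattern closer to its target of 0.9, which is a quick sanity check on the sign of each delta term.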
Complete the following exercises:
Turn in a neatly handwritten copy of your answers to the exercises for this assignment. You may also turn in a scanned electronic copy of your answers as a backup in case your paper copy is misplaced.