# Neural Networks

This lecture roughly follows the beginning of Bishop Chapter 3.

# Linear Models: An Overview

## Why Linear Models?

We study linear models because:

• Linear methods should be a performance floor for other methods (since linear methods are so understandable, we should reject tricky nonlinear methods whose performance is no better than that of direct linear methods).
• Linear models give us insights into how nonlinear models work.

## Kinds of Learning in a Linear Model.

1. Supervised Learning:

For each input, the agent has a target output. The actual output is compared with the target output, and the agent is altered to bring them closer together.

2. Unsupervised Learning:

The job of this algorithm is to determine the structure of the inputs.

3. Reinforcement learning:

The world receives the action of the agent. Under certain circumstances, the agent receives a reward. The agent adjusts its action to maximize the reward.

# Some linear analysis in historical order.

## 11 Widrow-Hoff

### 11.1 Definitions

This is the simplest non-trivial model of a neuron.

• The $x_i$ are the inputs to the neuron.
• The $w_i$ are the weights assigned to the inputs.
• $\theta$ is the threshold of input needed to induce activity; the $w_i$ and the $\theta$ define the neuron.
• $y = \sum_i w_i x_i$ is the output determined by the inputs.

If we have several trials (indexed by $t$, which could be time, but almost never is in these cases), then there are input vectors $\mathbf{x}^t$ and outputs $y^t$, as well as desired outputs $d^t$.

Our Goal: Find an incremental way of finding the optimal $\mathbf{w}$, that is, the point in weight space minimizing the error functional:

$$E = \frac{1}{2}\,(d - y)^2$$

### 11.2 Method: Gradient Descent, or Widrow-Hoff LMS

Since $E$ depends on $y$, we treat $E$ as a function of $\mathbf{w}$, and seek out the $\mathbf{w}$ yielding minimal $E$. Since the error function is quadratic, following the gradient towards the single local minimum in sufficiently small steps will lead us to the minimum-error spot.

We let $w_i$ change by $\Delta w_i = -\eta\,\frac{\partial E}{\partial w_i}$, where $\eta$ is precisely described as "small".

The $i$th coordinate of $\nabla E$ for the $t$th trial is computed via:

$$\frac{\partial E}{\partial w_i} = -(d^t - y^t)\,x_i^t$$

So that the update rule is given by (1):

$$\Delta w_i = \eta\,(d^t - y^t)\,x_i^t \qquad (1)$$
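Update rule (1) can be sketched as a few lines of Python. This is a minimal illustration, not from the lecture: the toy data, the true weights, and the learning rate are all made-up choices, and the neuron is taken to be purely linear ($y = \mathbf{w}\cdot\mathbf{x}$).

```python
import numpy as np

# A minimal sketch of the Widrow-Hoff (LMS) update rule, assuming a purely
# linear neuron y = w . x and squared error E = (d - y)^2 / 2.
# The data, the "true" weights, and eta are illustrative choices.

rng = np.random.default_rng(0)
w_true = np.array([2.0, -1.0, 0.5])    # hypothetical target weights
X = rng.normal(size=(200, 3))          # one trial per row
d = X @ w_true                         # desired outputs

w = np.zeros(3)                        # initial weights
eta = 0.05                             # a "small" learning rate

for x_t, d_t in zip(X, d):
    y_t = w @ x_t                      # actual output for this trial
    w += eta * (d_t - y_t) * x_t       # update rule (1)

print(np.allclose(w, w_true, atol=1e-2))
```

After 200 incremental updates the weights have essentially converged to the target, because on noiseless data the quadratic error has a single minimum.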

### 11.3 Analysis

Whenever we analyze one of these methods, we are concerned with the following questions:

1. Does it converge?
2. How fast does it converge?
3. How do we set the learning parameter (in this case $\eta$)?

When we select $\eta$, we have to keep in mind that choosing an $\eta$ that is too small may cause slow convergence, while choosing an $\eta$ too large may cause us to skip over the minimum point (see figure 1.1.3).

We want to choose $\eta$ so that the successive approximations converge to some position in weight space.
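The effect of the learning rate can be seen numerically on a one-dimensional quadratic. This small experiment is not from the lecture; it assumes the error $E(w) = w^2/2$ (i.e. $x = 1$, $d = 0$), for which the gradient step contracts only when $0 < \eta < 2$.

```python
# Gradient descent on E(w) = w^2 / 2, whose gradient is simply w.
# The step w <- w - eta * w multiplies w by (1 - eta), so it shrinks
# toward the minimum at 0 only when |1 - eta| < 1.

def descend(eta, w=1.0, steps=50):
    for _ in range(steps):
        w -= eta * w          # gradient step
    return abs(w)

print(descend(0.01))   # too small: converging, but slowly
print(descend(0.5))    # well chosen: essentially at the minimum
print(descend(2.5))    # too large: overshoots and diverges
```

Too small an $\eta$ leaves us far from the minimum after many steps; too large an $\eta$ makes each step overshoot so badly that the error grows.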

If we teach the neuron with several trials, the total error is $E = \frac{1}{T}\sum_{t=1}^{T}\frac{1}{2}(d^t - y^t)^2$. This is the average of the errors.

It is important to include several trials, since, if $x_i = 0$ in some trial, $w_i$ doesn't matter for that trial, and we can't find a value for it from that trial alone.

Widrow-Hoff LMS was first used in adaptive radar beam-forming programs in the late '50s and early '60s.

## 12 Rosenblatt's Perceptron

### Description

The perceptron problem includes the following:

• The inputs $x_i$.
• The weights $w_i$.
• The threshold $\theta$; the $w_i$ and $\theta$ define the neuron.
• The weighted sum $\sum_i w_i x_i$.
• The output $y$, which is 1 if $\sum_i w_i x_i > \theta$ and 0 otherwise.
• The desired output $d \in \{0, 1\}$.

### 12.1 The Goal:

Given inputs $\mathbf{x}^t$ and desired outputs $d^t$ in $\{0, 1\}$, find $\mathbf{w}$ and $\theta$ such that $y^t = d^t$ for all $t$.

### 12.2 The Method:

The standard method for finding $\mathbf{w}$ and $\theta$ is called linear discrimination. Given $\mathbf{w}$ and $\theta$, the whole of input space is divided into regions that yield 0 output and regions that yield 1 output (see figure 1.2.2).

Given a putative $\mathbf{w}$ and $\theta$, we get that $y = 1$ whenever $\mathbf{w}\cdot\mathbf{x} > \theta$. The set $\mathbf{w}\cdot\mathbf{x} = \theta$ is a hyperplane in input space with dimension $n - 1$ and with normal $\mathbf{w}$. One side of this hyperplane consists of all $\mathbf{x}$ that will yield an on position, and the other side consists of all values that will yield an off position.

Since our trials tell us which $\mathbf{x}$ values should actually yield on and off, choosing the $\mathbf{w}$ and $\theta$ is a matter of choosing a hyperplane that puts the on and off dots on the correct sides (see figure 1.2.2).
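The decision rule above can be sketched directly. The weights, threshold, and test points here are illustrative, not from the lecture; the code just checks which side of the hyperplane $\mathbf{w}\cdot\mathbf{x} = \theta$ a point falls on.

```python
import numpy as np

# A sketch of the perceptron's decision rule: the hyperplane w . x = theta
# splits input space, with output 1 on the side where w . x > theta.
# The weights, threshold, and inputs below are illustrative choices.

w = np.array([1.0, 1.0])
theta = 1.0

def perceptron_output(x):
    return 1 if w @ x > theta else 0

print(perceptron_output(np.array([1.0, 1.0])))   # w . x = 2 > 1: on
print(perceptron_output(np.array([0.0, 0.5])))   # w . x = 0.5 < 1: off
```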

### 12.3 Specific Linear Programming problem.

For the sake of simplicity, let $\theta = 0$ for a moment. It is easy to take care of the $\theta$ afterwards.

Suppose that on some trial the input $\mathbf{x}$ has desired output 1, but $\mathbf{w}\cdot\mathbf{x} \le 0$. We want $\mathbf{w}$ to change thus:

$$\mathbf{w}' = \mathbf{w} + \eta\,\mathbf{x}$$

Let $\eta$ be set to be the smallest positive value where the output becomes correct.

We would like to guarantee that $\mathbf{w}'\cdot\mathbf{x} > 0$, so that

$$\mathbf{w}'\cdot\mathbf{x} = (\mathbf{w} + \eta\,\mathbf{x})\cdot\mathbf{x} = \mathbf{w}\cdot\mathbf{x} + \eta\,(\mathbf{x}\cdot\mathbf{x}) > 0.$$

This means that:

$$\eta > -\frac{\mathbf{w}\cdot\mathbf{x}}{\mathbf{x}\cdot\mathbf{x}}.$$

So that:

If we say $\eta = -\dfrac{\mathbf{w}\cdot\mathbf{x}}{\mathbf{x}\cdot\mathbf{x}} + \varepsilon$,

where $\varepsilon$ is just barely large enough to force the $\mathbf{w}\cdot\mathbf{x}$ to change sign. (Remember that $-\mathbf{w}\cdot\mathbf{x}/\mathbf{x}\cdot\mathbf{x}$ is the largest possible value of $\eta$ that would not change the sign.) The case of a trial with desired output 0 but $\mathbf{w}\cdot\mathbf{x} > 0$ is symmetric, with $\mathbf{w}' = \mathbf{w} - \eta\,\mathbf{x}$.
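The correction step can be sketched as follows, for the case $\theta = 0$ and desired output 1. This is an illustration under the stated assumptions; the particular $\mathbf{w}$, $\mathbf{x}$, and margin `eps` are made-up values.

```python
import numpy as np

# A sketch of the correction above (theta = 0, desired output 1): when a
# trial x gives w . x <= 0, add eta * x with eta just past -(w.x)/(x.x),
# the largest eta that leaves the sign unchanged. eps is an illustrative
# small margin playing the role of epsilon.

def correct(w, x, eps=1e-6):
    s = w @ x
    if s > 0:                      # already on the correct side
        return w
    eta = -s / (x @ x) + eps       # just large enough to flip the sign
    return w + eta * x

w = np.array([-1.0, 0.0])
x = np.array([1.0, 2.0])           # desired output 1, but w . x = -1 <= 0
w_new = correct(w, x)
print(w_new @ x > 0)               # the sign has flipped
```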

### 12.4 Evaluation and batch solution

We still want to know:

• Does this method converge?
• How fast?
• How do we set the learning parameters?

We here formulate the perceptron learning problem as a batch solution:

Given trials indexed (again) by $t$, we have inputs $\mathbf{x}^t$ and desired outputs $d^t$. For some $t$, $d^t = 1$, and for some, $d^t = 0$. In order for each of the trials to match the desired value, we need $\mathbf{w}\cdot\mathbf{x}^t > \theta$ whenever $d^t = 1$ and $\mathbf{w}\cdot\mathbf{x}^t \le \theta$ whenever $d^t = 0$, for all $t$. This is a linear programming problem. Every $\mathbf{x}^t$ defines a half-space in $(\mathbf{w}, \theta)$-coordinates, and the possible $\mathbf{w}$'s must lie in the intersection of all these half-spaces.
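The batch view can be made concrete by checking the half-space constraints directly. This sketch uses made-up trials and candidate solutions; a real batch solver would hand the same constraints to a linear-programming routine.

```python
import numpy as np

# A sketch of the batch formulation above: each trial contributes one
# half-space constraint on (w, theta), and a solution must satisfy all
# of them at once. The trials and candidates are illustrative choices.

X = np.array([[2.0, 2.0], [3.0, 1.0],    # trials with d = 1
              [0.0, 0.0], [1.0, 0.0]])   # trials with d = 0
d = np.array([1, 1, 0, 0])

def satisfies_all(w, theta):
    s = X @ w - theta                    # signed side of the hyperplane
    return bool(np.all((s > 0) == (d == 1)))

print(satisfies_all(np.array([1.0, 1.0]), 3.0))    # in the intersection
print(satisfies_all(np.array([1.0, -1.0]), 0.0))   # violates a constraint
```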


Ben Jones
2000-08-30