# CS 591.2: Topics in Learning and Adaptive Systems

Instructor: Barak Pearlmutter
<bap@cs.unm.edu>

Time: Tue/Thur 2:00 - 3ish (tentative)

Location: FEC 141

The first meeting will be on Thurs Jan 23. We will (briefly) discuss
people's interests, and then get started with an overview of the
field. Many people seem particularly interested in reinforcement
learning, so an emphasis on that is one strong possibility. There has
been a surprising amount of progress in that area in the last five
years.

The class is open to both graduate students and advanced
undergraduates. There are no formal prerequisites, aside from knowing
what a matrix is and how to take a derivative. A little bit of
"mathematical sophistication" wouldn't hurt, though.

Please send me e-mail if you are interested in attending.

### Capsule Description

We will cover topics in learning and adaptive systems, especially
neural networks and related areas. After a gentle introduction to the
history of the field (perceptrons, cybernetics, symbolic vs
connectionist approaches to AI) we will go into a little depth on
early work, through the backpropagation algorithm. We then develop the
modern probabilistic framework and move on to explore a few advanced
topics chosen, depending on student interest, from the following
smorgasbord: stochastic
optimization; the EM optimization algorithm and generalizations
thereof; reinforcement learning architectures; recent theoretical
results in reinforcement learning; hidden Markov models, the Kalman
filter, and the Condensation algorithm; unsupervised learning;
Helmholtz machines and their cousins; real applications and hairy
real-world issues; Bayesian networks; theories of generalization;
experimental methodologies for believable learning benchmarks;
relaxation networks; recurrent networks.

**If you are interested in making computers more adaptive but
don't know what most of the above terms mean, then this course is for
you!**

Throughout we will concentrate not on low-level mathematical details,
but on the underlying concepts and intuitions.

Students will be expected to do a class project. This would most
likely be implementing something covered in class. Another
possibility is an exploration of a body of literature. Team projects
will be encouraged. Joint projects for students also taking
*Advanced Topics in AI* (David Ackley, CS 538.1) can be
arranged.

### Syllabus

A syllabus will appear here at some point.
I've made some references available, organized
by topic. I will annotate them later. And integrate them with some
class notes. And tidy up all exposed surfaces in my office...

I'm also putting together a collection of code related to
algorithms and topics discussed in class. So far all that's there is
EM on a simple Gaussian mixture model.
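For a taste of what that code does, here is a minimal sketch of EM for a two-component one-dimensional Gaussian mixture. It is not the class code itself, just an illustration assuming NumPy; the names (`em_gmm`, `pi`, `mu`, `sigma`) and the crude min/max initialization are my own choices, not anything from the handout.

```python
import numpy as np

def em_gmm(x, n_iter=50):
    """Fit a two-component 1-D Gaussian mixture to data x by EM."""
    # Crude illustrative initialization: anchor the means at the extremes.
    mu = np.array([x.min(), x.max()], dtype=float)
    sigma = np.array([x.std(), x.std()]) + 1e-6
    pi = np.array([0.5, 0.5])          # mixing weights
    for _ in range(n_iter):
        # E-step: posterior responsibility of each component for each point.
        dens = (pi / (sigma * np.sqrt(2 * np.pi))
                * np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2))
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, and standard deviations.
        nk = resp.sum(axis=0)
        pi = nk / len(x)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        sigma = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk) + 1e-6
    return pi, mu, sigma

# Synthetic data from two well-separated Gaussians.
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2, 0.5, 200), rng.normal(3, 0.8, 200)])
pi, mu, sigma = em_gmm(x)
```

Each iteration provably does not decrease the data log-likelihood, which is the essential property of EM we will discuss in class.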

### Handouts

- Reinforcement learning online tutorial
- Intro to generalization and VC dimension (postscript: overheads, four-up)