The first meeting will be on Thursday, January 23. We will (briefly) discuss people's interests, and then get started with an overview of the field. Many people seem particularly interested in reinforcement learning, so an emphasis on that is one strong possibility. There has been a surprising amount of progress in that area in the last five years.
The class is open to both graduate students and advanced undergraduates. There are no formal prerequisites, aside from knowing what a matrix is and how to take a derivative. A little bit of "mathematical sophistication" wouldn't hurt though.
Please send me e-mail if you are interested in attending.
Depending on student interest, possibilities include: stochastic optimization; the EM optimization algorithm and generalizations thereof; reinforcement learning architectures; recent theoretical results in reinforcement learning; hidden Markov models, the Kalman filter, and the Condensation algorithm; unsupervised learning; Helmholtz machines and their cousins; real applications and hairy real-world issues; Bayesian networks; theories of generalization; experimental methodologies for believable learning benchmarks; relaxation networks; recurrent networks.
If you are interested in making computers more adaptive but don't know what most of the above terms mean, then this course is for you!
Throughout we will concentrate not on low-level mathematical details, but on the underlying concepts and intuitions.
Students will be expected to do a class project, most likely an implementation of something covered in class; an exploration of a body of literature is another possibility. Team projects will be encouraged. Joint projects for students also taking Advanced Topics in AI (David Ackley, CS 538.1) can be arranged.
I've made some references available, organized by topic. I will annotate them later. And integrate them with some class notes. And tidy up all exposed surfaces in my office...
I'm also putting together a collection of code related to algorithms and topics discussed in class. So far, all that's there is EM on a simple Gaussian mixture model.
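To give a flavor of what that code does, here is a minimal sketch of EM on a one-dimensional, two-component Gaussian mixture in plain Python. The function name `em_gmm_1d` and the deterministic min/max initialization are my own choices for illustration; the actual class code may look quite different.

```python
import math

def em_gmm_1d(data, n_iter=100):
    """EM for a two-component 1D Gaussian mixture.

    Returns (weights, means, variances) after n_iter EM iterations.
    """
    n = len(data)
    mean = sum(data) / n
    overall_var = sum((x - mean) ** 2 for x in data) / n
    # Initialize: equal weights, means at the data extremes, shared variance.
    w = [0.5, 0.5]
    mu = [min(data), max(data)]
    var = [overall_var, overall_var]

    def normal_pdf(x, m, v):
        return math.exp(-(x - m) ** 2 / (2 * v)) / math.sqrt(2 * math.pi * v)

    for _ in range(n_iter):
        # E-step: posterior responsibility of each component for each point.
        resp = []
        for x in data:
            p = [w[k] * normal_pdf(x, mu[k], var[k]) for k in range(2)]
            s = sum(p)
            resp.append([pk / s for pk in p])
        # M-step: re-estimate weights, means, variances from responsibilities.
        for k in range(2):
            nk = sum(r[k] for r in resp)
            w[k] = nk / n
            mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
            var[k] = sum(r[k] * (x - mu[k]) ** 2 for r, x in zip(resp, data)) / nk
            var[k] = max(var[k], 1e-6)  # guard against variance collapse
    return w, mu, var
```

For example, running it on ten points drawn from clusters near 0 and near 5 recovers means close to 0 and 5. Each iteration is guaranteed not to decrease the data likelihood, which is the basic property of EM we'll discuss in class.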
Intro to generalization and VC dimension (postscript; overheads; four-up).