"Boosting" is a process of improving the performance of a learning
algorithm by applying it repeatedly.  For instance, most learning
algorithms can be trivially modified to put "weights" on the
exemplars, and to attempt to reduce the weighted error on the
exemplars, instead of giving each exemplar equal weight.  If a
learning algorithm can be guaranteed to always do slightly better than
chance, we can "boost" its performance by applying it to the dataset,
marking the exemplars it got wrong by increasing their weights,
running the algorithm again, and repeating this process.  We
save all the classifiers created, and use them together ("voting") to
make a better classifier.  This can result in a shockingly good
improvement: a "weak" learner (which, given enough exemplars, almost
always does slightly better than chance) can be boosted into a
"strong" learner (which, given enough exemplars, almost always does
almost perfectly).
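The loop described above (reweight the mistakes, retrain, save every
classifier, vote) is essentially AdaBoost.  Here is a minimal sketch,
assuming decision "stumps" (single-threshold classifiers on
one-dimensional data) as the weak learner and +1/-1 labels; the
function names and the toy dataset are illustrative, not from any
particular library:

```python
import math

def train_stump(xs, ys, weights):
    """Pick the threshold/orientation stump with the lowest weighted error."""
    best = None
    for thresh in xs:
        for sign in (1, -1):
            # this stump predicts `sign` if x >= thresh, else -sign
            err = sum(w for x, y, w in zip(xs, ys, weights)
                      if (sign if x >= thresh else -sign) != y)
            if best is None or err < best[0]:
                best = (err, thresh, sign)
    return best

def adaboost(xs, ys, rounds):
    n = len(xs)
    weights = [1.0 / n] * n   # start by giving each exemplar equal weight
    ensemble = []             # save every classifier, with its vote weight
    for _ in range(rounds):
        err, thresh, sign = train_stump(xs, ys, weights)
        err = max(err, 1e-10)  # guard against log(0) when a stump is perfect
        # vote weight: larger when the weighted error is smaller
        alpha = 0.5 * math.log((1 - err) / err)
        ensemble.append((alpha, thresh, sign))
        # mark the exemplars this stump got wrong by increasing their weights
        weights = [w * math.exp(-alpha * y * (sign if x >= thresh else -sign))
                   for x, y, w in zip(xs, ys, weights)]
        z = sum(weights)
        weights = [w / z for w in weights]  # renormalize to a distribution
    return ensemble

def predict(ensemble, x):
    """Weighted vote of all the saved stumps."""
    vote = sum(alpha * (sign if x >= thresh else -sign)
               for alpha, thresh, sign in ensemble)
    return 1 if vote >= 0 else -1

# toy dataset that no single stump can classify perfectly
xs = [1, 2, 3, 4, 5, 6, 7, 8]
ys = [1, 1, 1, -1, -1, -1, 1, 1]
model = adaboost(xs, ys, rounds=10)
print([predict(model, x) for x in xs])
```

Each round multiplies the weight of a misclassified exemplar by
e^alpha and of a correct one by e^(-alpha), so the next stump is
forced to concentrate on the previous stumps' mistakes; the final
classifier is the alpha-weighted vote of the whole ensemble.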

See the Wikipedia article on AdaBoost for details.