
December 19, 2011

Back from NIPS 2011

Posted by David Corfield

Somehow 5 years have slipped by since my post Back from NIPS 2006. These NIPS (Neural Information Processing Systems) conferences bring together the machine learning community every December for a main conference in a city, followed by two days of workshops in a ski resort nearby. This time it was the turn of Granada and Sierra Nevada.

I could only manage time off for the workshops, where I was invited to participate in the Machine Learning and Philosophy session. The main thrust of my talk was to indicate that philosophy of science has shown that there’s much more to learning than what can be achieved by classification and regression algorithms. There’s a tendency to believe that results in formal learning theory have a great deal to say about learning in general. It seems to me rather like believing that theorems in proof theory are the final word on what is philosophically interesting about mathematics. I have always believed instead that concept formation in science and mathematics is of the utmost importance.

It’s intriguing to think of what had to be learned about how to learn in the passage to modern science. We had to learn: to trust instruments; to believe it possible to test theoretical constructs; that mathematics is an appropriate language for science; that experimentation involving the removal of the thing to be studied was worthwhile; not to expect the Bible to help us with regard to scientific truth, and so on.

There wasn’t time for me to attend much more, but something I did enjoy was Stéphane Mallat’s talk explaining mathematically why deep belief networks work. The deep belief network people managed to overcome the problem that, when you give a neural network many layers of neurons, older training techniques would only adjust the weights in a couple of layers. People like Geoffrey Hinton and Yann LeCun found ways around this limitation, but in doing so they devised a whole collection of tricks, and it is not at all clear why these work. Mallat showed (I think the nearest papers available are two with Bruna, Classification with Scattering Operators and Group Invariant Scattering) how the use of wavelet transforms on function spaces on Lie groups places their work in a clearer mathematical setting.
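
To give a rough sense of the construction Mallat describes, here is a minimal numerical sketch (Python/NumPy) of a one-dimensional scattering transform: a cascade of wavelet convolutions and modulus non-linearities followed by averaging, mimicking the layered structure of a deep network. The filter shapes, parameter values, and function names below are my own illustrative assumptions, not the precise construction of the papers cited above.

```python
# Minimal sketch of a 1-D scattering transform: wavelet convolution,
# modulus, and averaging, cascaded over two layers. Filters and
# parameters are illustrative assumptions only.

import numpy as np

def band_pass(n, centre_freq, bandwidth):
    """Morlet-like band-pass filter, defined directly in the frequency domain."""
    freqs = np.fft.fftfreq(n)
    return np.exp(-((freqs - centre_freq) ** 2) / (2 * bandwidth ** 2))

def low_pass(n, bandwidth=0.02):
    """Gaussian low-pass filter (the averaging window phi)."""
    freqs = np.fft.fftfreq(n)
    return np.exp(-(freqs ** 2) / (2 * bandwidth ** 2))

def conv(x, filt_hat):
    """Circular convolution implemented via the FFT."""
    return np.fft.ifft(np.fft.fft(x) * filt_hat)

def scattering(x, centre_freqs=(0.05, 0.1, 0.2, 0.4), bandwidth=0.03):
    """Return zeroth-, first- and second-order scattering coefficients."""
    n = len(x)
    phi = low_pass(n)
    psis = [band_pass(n, f, bandwidth) for f in centre_freqs]

    S0 = np.abs(conv(x, phi))                   # local average of the signal
    S1, S2 = [], []
    for i, psi1 in enumerate(psis):
        u1 = np.abs(conv(x, psi1))              # first layer: wavelet modulus
        S1.append(np.abs(conv(u1, phi)))
        for psi2 in psis[:i]:                   # second layer: only paths toward
            u2 = np.abs(conv(u1, psi2))         # lower frequencies carry energy
            S2.append(np.abs(conv(u2, phi)))
    return S0, S1, S2

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    signal = np.sin(2 * np.pi * 0.1 * np.arange(512)) + 0.1 * rng.standard_normal(512)
    S0, S1, S2 = scattering(signal)
    print(len(S1), "first-order and", len(S2), "second-order scattering paths")
```

The point of the sketch is that each “layer” is a fixed wavelet convolution followed by a modulus, rather than a learned weight matrix, which is what makes the stability and invariance properties of the cascade amenable to mathematical analysis.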

Perhaps we could see this work as the latest version of the program Helmholtz and Poincaré initiated, relating group theory to our visual sensation.

Posted at December 19, 2011 11:18 AM UTC

