On a Tuesday in April, Garrett Kenyon, a neuroscientist at the Santa Fe Institute and a researcher on the PetaVision project, visited the crowded Machine Perception and Cognitive Robotics (MPCR) lab at Florida Atlantic University’s Boca Raton campus.
That same day he gave a talk on campus titled “A Deconvolutional Competitive Algorithm (DCA) for Generating Deep Sparse Representations of Form, Depth and Motion,” but first he listened to students in the MPCR lab present the many research projects they are working on.
Asked by a student whether a psychology major was worth pursuing for someone interested in A.I., Kenyon replied, “I don’t know who will solve it, but if I had to guess, I’d say Google. But just follow your instincts to the field.”
One of the lab’s major projects, a rover programmed to learn via Q-learning, especially interested Kenyon. Paul Morris of the lab explained how the Q-learning routines and other programs were loaded onto the physical rover to test the code; the code under test at the time was learning the quickest way to find the pink-colored wall inside a box.
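The article doesn’t show the rover’s actual code, but the idea behind tabular Q-learning can be sketched in a few lines. The sketch below is purely illustrative and is not the MPCR lab’s implementation: it trains an agent on a hypothetical one-dimensional corridor, where moving right eventually reaches a goal state (standing in for the pink wall), and the Q-table is updated from the reward received at each step.

```python
import random

def train_q_table(n_states=6, goal=5, episodes=500,
                  alpha=0.5, gamma=0.9, epsilon=0.1, seed=0):
    """Tabular Q-learning on a toy 1-D corridor.

    The agent starts at state 0 and can move left (action 0) or
    right (action 1); reaching `goal` ends the episode with reward 1,
    every other step costs a small penalty.
    """
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(n_states)]  # Q[state][action]
    for _ in range(episodes):
        s = 0
        while s != goal:
            # Epsilon-greedy action selection: mostly exploit, sometimes explore.
            if rng.random() < epsilon:
                a = rng.randrange(2)
            else:
                a = max((0, 1), key=lambda act: q[s][act])
            s2 = max(0, s - 1) if a == 0 else min(n_states - 1, s + 1)
            r = 1.0 if s2 == goal else -0.01
            # Standard Q-learning update toward reward plus discounted best future value.
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s = s2
    return q

if __name__ == "__main__":
    q = train_q_table()
    # The greedy policy should move right in every non-goal state.
    policy = [max((0, 1), key=lambda a: q[s][a]) for s in range(5)]
    print(policy)
```

The rover’s real problem is harder (camera input, a 2-D arena), but the update rule is the same: act, observe the reward, and nudge the table entry for that state-action pair toward the observed return.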
Morris also cited Google DeepMind’s ability to play video games as an aspiration, but Kenyon disagreed: “Look skeptically; all the screen snapshots from games like Pong don’t teach the real world.” He went on to explain that programs that learn from many static photos cannot actually learn a real environment; such programs need to learn from video.
With that, Kenyon left to give his talk at the Engineering building, shaking hands with the many students who had come to see him.