There are many teams and subsections within the MPCR lab. Although every group focuses on machine learning and artificial intelligence, each pursues those topics with its own approach and emphasis. If you see a group you’re interested in, feel free to contact the team leader for more information, or stop by a team meeting and see for yourself.
In the lab, we work on an open-source platform for autonomous vehicle development that can be used to train vehicles to navigate using the most advanced machine learning techniques. The platform is designed around interchangeable models, allowing researchers to plug in their own algorithms to test theories of artificial intelligence, machine learning, and visual perception.
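The "interchangeable models" idea can be sketched as a small plug-in interface: any policy that implements a common prediction method can be dropped into the vehicle loop. This is a hypothetical illustration, not the platform's actual API; the names `SteeringModel`, `ConstantModel`, and `drive_step` are ours.

```python
# Hypothetical sketch of an interchangeable-model interface for a
# driving platform. Any SteeringModel subclass can be swapped in.
from abc import ABC, abstractmethod


class SteeringModel(ABC):
    @abstractmethod
    def predict(self, camera_frame):
        """Return a steering angle in degrees for one camera frame."""


class ConstantModel(SteeringModel):
    """Trivial baseline policy: always drive straight."""

    def predict(self, camera_frame):
        return 0.0


def drive_step(model, camera_frame):
    # The vehicle loop depends only on the SteeringModel interface,
    # so researchers can substitute their own learned policies.
    angle = model.predict(camera_frame)
    return max(-30.0, min(30.0, angle))  # clamp to a safe range
```

A learned policy would subclass `SteeringModel` the same way, keeping the rest of the vehicle loop unchanged.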
Team Leaders: Alex Clark and Anthony Sanchez
Meeting times: Monday 11am, TBD
The BCI section of the MPCR lab works with EEG recordings of the electrical patterns of brain activity produced while subjects are presented with specific stimuli and/or performing specific tasks. These sessions are run with the actiCHamp multi-channel amplifier, which integrates with MATLAB software to visualize human brain activity and translate it into working code. Once the required data is collected, the team will work with the lab's neural network developers to build a system capable of identifying the originally presented stimulus from the brain-wave data. The BCI team also intends to use this EEG/ANN integration to develop a BCI that lets a user remotely control a rover or drone by thought alone. These experiments will support the development and testing of theories about machine learning and about human neurological wave patterns and their meanings.
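The "identify the stimulus from the brain-wave data" step can be illustrated with a minimal sketch, assuming nothing about the team's actual pipeline: summarize each recorded epoch as per-channel signal power, then label new epochs by the nearest class centroid in feature space. A real system would use richer features and a trained neural network.

```python
# Illustrative EEG epoch classifier (not the team's actual pipeline):
# per-channel mean power features + nearest-centroid labeling.
import numpy as np


def epoch_features(epoch):
    """epoch: (channels, samples) array -> per-channel mean power."""
    return (epoch ** 2).mean(axis=1)


def fit_centroids(epochs, labels):
    """Average the feature vectors of each class into a centroid."""
    feats = np.array([epoch_features(e) for e in epochs])
    labels = np.array(labels)
    return {lab: feats[labels == lab].mean(axis=0) for lab in set(labels)}


def classify(epoch, centroids):
    """Label an epoch by its nearest class centroid."""
    f = epoch_features(epoch)
    return min(centroids, key=lambda lab: np.linalg.norm(f - centroids[lab]))
```

In practice the features would come from band-power or time-frequency analysis, and the nearest-centroid rule would be replaced by the lab's neural network models.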
Team Leader: Regynald Augustin
Meeting times: Wednesday and Fridays 4-6pm
Location: Behavioral Sciences 405, Florida Atlantic University
Idle is creating an autonomous driving system that can be retrofitted onto golf carts, trolleys, and other low-speed vehicles used for casual, short-distance, and off-road transportation.
Team Leader: Evan Clark
Meeting times: Tuesdays and Thursdays at 2:00pm in BS 405
Our team is dedicated to developing cutting-edge toolsets to solve complex biological problems. We aim to use the power and flexibility of neural networks to mine thousands of biological datasets and identify unique genomic signatures across hundreds of samples. Our current focus is a pipeline that identifies gene expression signatures within different kidney cancer cell populations, with the goal of an analysis toolset that rapidly identifies cancer subtypes from a single test. The team's long-term goal is to develop novel, patentable diagnostics for use in medicine and biomedical research.
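As a hedged sketch of the signature-finding step (the real pipeline is more elaborate, and the function name is ours): given an expression matrix of genes by samples and a two-group labeling, one can rank genes by a simple effect-size score and keep the top scorers as a candidate signature.

```python
# Toy gene-signature ranking: score each gene by how strongly its
# expression separates two sample groups, then return the top genes.
import numpy as np


def signature_genes(expr, is_tumor, top_k=2):
    """expr: (genes, samples) array; is_tumor: boolean mask over samples.
    Returns indices of the top_k genes separating the two groups."""
    tumor = expr[:, is_tumor]
    normal = expr[:, ~is_tumor]
    # Effect size: difference of group means scaled by pooled spread.
    score = np.abs(tumor.mean(axis=1) - normal.mean(axis=1)) / (
        tumor.std(axis=1) + normal.std(axis=1) + 1e-9)
    return np.argsort(score)[::-1][:top_k]
```

A production pipeline would add normalization, multiple-testing control, and validation across cohorts; this only shows the shape of the computation.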
Team Leader: Michael Keller
The 3D Kinect camera is a revolutionary device used for body tracking, gaming, and other applications. This team's current goal is to use the camera to track the body's movements and discourage bad habits. For example, suppose someone watching a movie starts to slouch. Our goal is to build an application, the Good Posture Application (GPA), that interfaces the Kinect with the volume control of the device the user is watching and lowers the volume to encourage them to sit up straight. The GPA can currently track several people at a time, detect their faces, and display the data on screen. Facial recognition is our next long-term feature; it will let us save each user's posture data so they can see their progress over time. We will also soon add sound feedback as a good first step toward linking into the system's volume control. With these changes implemented, we hope to release the GPA for wide use. This project is very useful for those who want to eliminate subconscious physical habits that may be detrimental to their health.
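The slouch-then-lower-the-volume loop can be sketched in a few lines. This is a toy illustration, not the GPA's actual code: the keypoint names, the upright-offset constant, and the threshold are all assumptions, and y grows downward as in Kinect image coordinates.

```python
# Hypothetical slouch check: compare the head keypoint's height to the
# shoulder midpoint, then scale volume down once slouching passes a
# threshold. Names and constants are illustrative, not the GPA's.
def slouch_score(head_y, left_shoulder_y, right_shoulder_y):
    """Positive score means the head has dropped below its assumed
    upright baseline (screen y grows downward)."""
    shoulder_mid_y = (left_shoulder_y + right_shoulder_y) / 2.0
    return head_y - (shoulder_mid_y - 0.25)  # 0.25 = assumed upright offset


def adjusted_volume(volume, score, threshold=0.05):
    """Lower the volume proportionally once the slouch score exceeds
    the threshold; leave it alone otherwise."""
    if score <= threshold:
        return volume
    return max(0.0, volume * (1.0 - min(score, 1.0)))
```

In the real application these inputs would come from the Kinect skeleton stream, and the output would drive the host system's volume control.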
Meeting times: Fridays at 4:15 PM
Team Leader: Michael Kleiman
Meeting time: Tuesdays at 1:30 pm in BS 404
Eye Tracking is used across many fields and disciplines, including marketing, education, gaming, and clinical psychology. Our goal is to use eye movement data in conjunction with deep neural networks to predict anything from one’s personality type to chess proficiency to likelihood of developing a clinical disorder. Join us if you would like to participate in the cutting edge of psychology research!
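Before eye-movement data can feed a predictive model, raw gaze samples are typically segmented into fixations. A common approach is dispersion-based fixation detection (the I-DT idea): windows of samples whose combined x/y spread stays under a threshold are grouped into fixations. The sketch below is a minimal version of that algorithm, with illustrative threshold values.

```python
# Minimal dispersion-based (I-DT style) fixation detection over a list
# of (x, y) gaze samples. Thresholds here are illustrative.
def _dispersion(window):
    """Spread of a window of (x, y) points: x-range plus y-range."""
    xs, ys = zip(*window)
    return (max(xs) - min(xs)) + (max(ys) - min(ys))


def detect_fixations(points, max_dispersion=0.05, min_samples=3):
    """Return (start, end) index ranges (end exclusive) of fixations."""
    fixations, start = [], 0
    while start < len(points):
        end = start + min_samples
        if end > len(points):
            break
        if _dispersion(points[start:end]) <= max_dispersion:
            # Grow the window while the samples stay tightly clustered.
            while (end < len(points)
                   and _dispersion(points[start:end + 1]) <= max_dispersion):
                end += 1
            fixations.append((start, end))
            start = end
        else:
            start += 1
    return fixations
```

Fixation counts, durations, and locations derived this way are typical inputs to the kinds of deep networks described above.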
Team Leader: Emily Stark
Meeting Times: Mondays from 9:30 – 11:00am
Olfaction, as a sense, is a largely unexplored field, yet it seems an obvious application for deep learning. When a human being senses a smell, a gas has passed across their olfactory receptors, been converted into a signal traveling to the brain, and is perceived as a specific scent. Robotic olfaction is currently being researched across the globe for a variety of applications; we are focusing on two main branches: olfaction as a perception and olfaction for disease-management intervention technology. While gas sensors and “e-noses” have been around for decades, this project pairs the artificial nose with the artificial brain.
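The nose-plus-brain pairing can be illustrated with a toy example: an e-nose yields a vector of gas-sensor responses, and a model maps that vector to a scent label. Here the "brain" is just cosine similarity against reference fingerprints; the function names and fingerprints are invented, and the lab's deep learning models would replace the matching step.

```python
# Toy e-nose matcher: label a sensor-response vector by its most
# similar reference fingerprint (cosine similarity). Illustrative only.
import math


def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)


def identify_scent(reading, references):
    """references: dict mapping scent name -> sensor fingerprint."""
    return max(references, key=lambda name: cosine(reading, references[name]))
```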
Team Leader: Rachel St. Clair
DeepBind is a computational toolbox for exploring novel applications of machine learning networks in proteomic molecular binding. To discover the unknown functions of various proteins, a model capable of tackling big datasets by sorting through amino-acid sequences is essential. Deep learning neural networks are breaking their way into the language of life: the genetic code. Applications in base prediction, feature detection, and classification inference are bountiful in proteomic investigations. This versatile model examines protein composition to predict function and elucidate unknown features of primary protein structure. Our final goal is the prediction of molecular binding (i.e., antibody–antigen binding), a computational task relevant to many fields of scientific research and application.
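A standard first step for sequence models of this kind is to one-hot encode each amino-acid sequence so a neural network can consume it. The 20-letter residue alphabet below is standard; the function name is ours, and a full pipeline would add padding and handling for nonstandard residues.

```python
# One-hot encoding of a protein sequence over the 20 standard
# amino acids: each residue becomes a 20-element 0/1 row.
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"  # 20 standard residues


def one_hot(seq):
    """Encode a protein string as a (len(seq), 20) list of 0/1 rows."""
    index = {aa: i for i, aa in enumerate(AMINO_ACIDS)}
    rows = []
    for aa in seq:
        row = [0] * len(AMINO_ACIDS)
        row[index[aa]] = 1
        rows.append(row)
    return rows
```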
Team Leader: Rachel St. Clair
CyclePsych is a patient-priming platform for modeling, in augmented/virtual reality (XR) systems, the visual and auditory effects described in recent psychonautic research. Combining multiple mechanisms of consciousness-state alteration will provide the basis for an AR program that could prepare patients for psychedelic therapy. Deep learning networks are incorporated into the Magic Leap One augmented reality environment to manipulate visual effects. Ultimately, we will draw upon multiple fields of brain science to create a program that may have utility in psychedelic therapy, priming patients to receive the maximum benefit of such therapy.
Copyright © 2019 | Machine Perception & Cognitive Robotics Laboratory - Center for Complex Systems and Brain Sciences - Florida Atlantic University