Season 4 of our Theory and Practice podcast investigates the powerful new world of AI applications and what it means to be human in the age of human-like artificial intelligence. In Episode 6, we’re joined by James DiCarlo, the Peter de Florez Professor of Neuroscience at MIT and Director of the MIT Quest for Intelligence. Trained in biomedical engineering and medicine, Professor DiCarlo brings a technical mindset to understanding the machine-like processes in our brains, particularly the machinery that enables us to see.
“Anything that our brain is achieving is because there's some kind of machine running in there,” he says. “That means there is some machine that could emulate what we do. Our job is to figure out the details of that machine.”
In our conversation, Professor DiCarlo explores how well convolutional neural networks (CNNs) mimic the human brain. These networks excel at finding patterns in images to recognize objects. But human vision is not purely feedforward: the visual system not only passes information to other areas of the brain; it also receives feedback from them. We can see something that triggers a feeling, then look again to test that feeling.
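The "pattern finding" that CNNs excel at comes down to one operation repeated at every layer: sliding a small filter over an image and recording where it matches. Below is a minimal, pure-Python sketch of that feedforward step; the toy 4x4 image and vertical-edge filter are illustrative examples, not from the episode.

```python
# A minimal sketch of the feedforward "pattern finding" step in a CNN:
# slide a small filter (kernel) across an image and record where it matches.
# Pure Python, no ML libraries; toy data for illustration only.

def convolve2d(image, kernel):
    """Valid-mode 2D cross-correlation: the core CNN layer operation."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    out = []
    for i in range(out_h):
        row = []
        for j in range(out_w):
            # Dot product of the filter with the image patch at (i, j).
            s = sum(
                image[i + di][j + dj] * kernel[di][dj]
                for di in range(kh)
                for dj in range(kw)
            )
            row.append(s)
        out.append(row)
    return out

# Toy image: dark left half, bright right half (a vertical edge).
image = [
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
]

# Vertical-edge filter: responds where brightness jumps left-to-right.
kernel = [
    [-1, 1],
    [-1, 1],
]

response = convolve2d(image, kernel)
print(response[0])  # prints [0, 2, 0]: the peak sits right on the edge
```

A real CNN stacks many such filters, learns their values from data, and interleaves them with nonlinearities and pooling; crucially, the information flows in one direction only, which is where the analogy with the brain's feedback-laden visual system breaks down.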
Professor DiCarlo argues that CNNs help him and his team understand how our brains gather vast amounts of information from a millisecond-scale glimpse and a tiny vantage point: just the central 10 degrees of our field of vision. As he says, “We are on a path to discovering the machine that's running inside our brain to support our visual behavior.”
Alex and Anthony also discuss potential clinical applications for machine learning, from using an ECG to estimate biological age to assessing cardiovascular health from retinal images.