Published 08/09/23

Dr. Claire Cui, Google Fellow at Google DeepMind

In the last year, chatbots powered by large language models (LLMs) have taken the world by storm, using those models to predict text and track context. The Theory & Practice podcast explores how machine learning got here and what comes next for LLMs.

In the second episode of Theory & Practice Season 4, our guest is Dr. Claire Cui, a Google Fellow at Google DeepMind. We discuss the underlying architecture of LLMs, how self-supervised algorithms work, and the developments that have driven these innovations to date. We also revisit the concept of groundedness that we discussed with Google Health’s Dr. Greg Corrado in episode one, and we delve into ideas from her paper “Mind’s Eye,” which examines the importance of physical simulations for grounded language model reasoning.

How do we empower the next generation of LLMs with stronger deductive reasoning and greater efficiency? We explore a future where introspection is added to LLMs, and Dr. Cui gives broader context to our current thinking about AI’s vast potential.

We also learn about modular neural nets and how they mimic the human brain, with different areas primed for specific tasks. And she tells us about multimodal systems that need to gain introspection to cope with uncertainty, signal when they are not confident in their knowledge, and understand when they are being creative, just as most humans can.

How can human creativity and discernment help these systems become more intelligent, and how can we learn to use these powerful tools responsibly? Tune in to this fascinating conversation with Dr. Cui on Apple Podcasts, Google Podcasts, or Spotify to find out.