Leila Wehbe: What Harry Potter and Recurrent Neural Networks Tell Us About the Brain’s Inner Narrative

Why certain aspects of neuroscience are contingent on the development of AI technologies, according to researcher Leila Wehbe.

Leila Wehbe is a postdoctoral researcher within the Helen Wills Neuroscience Institute at Berkeley, where she uses fMRI and MEG techniques to study how the brain represents the meaning of words, sentences, and stories.

Key Takeaways

  • The AI community needs to develop richer, more representative language models to enable researchers to conduct more detailed studies of the brain.

  • Neural network tools for language and image understanding can help researchers perform experiments about more abstract concepts, like the experience of reading books.

  • A deeper neuroscientific understanding of how the brain represents language and simulates ideas would, in turn, help the AI community build more powerful language models.

Jack: How has your study of the brain challenged your own assumptions about language?

Leila: The biggest assumption I had from reading the literature before starting this investigation was that language is confined to the left hemisphere and certain parts of the temporal cortex. Once we started studying language in a more naturalistic setting, such as having subjects read entire stories, we found that the regions involved in language processing are not only larger than previously thought but also largely bilateral: they span ...

Get Artificial Intelligence: Teaching Machines to Think Like People now with the O’Reilly learning platform.
