The large language models (LLMs) that power today's chatbots have become so astonishingly capable that AI researchers are hard pressed to assess those capabilities: it seems that no sooner is a new test devised than the AI systems ace it. But what does that performance really mean? Do these models genuinely understand our world? Or are they merely a triumph of data and computation that simulates true understanding?
To hash out these questions, IEEE Spectrum partnered with the Computer History Museum in Mountain View, Calif., to bring two opinionated experts to the stage. I moderated the event, which took place on 25 March. It was a fiery (but respectful) debate, well worth watching in full.
Emily M. Bender is a University of Washington professor and director of its computational linguistics laboratory, and she has emerged over the past decade as one of the fiercest critics of today's leading AI companies and their approach to AI. She is also known as one of the coauthors of the seminal 2021 paper "On the Dangers of Stochastic Parrots," which laid out the possible risks of LLMs (and led Google to fire coauthor Timnit Gebru). Bender, unsurprisingly, took the "no" position.
Taking the "yes" position was Sébastien Bubeck, who recently moved to OpenAI from Microsoft, where he was VP of AI. During his time at Microsoft he coauthored the influential preprint "Sparks of Artificial General Intelligence," which described his early experiments with OpenAI's GPT-4 while it was still under development. In that paper, he described advances over prior LLMs that made him feel the model had reached a new level of comprehension.
Without further ado, we bring you the matchup that I call "Parrots vs. Sparks."