with @withfries2 @smc90 What's real, what's hype when it comes to all the recent buzz around the language model GPT-3? What is "it", how does it work, where does it fit into the arc of broader tech trends (natural language, neural networks, deep learning approaches, more) -- and are we really getting closer to artificial general intelligence?
In this special "2x" explainer episode of 16 Minutes -- where we talk about what's in the news, and where we are on the long arc of various tech trends -- we cover all the buzz around GPT-3, the pre-trained machine learning model that's optimized to do a variety of natural-language processing tasks. The paper about GPT-3 was released in late May, but OpenAI (the AI "research and deployment" company behind it) only recently released private access to its API or application programming interface, which includes some of the technical achievements behind GPT-3 as well as other models.
It's a commercial product, built on research; so what does this mean for both startups AND incumbents... and the future of "AI as a service"? And given that we're seeing all kinds of (cherrypicked!) examples of output from OpenAI's beta API being shared -- from articles, press releases, screenplays, and Shakespearean poetry to business advice, "ask me anything" search, webpage design, plug-ins that turn words into code, and even some arithmetic -- how do we know how good it really is or isn't? And when we see things like founding principles for a new religion or other experiments shared virally (like "TikTok videos for nerds"), how do we know the difference between something that "looks like" a toy and something that "is" a toy (especially given that many innovations may start out that way)?
And finally, where are we, really, in terms of natural language processing and progress toward artificial general intelligence? Is it intelligent, does that matter, and how do we know (if not with a Turing Test)? And what are the broader questions, considerations, and implications for jobs and more? Frank Chen (who's shared a primer on AI/machine learning/deep learning, as well as resources for getting started in building products with AI inside, and more) explains what "it" actually is and isn't, and where it fits in the taxonomy of neural networks, deep learning approaches, and more, in conversation with host Sonal Chokshi. And the two help tease apart what's hype and what's real here... as is the theme of this show.
image source: Gwern.net