In early 2024, the notion of high-fidelity, believable AI-generated video seemed a distant prospect to many. Yet, just a few weeks into the year, OpenAI unveiled Sora, its new state-of-the-art text-to-video model, capable of producing videos up to 60 seconds long. The output shattered expectations, even among other builders and researchers in generative AI, sparking widespread speculation and awe.
How does Sora achieve such realism? And are explicit 3D modeling techniques or game engines at play?
In this episode of the a16z Podcast, a16z General Partner Anjney Midha connects with Stefano Ermon, Professor of Computer Science at Stanford and a key figure at the lab behind the diffusion models now used in Sora, ChatGPT, and Midjourney. Together, they delve into the challenges of video generation, the cutting-edge mechanics of Sora, and what it all could mean for the road ahead.
Resources:
Find Stefano on Twitter: https://twitter.com/stefanoermon
Find Anjney on Twitter: https://twitter.com/anjneymidha
Learn more about Stefano’s Deep Generative Models course:
https://deepgenerativemodels.github.io
Stay Updated:
Find a16z on Twitter: https://twitter.com/a16z
Find a16z on LinkedIn: https://www.linkedin.com/company/a16z
Subscribe on your favorite podcast app: https://a16z.simplecast.com/
Follow our host: https://twitter.com/stephsmithio
Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures.