The AI Podcast
About The AI Podcast
Generative AI and large language models (LLMs) are stirring change across industries — but according to NVIDIA Senior Product Manager of Developer Marketing Annamalai Chockalingam, “we’re still in the early innings.” In the latest episode of NVIDIA’s AI Podcast, host Noah Kravitz spoke with Chockalingam about LLMs: what they are, their current state and their future potential. LLMs are a “subset of the larger generative AI movement” that deals with language. They’re deep learning algorithms that can recognize, summarize, translate, predict and generate language. AI has been around for a while, but according to Chockalingam, three key factors enabled LLMs. One is the availability of large-scale data sets to train models with. As more people used the internet, more data became available for use. The second is the development of computer infrastructure, which has become advanced enough to handle “mountains of data” in a “reasonable timeframe.” And the third is advancements in AI algorithms, allowing for non-sequential or parallel processing of large data pools. LLMs can do five things with language: generate, summarize, translate, instruct or chat. With a combination of “these modalities and actions, you can build applications” to solve any problem, Chockalingam said. Enterprises are tapping LLMs to “drive innovation,” “develop new customer experiences,” and gain a “competitive advantage.” They’re also exploring what safe deployment of those models looks like, aiming to achieve responsible development, trustworthiness and repeatability. New techniques like retrieval augmented generation (RAG) could boost LLM development. RAG involves feeding models with up-to-date “data sources or third-party APIs” to achieve “more appropriate responses” — granting them current context so that they can “generate better” answers. 
Chockalingam encourages those interested in LLMs to “get your hands dirty and get started” — whether that means using popular applications like ChatGPT or playing with pretrained models in the NVIDIA NGC catalog. NVIDIA offers a full-stack computing platform for developers and enterprises experimenting with LLMs, with an ecosystem of over 4 million developers and 1,600 generative AI organizations. To learn more, register for LLM Developer Day on Nov. 17 to hear from NVIDIA experts about how best to develop applications.
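The retrieval-augmented generation workflow described above (fetch relevant, up-to-date context, then let the model generate against it) can be sketched in a few lines. This is a toy keyword-overlap retriever with an invented document store and prompt template, offered only as an illustration of the idea; a production RAG system would use vector embeddings, a real data source or API, and an actual LLM for the generation step.

```python
# Toy sketch of the retrieval step in retrieval-augmented generation (RAG).
# The documents, scoring and prompt template below are illustrative assumptions.

def tokenize(text):
    """Lowercase the text and split it into a set of word tokens."""
    return set(text.lower().split())

def retrieve(query, documents):
    """Return the document sharing the most word tokens with the query."""
    query_tokens = tokenize(query)
    return max(documents, key=lambda doc: len(query_tokens & tokenize(doc)))

def build_prompt(query, documents):
    """Prepend the retrieved context so the model can generate a grounded answer."""
    context = retrieve(query, documents)
    return f"Context: {context}\nQuestion: {query}\nAnswer:"

# Hypothetical "up-to-date data source" standing in for a vector database or API.
docs = [
    "The NGC catalog hosts pretrained models for many domains.",
    "LLM Developer Day takes place on Nov. 17.",
]
prompt = build_prompt("When is LLM Developer Day?", docs)
```

In a real pipeline, `prompt` would be sent to an LLM; the point of the sketch is that the model sees current context it was never trained on.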
Talk about going after low-hanging fruit. Afresh is an AI startup that helps grocery stores and retailers reduce food waste by making supply chains more efficient. In the latest episode of NVIDIA’s AI Podcast, host Noah Kravitz spoke with the company’s cofounder and president, Nathan Fenner, about its mission, offerings and the greater challenge of eliminating food waste. Most supply chain and inventory management offerings targeting grocers and retailers are outdated. Fenner and his team noticed that those solutions, built for the nonperishable side of the business, didn’t work as well on the fresh side, creating enormous amounts of food waste and causing billions in lost profits. The team first sought to solve the store-replenishment challenge by developing a platform that helps grocers decide how much fresh produce to order to optimize costs while meeting demand. They created machine learning and AI models that can effectively use the data generated by fresh produce, which is messier than data from nonperishable goods because of factors like decay time, greater demand fluctuation and the lack of barcodes, which leads to incorrect scans at self-checkout registers. The result was a fully integrated, machine learning-based platform that helps grocers make informed decisions at each node of the operations process. The company also recently launched inventory management software that lets grocers save time and increase data accuracy by intelligently tracking inventory. That information can be fed back into the platform’s ordering solution, further refining the accuracy of inventory data. It’s all part of Afresh’s greater mission to tackle climate change. “The most impactful thing we can do is reduce food waste to mitigate climate change,” Fenner said. “It’s really one of the key things that brought me into the business: I think I’ve always had a keen eye to work in the climate space. It’s really motivating for a lot of our team, and it’s a key part of our mission.”
Clinician-led healthcare AI company Harrison.ai has built an AI system that serves as a “spell checker” for radiologists, flagging critical findings to improve the speed and accuracy of radiology image analysis and reduce misdiagnoses. In the latest episode of NVIDIA’s AI Podcast, host Noah Kravitz spoke with Harrison.ai CEO and cofounder Aengus Tran about the company’s mission to scale global healthcare capacity with autonomous AI systems. Harrison.ai’s initial product, annalise.ai, is an AI tool that automates radiology image analysis to enable faster, more accurate diagnoses. It can produce 124-130 different possible diagnoses and flag key findings to aid radiologists in their final diagnosis. Currently, annalise.ai works for chest X-rays and brain CT scans. While an AI designed for categorizing traffic lights, for example, doesn’t need perfection, medical tools must be highly accurate, as any oversight could be fatal. To overcome this challenge, annalise.ai was trained on millions of meticulously annotated images, some annotated three to five times over before being used for training. Harrison.ai is also developing Franklin.ai, a sibling AI tool aimed at accelerating and improving the accuracy of histopathology diagnosis, in which a clinician performs a biopsy and inspects the tissue for the presence of cancerous cells. Like annalise.ai, Franklin.ai flags critical findings to help pathologists deliver faster, more accurate diagnoses. Ethical concerns about AI use are ever-rising, but for Tran, the question is less whether it’s ethical to use AI for medical diagnosis than “actually the converse: Is it ethical to not use AI for medical diagnosis,” especially if “humans using those AI systems simply pick up more misdiagnosis, pick up more cancer and conditions?” Tran also talked about the future of AI systems, suggesting a dual focus: first improve preexisting systems, then pursue new cutting-edge solutions.
And for those looking to break into careers in AI and healthcare, Tran says that the “first step is to decide upfront what problems you’re willing to spend a huge part of your time solving first, before the AI part,” emphasizing that the “first thing is actually to fall in love with some problem.”
Artificial intelligence is now a household term. Responsible AI is hot on its heels. Julia Stoyanovich, associate professor of computer science and engineering at NYU and director of the university’s Center for Responsible AI, wants to make the terms “AI” and “responsible AI” synonymous. In the latest episode of the NVIDIA AI Podcast, host Noah Kravitz spoke with Stoyanovich about responsible AI, her advocacy efforts and how people can help.
Generative AI-based models can not only learn and understand natural languages — they can learn the very language of nature itself, presenting new possibilities for scientific research. Anima Anandkumar, Bren Professor at Caltech and senior director of AI research at NVIDIA, was recently invited to speak at the President’s Council of Advisors on Science and Technology. At the talk, Anandkumar said, generative AI was described as “an inflection point in our lives,” with discussions swirling around how to “harness it to benefit society and humanity through scientific applications.” On the latest episode of NVIDIA’s AI Podcast, host Noah Kravitz spoke with Anandkumar about generative AI’s potential to make waves in the scientific community. It can, for example, be fed DNA, RNA, viral and bacterial data to craft a model that understands the language of genomes. That model can help predict dangerous coronavirus variants to accelerate drug and vaccine research. Generative AI can also predict extreme weather events like hurricanes or heat waves. Even with an AI boost, trying to predict natural events is challenging because of the sheer number of variables and unknowns. However, Anandkumar explains that it’s not just a matter of upsizing language models or adding compute power — it’s also about fine-tuning and setting the right parameters. “Those are the aspects we’re working on at NVIDIA and Caltech, in collaboration with many other organizations, to say, ‘How do we capture the multitude of scales present in the natural world?’” she said. “With the limited data we have, can we hope to extrapolate to finer scales? Can we hope to embed the right constraints and come up with physically valid predictions that make a big impact?” Anandkumar adds that to ensure AI models are responsibly and safely used, existing laws must be strengthened to prevent dangerous downstream applications.
She also talks about the AI boom, which is transforming the role of humans across industries, and problems yet to be solved. “This is the research advice I give to everyone: the most important thing is the question, not the answer,” she said.
In the global entertainment landscape, TV show and film production stretches far beyond Hollywood or Bollywood — it's a worldwide phenomenon. However, while streaming platforms have broadened the reach of content, dubbing and translation technology still has plenty of room for growth. Deepdub acts as a digital bridge, providing access to content by using generative AI to break down language and cultural barriers. On the latest episode of NVIDIA’s AI Podcast, host Noah Kravitz spoke with the Israel-based startup’s co-founder and CEO, Ofir Krakowski. Deepdub uses AI-driven dubbing to help entertainment companies boost efficiency and cut costs while increasing accessibility. The company is a member of NVIDIA Inception, a free program that offers startups go-to-market support, expertise and technological assistance. Traditional dubbing is slow, costly and often misses the mark, Krakowski says. Current technology struggles with the subtleties of language, leaving jokes, idioms or jargon lost in translation. Deepdub offers a web-based platform that enables people to interact with sophisticated AI models to handle each part of the translation and dubbing process efficiently. It translates the text, generates a voice and mixes it into the original music and audio effects. But as Krakowski points out, even the best AI models make mistakes, so the platform involves a human touchpoint to verify translations and ensure that generated voices sound natural and capture the right emotion. Deepdub is also working on matching lip movements to dubbed voices. Ultimately, Krakowski hopes to free the world from the restrictions placed by language barriers. “I believe that the technology will enable people to enjoy the content that is created around the world,” he said. “It will globalize storytelling and knowledge, which are currently bound by language barriers.” https://blogs.nvidia.com/blog/2023/08/30/deepdub/
Replit aims to empower the next billion software creators. In this week’s episode of NVIDIA’s AI Podcast, host Noah Kravitz dives into a conversation with Replit CEO Amjad Masad. Masad says the San Francisco-based maker of a software development platform, which came up as a member of NVIDIA’s startup accelerator program, wants to bridge the gap between ideas and software, a task simplified by advances in generative AI. “Replit is fundamentally about reducing the friction between an idea and a software product,” Masad said. The company’s Ghostwriter coding AI has two main features: a code completion model and a chat model. These features not only make suggestions as users type their code, but also provide intelligent explanations of what a piece of code is doing, tracing dependencies and context. The model can even flag errors and offer solutions, like a full collaborator in a Google Docs for code. The company is also developing “make me an app” functionality. This tool allows users to provide high-level instructions to an Artificial Developer Intelligence, which then builds, tests and iterates the requested software. The aim is to make software creation accessible to all, even those with no coding experience. While this feature is still under development, Masad said the company plans to improve it over the next year, potentially having it ready for developers in the next 6 to 8 months. Going forward, Masad envisions a future where AI functions as a collaborator, able to conduct high-level tasks and even manage resources. “We're entering a period where software is going to feel more alive,” Masad said. “And so I think computing is becoming more humane, more accessible, more exciting, more natural.” For more on NVIDIA’s startup accelerator program, visit https://www.nvidia.com/en-us/startups/
The world increasingly runs on code. Accelerating the work of those who create that code will boost their productivity — and that’s just what AI startup Codeium, a member of NVIDIA’s Inception program for startups, aims to do. On the latest episode of NVIDIA’s AI Podcast, host Noah Kravitz interviewed Codeium founder and CEO Varun Mohan and Jeff Wang, the company’s head of business, about how AI is transforming software development. Codeium’s AI-powered code acceleration toolkit boasts three core features: autocomplete, chat and search. Autocomplete intelligently suggests code segments, saving developers time by minimizing the need for writing boilerplate or unit tests. At the same time, the chat function empowers developers to rework or even create code with natural language queries, enhancing their coding efficiency while providing searchable context on the entire code base. Noah spoke with Mohan and Wang about the future of software development with AI, and the continued, essential role of humans in the process.
Startup MosaicML is on a mission to help the AI community enhance prediction accuracy, decrease costs and save time by providing tools for easy training and deployment of large AI models. In this episode of NVIDIA's AI Podcast, host Noah Kravitz speaks with MosaicML CEO and co-founder Naveen Rao about how the company aims to democratize access to large language models. MosaicML, a member of NVIDIA's Inception program, has identified two key barriers to widespread adoption: the difficulty of coordinating a large number of GPUs to train a model and the costs associated with this process. Making model training accessible is key for the many companies that need control over model behavior, must respect data privacy and want to iterate fast to develop new AI-based products.
Scientists at Matice Biosciences are using AI to study the regeneration of tissues in animals known as super-regenerators, such as salamanders and planarians. The goal of the research is to develop new treatments that will help humans heal from injuries without scarring. On the latest episode of NVIDIA’s AI Podcast, host Noah Kravitz spoke with Jessica Whited, a regenerative biologist at Harvard University and co-founder of Matice Biosciences. https://blogs.nvidia.com/blog/2023/06/21/matice/
In the latest episode of NVIDIA's AI Podcast, Anant Agarwal, founder of edX and Chief Platform Officer at 2U, shared his vision for the future of online education and the impact of artificial intelligence in revolutionizing the learning experience. Agarwal, a strong advocate for Massive Open Online Courses (MOOCs), discussed the importance of accessibility and quality in education. The MIT professor and renowned edtech pioneer also highlighted the implementation of AI-powered features in the edX platform, including the ChatGPT plugin and edX Xpert, an AI-powered learning assistant.
In this episode of the NVIDIA AI Podcast, host Noah Kravitz dives into an illuminating conversation with Alex Fielding, co-founder and CEO of Privateer Space. Fielding is a tech industry veteran, having previously worked alongside Apple co-founder Steve Wozniak on several projects, and holds a deep expertise in engineering, robotics, machine learning and AI. Privateer Space, Fielding’s latest venture, aims to address one of the most daunting challenges facing our world today: space debris. The company is creating a data infrastructure to monitor and clean up space debris, ensuring sustainable growth for the budding space economy. In essence, they’re the sanitation engineers of the cosmos. Privateer is also focused on bolstering space accessibility. All of the company’s datasets and those of its partners are being made available through APIs, so users can more easily build space applications related to Earth observation, climate science and more. Privateer Space is a part of NVIDIA Inception, a free program that offers go-to-market support, expertise and technology for AI startups. During the podcast, Fielding shares the genesis of Privateer Space, his journey from Apple to the space industry, and his subsequent work on communication between satellites at different altitudes. He also addresses the severity of space debris, explaining how every launch adds more debris, including minute yet potentially dangerous fragments like frozen propellant and paint chips. https://blogs.nvidia.com/blog/2023/05/23/privateer-space
Artificial intelligence is teaming up with crowdsourcing to improve the thermo-stability of mRNA vaccines, making distribution more accessible worldwide. In this episode of NVIDIA's AI podcast, host Noah Kravitz interviewed Bojan Tunguz, a physicist and senior system software engineer at NVIDIA, and Johnny Israeli, senior manager of AI and cloud software at NVIDIA. The guests delved into AI's potential in drug discovery and the Stanford Open Vaccine competition, a machine-learning contest using crowdsourcing to tackle the thermo-stability challenges of mRNA vaccines. Kaggle, the online machine learning competition platform, hosted the Stanford Open Vaccine competition. Tunguz, a quadruple Kaggle grandmaster, shared how Kaggle has grown to encompass not just competitions, but also datasets, code, and discussions. Competitors can earn points, rankings, and status achievements across these four areas. The fusion of artificial intelligence, crowdsourcing, and machine learning competitions is opening new possibilities in drug discovery and vaccine distribution. By tapping into the collective wisdom and skills of participants worldwide, it becomes possible to solve pressing global problems, such as enhancing the thermo-stability of mRNA vaccines, allowing for a more efficient and widely accessible distribution process. Don't miss this enlightening conversation on the transformative power of AI and crowdsourcing in mRNA vaccine distribution.
Imagine a future where your vehicle's interior offers personalized experiences and builds trust through human-machine interfaces and artificial intelligence. In this episode of the NVIDIA AI Podcast, host Katie Burke Washabaugh and guest Andreas Binner, Chief Technology Officer at Rightware, delve into this fascinating topic. Rightware is a company at the forefront of developing in-vehicle HMI. Their platform, Kanzi, works in tandem with NVIDIA DRIVE IX to provide a complete toolchain for designing personalized vehicle interiors for the next generation of transportation, including detailed visualizations of the car's AI. Andreas touches on his journey into automotive technology and HMI, the evolution of infotainment in the automotive industry over the past decade, and surprising trends in HMI. They explore the influence of AI on HMI, novel AI-enabled features, and the importance of trust in new technologies. Other topics include the role of HMI in fostering trust between vehicle occupants and the vehicle, the implications of autonomous vehicle visualization, balancing larger in-vehicle screens with driver distraction risks, additional features for trust-building between autonomous vehicles and passengers, and predictions for intelligent cockpits in the next decade. Learn about the innovations that Rightware's Kanzi platform and NVIDIA DRIVE IX bring to the automotive industry and how they contribute to the development of intelligent vehicle interiors. Tune in.
Imagine a stroller that can drive itself, help users up hills, brake on slopes and provide alerts of potential hazards. That’s what GlüxKind has done with Ella, an award-winning smart stroller that uses the NVIDIA Jetson edge AI and robotics platform to power its AI features. Kevin Huang and Anne Hunger are the co-founders of GlüxKind, a Vancouver-based startup that aims to make parenting easier with AI. They’re also married and have a child together who inspired them to create Ella. In this episode of the NVIDIA AI Podcast, host Noah Kravitz talks to Huang and Hunger about their journey from being consumers looking for a better stroller to becoming entrepreneurs who built one. They discuss how NVIDIA Jetson enables Ella’s self-driving capabilities, object detection, voice control and other features that make it stand out from other strollers. The pair also share their vision for the future of smart baby gear and how they hope to improve the lives of parents and caregivers around the world.
Tools like ChatGPT have awakened the world to the potential of generative AI. Now, much more is coming. On the latest episode of NVIDIA’s AI Podcast, Yves Jacquier, executive director of Ubisoft La Forge, shares valuable insights into the transformative potential of generative AI in the gaming industry. With over two decades of experience in technology innovation, science and R&D management across various sectors, Jacquier’s comprehensive expertise makes him a true visionary in the field. During his conversation with podcast host Noah Kravitz, Jacquier highlighted how generative AI, which enables computers to create unique content such as images, text and music, is already revolutionizing the gaming sector. By designing new levels, characters and items, and generating realistic graphics and soundscapes, this cutting-edge technology offers countless opportunities for more immersive and engaging experiences. As the driving force behind Ubisoft La Forge, Jacquier plays a crucial role in shaping the company’s academic R&D strategy. Key milestones include establishing a chair in AI deep learning in 2011 and founding Ubisoft La Forge, the first lab in the gaming industry dedicated to applied academic research — research that’s being translated into state-of-the-art gaming experiences.
Peter Ma was bored in his high school computer science class. So he decided to teach himself something new: how to use artificial intelligence to find alien life. That’s how he eventually became the lead author of a groundbreaking study published in Nature Astronomy. The study reveals how Ma and his co-authors used AI to analyze a massive dataset of radio signals collected by the SETI Breakthrough Listen project. They found eight signals that might just be technosignatures, or signs of alien technology. In this episode of the NVIDIA AI Podcast, host Noah Kravitz interviews Ma, who is now an undergraduate student at the University of Toronto. Ma tells Kravitz how he stumbled upon this problem and how he developed an AI algorithm that outperformed traditional methods in the search for extraterrestrial intelligence. You can read more about Ma’s research on NVIDIA’s blog: https://blogs.nvidia.com/blog/2023/02/06/ai-potential-alien-signals/
In the quest for knowledge at work, it can be tempting to think that finding what you need is like searching for a needle in a haystack. But what if the haystack itself could show you where the needle is? That's the promise of large language models, or LLMs as they’re known, and it's the subject of this week’s episode of NVIDIA’s AI Podcast featuring Deedy Das and Eddie Zhou, founding engineers at Glean, in conversation with our host, Noah Kravitz. With large language models, the haystack can become a source of intelligence, helping guide knowledge workers to what they need to know. Glean is a Silicon Valley startup focused on providing better tools for enterprise search by indexing everything employees have access to in the company, including Slack, Dropbox and email. The company raised a Series C financing round last year, valuing it at $1 billion. With that comprehensive index, LLMs can provide a unified view of the enterprise and its data, making it easier to find the information needed to get work done. In the podcast, Das and Zhou discuss the challenges and opportunities of bringing LLMs into the enterprise, and how this technology can help people spend less time searching and more time working. https://blogs.nvidia.com/blog/2023/03/01/glean-llm-enterprise-search/
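The index-then-retrieve idea behind enterprise search tools like the one described above can be illustrated with a toy bag-of-words ranker over a few mock sources. The source names, documents and cosine-similarity scoring here are invented for illustration; a production system like Glean's uses LLMs, permissions-aware indexing and far richer ranking signals.

```python
# Toy sketch of searching a unified index built from several workplace sources.
# All data and the scoring scheme are illustrative assumptions.
from collections import Counter
import math

def vectorize(text):
    """Turn text into a bag-of-words vector (token -> count)."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[t] * b[t] for t in a)  # Counter returns 0 for missing tokens
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def search(query, index):
    """Rank (source, text) pairs from all indexed sources by similarity."""
    qv = vectorize(query)
    return sorted(index, key=lambda item: cosine(qv, vectorize(item[1])), reverse=True)

# Hypothetical unified index spanning chat, email and file storage.
index = [
    ("slack", "reminder the quarterly report is due friday"),
    ("email", "lunch menu for the office party"),
    ("dropbox", "quarterly report draft with revenue figures"),
]
results = search("quarterly report figures", index)
```

Because every source feeds one index, a single query ranks results across all of them, which is the property that lets an LLM layered on top answer questions about the whole enterprise.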
Surfers, swimmers, and beachgoers face a hidden danger in the ocean: rip currents. These narrow channels of water can flow away from the shore at speeds up to 2.5 meters per second, making them one of the biggest safety risks for those enjoying the ocean. To help keep beachgoers safe, Dr. Christo Rautenbach, a coastal and estuarine physical processes scientist, has teamed up with the National Institute of Water and Atmospheric Research in New Zealand to develop a real-time rip current identification tool using deep learning. On this episode of the NVIDIA AI podcast, host Noah Kravitz interviews Dr. Rautenbach about the technology behind the rip current detection tool. The tool was developed by Dr. Rautenbach and NIWA in collaboration with Surf Lifesaving New Zealand and achieved a detection rate of roughly 90% in trials. The research behind the technology was published in the November 22nd edition of the journal Remote Sensing. Dr. Rautenbach explains how AI can be used to identify rip currents, a critical step in keeping beachgoers safe. He shares the research behind the technology and the results of the trials, as well as the potential for this tool to be used globally to help reduce the number of fatalities caused by rip currents. Tune in. https://blogs.nvidia.com/blog/2023/02/15/rip