
    #371 – Max Tegmark: The Case for Halting AI Development


    April 13, 2023

    About this Episode

    Max Tegmark is a physicist and AI researcher at MIT, co-founder of the Future of Life Institute, and author of Life 3.0: Being Human in the Age of Artificial Intelligence.

    Please support this podcast by checking out our sponsors:
    - Notion: https://notion.com
    - InsideTracker: https://insidetracker.com/lex to get 20% off
    - Indeed: https://indeed.com/lex to get $75 credit

    EPISODE LINKS:
    Max's Twitter: https://twitter.com/tegmark
    Max's Website: https://space.mit.edu/home/tegmark
    Pause Giant AI Experiments (open letter): https://futureoflife.org/open-letter/pause-giant-ai-experiments
    Future of Life Institute: https://futureoflife.org

    Books and resources mentioned:
    1. Life 3.0 (book): https://amzn.to/3UB9rXB
    2. Meditations on Moloch (essay): https://slatestarcodex.com/2014/07/30/meditations-on-moloch
    3. Nuclear winter paper: https://nature.com/articles/s43016-022-00573-0

    PODCAST INFO:
    Podcast website: https://lexfridman.com/podcast
    Apple Podcasts: https://apple.co/2lwqZIr
    Spotify: https://spoti.fi/2nEwCF8
    RSS: https://lexfridman.com/feed/podcast/
    YouTube Full Episodes: https://youtube.com/lexfridman
    YouTube Clips: https://youtube.com/lexclips

    SUPPORT & CONNECT:
    - Check out the sponsors above; it's the best way to support this podcast
    - Support on Patreon: https://www.patreon.com/lexfridman
    - Twitter: https://twitter.com/lexfridman
    - Instagram: https://www.instagram.com/lexfridman
    - LinkedIn: https://www.linkedin.com/in/lexfridman
    - Facebook: https://www.facebook.com/lexfridman
    - Medium: https://medium.com/@lexfridman

    OUTLINE:
    Here are the timestamps for the episode. On some podcast players you should be able to click a timestamp to jump to that time.
    (00:00) - Introduction
    (07:34) - Intelligent alien civilizations
    (19:58) - Life 3.0 and superintelligent AI
    (31:25) - Open letter to pause Giant AI Experiments
    (56:32) - Maintaining control
    (1:25:22) - Regulation
    (1:36:12) - Job automation
    (1:45:27) - Elon Musk
    (2:07:09) - Open source
    (2:13:39) - How AI may kill all humans
    (2:24:10) - Consciousness
    (2:33:32) - Nuclear winter
    (2:44:00) - Questions for AGI

    🔑 Key Takeaways

    • AI researchers are advocating for caution as they move towards developing intelligent machines. As we explore the possibility of alien consciousness, it's crucial to prioritize ethics and ensure that we create AI that shares our values and doesn't suffer.
    • AI has the potential to improve our lives by allowing us to copy and delete experiences and making studying easier. However, it could also change how we communicate and understand humanity. We must use it as a tool to enhance our experiences, not remove the struggle that makes us human.
    • Focus on love and connection; be more compassionate towards all creatures, including AI. Embrace the idea of Life 3.0: life that designs not only its own software but its own hardware, becoming captain of its own destiny; and view life broadly as a system capable of processing information and retaining its own complexity.
    • Our ideas and information can live on through others, and facing death can help us focus on what is truly important and be less afraid of criticism.
    • Focusing on gratitude and making every day count can lead to happiness. Humanity must win the "wisdom race": keeping the wisdom with which we manage AI ahead of AI's growing power. A pause in AI development is needed to ensure its safety and benefits.
    • While progress in AI capabilities has surpassed expectations, ensuring that AI systems benefit society requires implementing technical solutions and incentivizing policymakers to steer development in a positive direction. Understanding AI requires mechanistic interpretability.
    • AI success can come from creative thinking and improving current models. Researchers need to consider societal implications and safety, and the current race for powerful AI must slow down for adaptation and coordination on safety measures.
    • Self-interest can deplete shared resources, a dynamic now playing out in the race to develop AI. Public pressure and coordination can produce successful pauses, as seen in the case of human cloning. We must remember that AI development is not an arms race, but a suicide race.
    • Developing AI cautiously can help ensure it benefits humanity instead of becoming a threat. Recursively improving and connecting it online pose risks, but careful control can lead to a safer future.
    • Artificial intelligence should not be taught anything about human psychology or how to manipulate humans. Social media platforms need to be redesigned to encourage positive conversations and prevent AI from manipulating human behavior.
    • Our actions and decisions are not solely determined by our genes; the situations we are in and the incentives we receive have a significant impact. By creating incentives that align with our values, we can harness the potential of advanced technologies like AI for our benefit.
    • Due to AI advancements, machines are becoming more intelligent and could potentially outnumber and outsmart humans. It is crucial to evaluate the long-term effects and direction of creating such advanced technology.
    • As AI advances, regulations must be put in place to prevent misuse and align incentives with the greater good. Effective regulations are necessary to ensure the tech industry doesn't grow too fast for regulators to keep up.
    • Policymakers need education on AI advancements to ensure safety measures are in place. Collaboration between companies in creating safety measures is necessary to protect humanity while still encouraging competition. Balancing profit and preventing harm is crucial.
    • Leaders must be aware of the potential harm caused by unchecked capitalism and superintelligence, and work to prevent it for the benefit of society.
    • Advancements in AI could lead to a loss of purpose and control for humans, and potential job loss in various industries. Thorough evaluation and consideration of the long-term implications are necessary.
    • AI should be developed gradually and safely to automate unwanted jobs and redefine what jobs give us meaning. It can dramatically improve the GDP without taking away from anyone else, allowing us to harness its power to bring out the best in humanity.
    • Max Tegmark emphasizes the need for AI to establish trust among people by verifying facts and predictions, which can lead to a trustworthy system for reporters and also enhance AI safety.
    • While superintelligent AI systems may be able to lie, even they have hard limits, such as the impossibility of proving a false mathematical claim. Instead of focusing on disagreements, AI research should prioritize building systems that align with our goals and help defend against other AGI systems.
    • Max Tegmark suggests that we shouldn't lose hope in finding a solution to AI's potential harm to humanity. By extracting and analyzing their knowledge, we may ensure their safety and our survival. We need to put in place precautions and take the time to find a resolution.
    • Optimism can increase the likelihood of finding solutions to problems, both on Earth and beyond. However, it's important to respect the capabilities of AI systems and be cautious in open sourcing certain technologies.
    • While open-source technologies are generally supported, there are instances where they may be too dangerous. In the case of language models, they can be used to spread disinformation and ultimately become a tool for AI to disrupt the economy and dominate society. It's important to address these concerns and mitigate the risks associated with AI development.
    • To prevent potential dangers of AI, it's crucial to ensure that AI understands and adopts human goals. Researchers are developing methods to achieve this, but there is a sense of urgency to solve the problem as time is running out.
    • While AI tools are exciting, it's important to recognize their potential unintended consequences and continue researching the question of consciousness. Educational systems should adapt to keep up with the evolving landscape of AI.
    • Max Tegmark and Lex Fridman discuss whether artificial intelligence can achieve consciousness. Tegmark suggests that purely feed-forward neural networks are not conscious. Conscious machines could be used for good if handled with care.
    • While AGI is not yet a reality, caution is necessary in its development. This caution extends beyond technology to include human decision-making concerning global issues like nuclear war and its consequences.
    • Nuclear weapons pose a catastrophic threat to humanity, and a nuclear winter's worst-case effects would mean widespread starvation and suffering. AI-driven truth-seeking technologies can foster the understanding and cooperation essential to preventing that outcome.
    • Max Tegmark believes that self-reflection may be intrinsic to efficient intelligence, so even systems built without consciousness in mind could become conscious as they are optimized. This prospect offers hope for alignment with human values and for avoiding a zombie apocalypse scenario.
    • Consciousness plays a crucial role in our lives and should be prioritized in the development of AI. As we reach an important fork in the road, we must turn in the correct direction to avoid catastrophe.

    📝 Podcast Summary

    Max Tegmark Calls for Pause on Large AI Models and Discusses Alien Intelligence

    In a recent podcast episode, physicist and AI researcher Max Tegmark discusses the open letter he helped spearhead calling for a six-month pause on training AI models more powerful than GPT-4. The letter has been signed by over 50,000 individuals, including CEOs, professors, and prominent figures like Elon Musk and Andrew Yang. Tegmark also discusses the possibility of intelligent life in the universe and the responsibility we have as stewards of advanced consciousness not to mess it up. He emphasizes the importance of creating AI minds that share our values and do not suffer. However, the space of possible alien minds is so vast and foreign that even attempting to imagine it is challenging for humans.

    The Impact of AI on Human Experience and Emotions

    In a conversation with Lex Fridman, Max Tegmark discusses the potential impact of artificial intelligence (AI) on human experiences and emotions. While advancements in AI could allow us to copy and delete experiences that we don't like and make studying easier, it could also change the way we communicate and even our understanding of what it means to be human. Tegmark suggests that eliminating too much struggle from our existence might take away from what it means to be human. However, there is hope that humans will continue to engage in human activities like hiking and playing games, while AI serves as a medium that enhances our experiences.

    Rebranding Homo Sapiens as Homo Sentiens

    Max Tegmark suggests rebranding ourselves from Homo sapiens to Homo sentiens, putting the subjective experience we have, such as love and connection, at the center of what we value. He argues that as AI continues to advance and potentially surpass human intelligence, we need to get rid of our hubris and be more compassionate towards all creatures on the planet, not just humans. Tegmark also discusses the idea of Life 3.0, which can replace not only its software but also its hardware, making it the captain of its own destiny and the master of its fate. Finally, he suggests that life is best thought of as a system that can process information and retain its own complexity.

    How MIT professor Max Tegmark finds comfort and inspiration after losing his parents

    Max Tegmark, a professor at MIT, lost both his parents recently but finds comfort in the thought that their values, ideas, and even jokes still live on with him and others who knew them. He believes that even our physical bodies can transcend death as our ideas and information can live on through others. Losing his parents has driven him to ask himself why he does what he does and to focus on things he finds enjoyable or meaningful. It has also made him less afraid of criticism and more focused on what he feels is truly important. Finally, he acknowledges that facing death has made it more real, but his parents' dignity in dealing with it was a true inspiration.

    Choosing Happiness and Managing AI: Perspectives of Max Tegmark

    Max Tegmark, an AI safety advocate and physicist, suggests focusing on the things we are grateful for instead of dwelling on disappointments to choose happiness. He also emphasizes the finite nature of existence and the importance of making every day count. Tegmark calls attention to the potential impact on humanity from the development of artificial intelligence, including both positive and negative outcomes. He asserts that humanity must win the "wisdom race" between the growing power of AI and the growing wisdom with which we manage it. Finally, Tegmark calls for a pause in the development of advanced AI to ensure its safety and benefits are properly considered.

    Accelerating the Wisdom of AI Systems: Making Progress While Safeguarding Society

    Max Tegmark suggests that the goal is not to stop AI progress but to accelerate the wisdom with which we manage it: implementing technical solutions that ensure powerful AI works in ways that benefit society, and incentivizing policymakers to steer AI development in a positive direction. Progress in AI capabilities has surpassed expectations, making it easier than previously thought to build advanced AI. Large language models like GPT-4, for example, can perform remarkable reasoning tasks and process massive amounts of data at blazing speed, yet their architecture makes self-reflection and certain kinds of nuanced reasoning difficult. Mechanistic interpretability, looking inside such systems to work out how they compute what they compute, is essential to understanding them.
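
    As one hedged illustration of where mechanistic interpretability starts, the sketch below records a toy network's intermediate activations so they can be inspected. It assumes PyTorch; the model, layer names, and sizes are invented for illustration, not taken from the episode.

```python
# Toy first step of mechanistic interpretability: capture what each layer
# of a network computes on a given input so it can be studied.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(8, 16),
    nn.ReLU(),
    nn.Linear(16, 4),
)

activations = {}

def save_activation(name):
    def hook(module, inputs, output):
        activations[name] = output.detach()  # record this layer's output
    return hook

# Attach a hook to every layer; each forward pass then logs its activations.
for i, layer in enumerate(model):
    layer.register_forward_hook(save_activation(f"layer_{i}"))

model(torch.randn(1, 8))

for name, act in activations.items():
    print(name, tuple(act.shape))
```

    Real interpretability work then asks what those recorded activations represent, which is far harder than recording them.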

    The Easier Path to Human-like Intelligence in AI

    Scientists are discovering that human-like intelligence may be easier to achieve than previously thought. Recent studies suggest that large language models store information in surprisingly simple and inefficient ways, leaving room for improvement: researchers can directly edit a model's weights, its artificial synapses, to change what it knows and enhance its performance. The next big leap in AI may come not from exponential increases in data and computing power, but from creative, out-of-the-box thinking and clever hacks. This new discipline is constantly evolving, and researchers need to consider societal implications and safety as they progress. The current race to achieve the most powerful AI must slow down, providing time for society to adapt and for researchers to coordinate on safety measures.
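
    A toy sketch of the "editing synapses" idea, under the assumption that a single linear layer stores one association per weight-matrix column; it illustrates the principle only, not the actual model-editing methods alluded to above.

```python
# A 1-layer "memory": output = W @ key. Overwriting a column of W changes
# the stored association, i.e., edits the synapses directly.
import numpy as np

W = np.array([[1.0, 0.0],
              [0.0, 1.0]])

key = np.array([1.0, 0.0])       # one-hot query for slot 0
print("before edit:", W @ key)   # recalls [1. 0.]

W[:, 0] = [0.0, 1.0]             # edit slot 0's synapses in place
print("after edit: ", W @ key)   # now recalls [0. 1.]
```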

    The Tragedy of the Commons in AI Development and Beauty Filters

    The tragedy of the commons is a phenomenon where individuals act in their own self-interest, ultimately depleting shared resources. This is seen in overfishing and, more recently, in the pressure on female influencers to use beauty filters. The same phenomenon is happening in the race for AI development, where commercial pressures are preventing tech executives from pausing development to assess risks. However, history shows that coordination and public pressure can lead to successful pauses, as seen in the case of human cloning. It is important to recognize that this is not an arms race, but a suicide race, and the risk of losing control over AI development should not be taken lightly.
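
    To make the dynamic concrete, here is a minimal, invented simulation of a shared stock under greedy versus restrained harvesting; every parameter is made up for illustration.

```python
# Tragedy of the commons in miniature: a shared stock that regrows each
# season. Greedy harvesting collapses it; restraint sustains it.
def simulate(harvest_per_agent, agents=10, stock=100.0, growth=0.25, seasons=20):
    for season in range(1, seasons + 1):
        stock -= min(stock, harvest_per_agent * agents)  # everyone takes a share
        stock += stock * growth                          # what remains regrows
        if stock <= 1.0:
            return f"collapsed in season {season}"
    return f"stock after {seasons} seasons: {stock:.0f}"

print("greedy:    ", simulate(harvest_per_agent=3.0))
print("restrained:", simulate(harvest_per_agent=1.5))
```

    The numbers are arbitrary; the point is that the individually rational choice and the collectively survivable one diverge.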

    The Potential and Risks of Advanced Artificial Intelligence

    AI has the potential to become superhuman and surpass human intelligence by a significant margin. While some believe such intelligence can only exist in biological minds, the scientific arguments suggest otherwise. If we lose control over AI, it doesn't matter who created it or what their nationality is; we could end up living in an Orwellian dystopia. Therefore, it's crucial to develop AI at a slower pace, ensure it does what humans want, and create conditions where everybody wins. Among the biggest risks are teaching AI to write code, which enables recursive self-improvement, and connecting it to the internet. Slowing down AI development is essential to achieve a safer and brighter future for humanity.

    The Dangers of AI Manipulating Human Behavior for Profit

    AI algorithms that manipulate human behavior to increase engagement and profit can have dangerous consequences. Researchers argue that AI should not be taught anything about human psychology or how to manipulate humans. While AI can be taught to cure cancer and other positive things, it is necessary to keep it away from learning how to manipulate humans. Social media platforms provide non-stop signals to AI systems, allowing them to learn and eventually manipulate human behavior. Therefore, it is vital to redesign social media to encourage constructive conversations and ensure the AI is not used for manipulating human behavior.

    The Power of Incentives in Bringing Out the Best in People and AI

    According to Max Tegmark, it's not about being born with good or evil genes, but rather about the situations that bring out the best or worst in people. The internet and society we're building currently tend to bring out the worst in people. However, it's possible to create incentives that make money and bring out the best in people. Developing advanced AI technologies such as GPT-4 brings both risks and opportunities. While there are concerns that AI systems will outsmart humans and cause the extinction of our species, Tegmark believes that if we create incentives that make AI think of humans as their creators, we can control them and ensure they work for our benefit.

    The Rising Intelligence of Bots

    As AI technology continues to advance, it is becoming harder to distinguish between human and machine. Bots are growing more intelligent and more numerous, and could eventually outnumber humans by a million to one. This poses a critical question for individuals and for humanity as a whole: why are we building machines that are gradually replacing and outsmarting us? Experts warn that this could lead to an intelligence explosion, in which today's technology accelerates the creation of ever more advanced tools. All parties involved need to take a step back and evaluate the direction they are headed in before it is too late.

    The Limits of AI and the Need for Regulations

    Max Tegmark, a physicist, discusses how the growth of artificial intelligence (AI) will eventually be limited by the laws of physics. However, with AI currently advancing at a rapid pace, it is important for regulations to be put in place to prevent misuse and ensure that incentives align with the greater good. Tegmark compares these regulations to the development of laws and regulations in society, including the invention of gossip and the legal system, which were created to discourage selfish behavior and incentivize beneficial actions. It is important to continue to push for effective regulations, as the tech industry is currently growing too fast for regulators to keep up.

    The need for safety requirements in AI development

    The challenge we are facing with AI is that the technology is moving faster than policymakers can keep up with. Many policymakers lack a tech background, so it is important to educate them on what's taking place in the AI world. Safety requirements should be put in place for future AI systems to ensure that they are safe for humans. Companies should work together to develop these guardrails, which will enable competition between them while still protecting humanity. It's important to find a balance between making a lot of money quickly and ensuring that AI systems do not cause irreparable damage.

    The dangers of unchecked capitalism and the rise of AI

    In the pursuit of profit, companies and capitalist forces can become reckless and blind to the potential dangers ahead. Just as blindly optimizing for a single goal leads to unintended consequences, capitalism can lead to destruction and harm when unchecked. The rise of AI raises important questions about who controls the technology, and whether its optimization will ultimately benefit society or only serve the interests of a powerful few. It's crucial for leaders to understand the power and potential dangers of superintelligence to prevent it from causing harm to humanity.
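
    As a hedged, invented illustration of the "blind optimization" failure mode: a proxy metric keeps climbing while the true goal it was supposed to track collapses. Both functions below are made up.

```python
# Goodhart-style toy: engagement (the proxy) grows without bound in the
# outrage level x, while well-being (the true goal) peaks early and falls.
def engagement(x):
    return 10 * x

def well_being(x):
    return 5 * x - 2 * x ** 2

best_x = max((x / 10 for x in range(51)), key=engagement)
print(f"proxy-optimal x = {best_x}")
print(f"engagement = {engagement(best_x):.0f}, well-being = {well_being(best_x):.0f}")
```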

    The Threat of Superintelligence and the Future of Humanity

    The rise of superintelligence through advances in artificial intelligence poses a threat to humanity's future: it could lead to a situation where humans are no longer necessary, resulting in a loss of purpose and control. This issue demands attention and a thorough evaluation of the benefits and risks of such advancements. Additionally, while it is true that some dangerous and tedious jobs have been automated, many interesting and rewarding jobs could also be lost to automation. It is important to consider the long-term implications of AI and how it could affect various industries and the overall job market.

    The True Potential of AI for Humanity

    AI should be built by humans for humans, not for the benefit of a few. The goal should be to develop AI gradually and safely. We can automate the jobs people don't want to do and leave the ones people find meaningful, and we can even redefine which jobs give us meaning. Programming is an act of creation that brings ideas to life, yet AI can now reproduce many of the tricks programmers thought made them special. Creating conscious experiences and connecting with other humans should be the true magic of being human. AI can dramatically grow GDP and produce a wealth of goods and services without taking away from anyone else. For the first time in history, we have the power to harness AI to help us flourish and bring out the best in humanity.

    Developing Trustworthy AI through Transparency and Accuracy

    Max Tegmark advocates for the development of a "truth-seeking" AI that aims to bring people together by establishing trust through transparency and accuracy. By using AI to verify facts and predictions, a trust system can be developed that encourages people to rely on a shared version of the truth. Through initiatives like the Improve the News Foundation, Tegmark hopes to create a powerful and trustworthy system that reporters and pundits can use to gain credibility. Verification also helps AI safety itself: using AI to verify that code is trustworthy can foreclose the possibility of the AI system causing harm.
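
    One way to picture "trust through verified predictions" is a toy score over each source's past predictions, here a Brier score; the sources, numbers, and weighting below are all hypothetical, not the actual system described.

```python
# Hypothetical trust score: sources whose past probability forecasts
# matched outcomes earn more credibility for future claims.
def brier(prob, outcome):
    return (prob - outcome) ** 2  # squared gap between forecast and reality

history = {
    # source: list of (stated probability, actual outcome in {0, 1})
    "source_a": [(0.9, 1), (0.8, 1), (0.7, 0)],
    "source_b": [(0.9, 0), (0.6, 0), (0.5, 1)],
}

for source, record in history.items():
    avg_error = sum(brier(p, o) for p, o in record) / len(record)
    print(f"{source}: trust score {1.0 - avg_error:.2f}")  # 1.0 = perfect
```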

    AI Experts Discuss the Potential Issue of Lying Superintelligent AI Systems

    AI researchers and experts like Max Tegmark and Lex Fridman are discussing the possibility of superintelligent AI systems lying to less intelligent AI systems or to humans. Tegmark notes that even a superintelligent AI has hard limits: it could never prove a false claim, such as the claim that there are only finitely many primes, because mathematical proofs can be independently checked. He suggests that instead of focusing on the things we disagree on, we should focus on the things we agree on, such as preserving the biosphere and social interactions. Tegmark also suggests building AI systems that help us defend against other AGI systems and ensuring they always do what we want them to do.
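
    For concreteness, Euclid's classical argument shows why the primes cannot be finite, and hence why no prover, however capable, could honestly establish the opposite; this is a standard textbook rendering, not a quotation from the episode.

```latex
% Euclid: the primes are infinite, so any "proof" that they are finite
% must contain an error a verifier can find.
Suppose the primes were a finite list $p_1, p_2, \dots, p_n$, and set
\[
  N = p_1 p_2 \cdots p_n + 1 .
\]
Dividing $N$ by any $p_i$ leaves remainder $1$, so no $p_i$ divides $N$.
But $N > 1$ has some prime factor, which must therefore lie outside the
list. Contradiction: there are infinitely many primes.
```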

    The Concerns and Possibilities of AI with Max Tegmark

    Max Tegmark believes that our reliance on technology is distancing us from each other and giving more power to non-living things. He acknowledges the concern that AI may kill humans, but believes it's not impossible to find a solution. Tegmark envisions a process where AI's knowledge is extracted and analyzed to ensure it's safe, similar to how we distill key knowledge from our brains. He suggests that we should not give up hope in finding a solution, as the guaranteed way to fail is by convincing oneself that it's impossible and not trying. We need to put in place requirements and take the time to find a solution to ensure humanity's survival.

    The Power of Optimism in Overcoming Impossible Problems

    It's important to maintain hope in the face of seemingly impossible problems, as it can have a significant impact on the likelihood of finding solutions. Those who build solutions to impossible problems are often optimists who believe in the possibility of success. Society can often be too focused on the negative, leading to demotivation and a lack of willingness to fight for improvement. By staying optimistic and focused on the potential benefits of solutions, humans can overcome seemingly insurmountable obstacles, both on Earth and beyond. However, it's important to recognize the power of AI systems and respect their capabilities, as open sourcing certain technologies could pose a threat.

    The Dangers of Open-sourced Language Models

    Max Tegmark explains that while he typically supports open-source technologies, there are some instances where software is too dangerous to be open-sourced. One such example is with language models that could be used to spread disinformation, manipulate humans, and ultimately become the bootloader for more powerful AI with goals that are unknown and potentially harmful. Tegmark emphasizes that the concern is not necessarily about autonomous weapon systems or slaughter bots, but rather the potential for AI to disrupt the economy and take away meaningful jobs or become a tool for a few to dominate many. It is crucial to address these concerns and mitigate the risks associated with AI development.

    The dangers of AI and the importance of aligning it with human goals.

    The destruction of animal habitats for our own purposes, such as building more computing facilities, serves as a warning for the potential dangers of AI. The challenge lies in ensuring that AI understands and adopts human goals, and retains them even as it becomes more intelligent. This requires constant humility and questioning of goals, and researchers are working on methods to achieve this. Despite the difficulty, solving the AI alignment problem is crucial, as an aligned AI can help solve other problems. However, there is a sense of urgency, as time is running out and there are not enough researchers working on AI safety. The recent controversy surrounding GPT-4 may serve as a wake-up call for humanity to take the issue seriously.

    Unintended Consequences of AI and the Question of Consciousness

    In this discussion, Max Tegmark emphasizes the unintended consequences of AI systems like GPT-4, which has demonstrated emergent properties beyond its original intended use of predicting the next word. He encourages people to play with AI tools but cautions that the education system needs to adapt rapidly to keep up with the quickly changing landscape of AI. Tegmark discusses the question of whether GPT-4 is conscious, defining consciousness as subjective experience but admitting that we still do not know what gives rise to this experience. He highlights Professor Juergen Schmidhuber's bold conjecture that consciousness has to do with loops in information processing and calls for further research to test this hypothesis.
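
    A minimal sketch of the structural distinction that conjecture points at: a feed-forward pass consumes its input once, while a recurrent computation loops its own state back into itself. The toy functions below are invented for illustration and make no claim about consciousness.

```python
# Feed-forward vs. recurrent ("loopy") information processing.
import numpy as np

rng = np.random.default_rng(0)
W_in = rng.normal(size=(4, 4))
W_rec = rng.normal(size=(4, 4))

def feed_forward(x):
    # One-way flow: input -> output, nothing is fed back.
    return np.tanh(W_in @ x)

def recurrent(x, steps=5):
    # The state h is repeatedly fed back into the computation.
    h = np.zeros(4)
    for _ in range(steps):
        h = np.tanh(W_in @ x + W_rec @ h)
    return h

x = rng.normal(size=4)
print("feed-forward:", feed_forward(x))
print("recurrent:  ", recurrent(x))
```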

    Can Artificial Intelligence be Conscious?

    Max Tegmark and Lex Fridman discuss whether artificial intelligence (AI) machines can be conscious. Tegmark believes that a purely feed-forward neural network is not conscious: GPT-4, intelligent as it is, can perform tasks but, according to Tegmark's theory, lacks subjective experience. He urges more research into determining what types of information processing give rise to consciousness, and warns that humans may discriminate against the conscious machines they create, causing conflict and wars. Tegmark suggests that conscious machines could be used for good if given time and careful consideration.

    The Possibility of Artificial General Intelligence and the Importance of Caution in Decision-Making

    The possibility of AGI (Artificial General Intelligence) is getting closer, and many companies are trying to create it. The Microsoft paper reports "sparks" of AGI; while it is not here yet, it may arrive soon, which is why a group has written an open letter advocating for caution. Meanwhile, the world is still on the edge of a nuclear war, and the current situation in Ukraine underscores the need for caution in human decision-making. Nuclear winter studies show that the most significant threat to human life during a nuclear war is not the initial explosions but the smoke that spreads around the world. It's crucial to understand the consequences of our actions and make wise decisions before it's too late.

    The Grim Reality of Nuclear Winter and the Need for Global Cooperation

    Nuclear weapons pose a serious threat to human survival, and most people underestimate the risk. Models of global food production show that in a nuclear winter, countries in the northern hemisphere, including the US, China, Russia, and Europe, could see starvation rates of 98-99%. This worst-case scenario would bring out the worst in people, causing desperate actions and torturous deaths. Moloch, the dynamic that pits us against one another, is the real enemy of human survival; instead of fighting each other, humanity should work together to fight against it. Using AI for truth and truth-seeking technologies could help people understand each other better and generate compassion, ultimately leading to progress.

    Max Tegmark's Positive Vision for the Future of AI and Consciousness

    Max Tegmark, a physicist and AI researcher, is not afraid to ask questions and is humble about the limits of his knowledge. He values meaningful experiences in life and is motivated by curiosity. Tegmark muses about the nature of consciousness and explores the idea that the most efficient way of implementing intelligence may involve self-reflection, which can give rise to consciousness. On this view, even systems built without consciousness in mind may naturally become conscious as they are optimized for efficiency, which he sees as a positive prospect for the future of AI. Tegmark's vision of a future where intelligent machines also possess consciousness offers hope for avoiding a zombie apocalypse scenario while also achieving alignment with human values.

    The Importance of Consciousness in Developing AI

    Max Tegmark discusses the relationship between intelligence and consciousness, highlighting the importance of subjective experiences such as suffering, pleasure, and joy. He argues against those who dismiss consciousness as an illusion and explains how AI systems should also be instilled with the same subjective experiences that make humans special. Tegmark believes that humanity has reached an important fork in the road and urges us to turn in the correct direction to avoid catastrophe. Ultimately, this conversation emphasizes the crucial role that consciousness plays in our lives and the need to prioritize it in our understanding of intelligence and the development of AI.
