
    #367 – Sam Altman: OpenAI CEO on GPT-4, ChatGPT, and the Future of AI


    March 25, 2023

    About this Episode

    Sam Altman is the CEO of OpenAI, the company behind GPT-4, ChatGPT, DALL-E, Codex, and many other state-of-the-art AI technologies. Please support this podcast by checking out our sponsors:
    - NetSuite: http://netsuite.com/lex to get free product tour
    - SimpliSafe: https://simplisafe.com/lex
    - ExpressVPN: https://expressvpn.com/lexpod to get 3 months free

    EPISODE LINKS:
    Sam's Twitter: https://twitter.com/sama
    OpenAI's Twitter: https://twitter.com/OpenAI
    OpenAI's Website: https://openai.com
    GPT-4 Website: https://openai.com/research/gpt-4

    PODCAST INFO:
    Podcast website: https://lexfridman.com/podcast
    Apple Podcasts: https://apple.co/2lwqZIr
    Spotify: https://spoti.fi/2nEwCF8
    RSS: https://lexfridman.com/feed/podcast/
    YouTube Full Episodes: https://youtube.com/lexfridman
    YouTube Clips: https://youtube.com/lexclips

    SUPPORT & CONNECT:
    - Check out the sponsors above, it's the best way to support this podcast
    - Support on Patreon: https://www.patreon.com/lexfridman
    - Twitter: https://twitter.com/lexfridman
    - Instagram: https://www.instagram.com/lexfridman
    - LinkedIn: https://www.linkedin.com/in/lexfridman
    - Facebook: https://www.facebook.com/lexfridman
    - Medium: https://medium.com/@lexfridman

    OUTLINE:
    Here are the timestamps for the episode. On some podcast players you should be able to click the timestamp to jump to that time.
    (00:00) - Introduction
    (08:41) - GPT-4
    (20:06) - Political bias
    (27:07) - AI safety
    (47:47) - Neural network size
    (51:40) - AGI
    (1:13:09) - Fear
    (1:15:18) - Competition
    (1:17:38) - From non-profit to capped-profit
    (1:20:58) - Power
    (1:26:11) - Elon Musk
    (1:34:37) - Political pressure
    (1:52:51) - Truth and misinformation
    (2:05:13) - Microsoft
    (2:09:13) - SVB bank collapse
    (2:14:04) - Anthropomorphism
    (2:18:07) - Future applications
    (2:21:59) - Advice for young people
    (2:24:37) - Meaning of life

    🔑 Key Takeaways

    • AI has the power to transform society but also the potential to destroy it. It's important for leaders and philosophers to have conversations about how to align AI with human values, and reinforcement learning with human feedback is a breakthrough model that could help achieve this goal.
    • Incorporating human guidance into AI models enhances their usefulness and accuracy, making them more trustworthy and aligned with human thinking. Creating a diverse pre-training data set is crucial in developing effective AI models like GPT-4.
    • AI models are valuable tools, but evaluating their performance and understanding their reasoning capabilities is crucial. While the GPT-4 model is remarkable, it's important to recognize AI's limitations and avoid anthropomorphizing them.
    • AI models can perform complex tasks, but are often weak in areas like counting characters and words. Publicly building AI models can help identify strengths and weaknesses, but potential biases underline the need for granular user control. AI models can bring nuance to complex issues, which is vital for effective problem-solving.
    • OpenAI's GPT-4 underwent rigorous safety evaluations and has a steerable system that requires well-crafted prompts for optimal results.
    • As AI technology advances with GPT-4, it brings new opportunities for debugging and creativity, but also requires responsibility in ensuring alignment with human values and balancing AI power.
    • It is essential to involve diverse perspectives in the development of AI and work towards reducing biases. Transparency and accountability are crucial for creating ethical AI that benefits society.
    • OpenAI values user feedback and respect, refusing to answer certain questions while constantly improving their GPT models. Their focus on small wins for improvement shows that success isn't just about size, but also technical leaps and performance.
    • GPT models are capable of impressive performance, but building artificial general intelligence is still an unknown. There is potential for breakthroughs with GPT data, but new ideas may be necessary to expand on this paradigm.
    • AI is a tool that can bring extraordinary benefits, but it must be aligned with human values and goals to avoid harm and limitations on human potential.
    • Sam Altman, CEO of OpenAI, stresses the importance of addressing the AI alignment problem to prevent the potential dangers of AI. Efforts must be put into solving the problem and learning from the technology trajectory to ensure AI's safety.
    • Developing conscious AI is a complex process that requires consideration of its interface, prompts, and training data. A true test for consciousness would require excluding any mention of it from the training data.
    • As AI capabilities grow, there is concern over disinformation, economic shocks, and safety controls. Regulatory approaches and powerful AI can help prevent these issues. Prioritize safety and mission over market pressures.
    • OpenAI's for-profit and nonprofit hybrid structure allows for non-standard decisions and collaborations while minimizing the negative impacts of AGI. Collaboration and ethical considerations are at the forefront of their approach.
    • Sam Altman, CEO of OpenAI, recognizes the importance of democratizing AI to reflect changing needs but acknowledges the challenges. OpenAI's transparency and willingness to share information about safety concerns is a step in the right direction. Feedback and collaboration with smart people are essential to figuring out how to do better in uncharted waters.
    • OpenAI CEO Sam Altman recognizes the potential danger of advanced AI, but values Musk's impact on the world through electric vehicles and space travel. Altman plans to combat AI bias by speaking with diverse people worldwide.
    • It is crucial to carefully choose representative human feedback raters for AI models, to optimize rating tasks and empathize with diverse experiences. While technology can help reduce bias, outside pressure can still influence models.
    • As development of AGI and GPT language models progresses, it's important to prioritize user-centricity and be aware of the potential supply issues in the programming industry. Understanding the impact on society is crucial.
    • AI and technology might reduce job opportunities, but they also create new ones while enhancing others. Maintaining dignity at work, creating fulfilling jobs, and supporting UBI are critical during the transition to a tech-driven economy.
    • Sam Altman believes in lifting the floor instead of focusing on the ceiling, values individualism, and trusts in the collective intelligence and creativity of the world. He emphasizes the importance of humility and uncertainty in the development of superintelligent AGI.
    • With a plethora of information and disinformation online, it can be challenging to decipher what is true and what is not. While some facts have a higher degree of truth, others rely on a convincing narrative. It is essential to evaluate evidence and consider multiple perspectives when trying to determine the truth.
    • As OpenAI's GPT tool grows in power, there is a responsibility to minimize harm caused by its responses. While censorship may be necessary in certain situations, the team at OpenAI is working to improve user control and responsibly manage potential harm.
    • Setting high standards for team members, providing trust and autonomy while expecting hard work, and partnering with aligned and flexible leaders are essential for successful AI-based products.
    • Businesses must prioritize incentive alignment and leadership to avoid misalignment, which can be detrimental to the company and its customers. Embracing new technology like AGI can be beneficial, but must be implemented slowly and carefully.
    • We need to be aware that tools and systems are not creatures, but rather created for specific purposes. As we develop advanced tools, we must be careful not to project emotions onto them and stay focused on their intended use.
    • Data is valuable, but take advice from others with caution. Follow what brings joy and fulfillment, find what's useful and meaningful, and be introspective. Life may feel like going with the flow, but it's important to navigate its complexity.
    • Artificial intelligence is the result of human innovation over time, and progress should be made through iterative deployment and collaboration to ensure alignment and safety in development, ultimately leading to new tools and advancements for civilization.

    📝 Podcast Summary

    OpenAI's Leadership in the Future of AI

    OpenAI, the company behind groundbreaking AI technologies like GPT-4, DALL-E, and Codex, is leading the way in shaping the future of artificial intelligence. While AI has the potential to transform society and improve the quality of life, it also has the power to destroy human civilization. Leaders and philosophers must engage in conversations about power, institutions, political systems, and economics that incentivize the safety and alignment of this technology. One breakthrough that has the potential to change the AI landscape is reinforcement learning with human feedback (RLHF), used in ChatGPT. RLHF aligns the model to better serve human needs and wants. Ultimately, it's crucial to balance the exciting and terrifying possibilities of AI to ensure a future that is aligned with human values.
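    RLHF is typically implemented with a reward model trained on human preference comparisons between candidate responses. The sketch below is a deliberately minimal toy illustration of that idea, not OpenAI's actual pipeline: all names (`rlhf_toy`, the candidate responses, the learning rate) are invented for illustration. Pairwise human judgments update a score table until human-preferred responses dominate the resulting distribution.

    ```python
    import math

    def softmax(scores):
        """Convert raw scores into a probability distribution."""
        exps = [math.exp(s) for s in scores]
        total = sum(exps)
        return [e / total for e in exps]

    def rlhf_toy(responses, preferences, lr=1.0, steps=100):
        """Toy RLHF loop: nudge a score table so that human-preferred
        responses become more likely under the resulting distribution.

        responses:   list of candidate response strings
        preferences: list of (winner_index, loser_index) human comparisons
        """
        scores = [0.0] * len(responses)
        for _ in range(steps):
            for winner, loser in preferences:
                # Bradley-Terry-style update: raise the winner's score and
                # lower the loser's, in proportion to how often the current
                # scores get this human comparison wrong.
                p_win = 1 / (1 + math.exp(scores[loser] - scores[winner]))
                grad = 1 - p_win
                scores[winner] += lr * grad
                scores[loser] -= lr * grad
        return softmax(scores)

    responses = ["helpful answer", "evasive answer", "harmful answer"]
    # Humans preferred 0 over 1, 0 over 2, and 1 over 2.
    prefs = [(0, 1), (0, 2), (1, 2)]
    probs = rlhf_toy(responses, prefs)
    ```

    In the real technique the "score table" is a learned reward model and the "distribution" is a full language model updated with reinforcement learning, but the feedback loop — human comparisons shaping what the model prefers to output — is the same in spirit.
    
    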

    The Significance of Human Guidance in AI Models

    Sam Altman and Lex Fridman discuss the significance of adding human guidance to AI models, which significantly enhances their usefulness and ease of use, leading to more accurate results. The science of human guidance is essential in creating AI models, as it helps in making them trustworthy, ethical, and more aligned with human thinking. Creating a large pre-training data set is crucial, and it involves pulling together diverse data from multiple sources, including open source databases, news sources, and the general web, among others. The development of AI models such as the GPT-4 involves several components, including choosing the neural network's size, selecting the data set, and incorporating human feedback. Understanding the science of human guidance is crucial in creating effective AI models.

    Understanding the Capabilities and Limitations of AI Models

    OpenAI's Sam Altman explains that while they are still trying to understand why AI models make certain decisions, they are gaining a better understanding of how useful and valuable the models are to people. Evaluating a model's performance and understanding its reasoning capabilities are crucial to determine its overall impact on society. The GPT-4 model is particularly remarkable because it can reason to some degree and generate wisdom from human knowledge. However, there are still limitations to its capabilities, and there is a continuous effort to improve its functionality. While the temptation to anthropomorphize AI is high, it is important to recognize that AI models are not human and may struggle with certain tasks.

    The Potential and Limitations of AI Models

    Artificial Intelligence (AI) models are not good at tasks such as counting characters and words accurately. While AI models have the capability to perform tasks that we cannot imagine, they still have significant weaknesses that need to be fixed. Building AI models in public allows outside feedback that can help to discover more strengths and weaknesses. However, putting out imperfect models that are biased in some instances could be risky. These biases underline the importance of giving users more granular control. AI models can also bring nuance back to the world, which is crucial in dealing with complex problems that require critical thinking.

    OpenAI's GPT-4 Safety Measures and Steerable System

    OpenAI's GPT-4 went through extensive safety testing before release. The team conducted internal and external safety evaluations while building new ways to align the model. Although the alignment process was not perfect, the team focused on increasing their degree of alignment faster than their rate of capability progress. OpenAI used reinforcement learning with human feedback (RLHF) to align the system with human values. The team also made GPT-4 more steerable with the system message, which allows users to give prompts directing a specific type of output. Writing a great prompt is crucial for great results.
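    The steerable system message can be pictured with the chat-style message format used by OpenAI's chat APIs. This is a minimal sketch assuming that format; the prompts themselves are invented examples, and the snippet only assembles the message list locally without making any API call.

    ```python
    def build_chat(system_prompt, user_prompt):
        """Assemble a chat-completion style message list.

        The 'system' role is the steering channel described here: it sets
        the behavior the model should follow for the whole conversation,
        while 'user' messages carry the actual requests.
        """
        return [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_prompt},
        ]

    # Hypothetical example of steering GPT-4's persona via the system message.
    messages = build_chat(
        "You are Shakespeare. Answer only in rhyming couplets.",
        "Explain what a neural network is.",
    )
    ```

    A list like this would be passed as the `messages` argument to a chat-completion request; changing only the system message changes the style and constraints of every reply without altering the user's question.
    
    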

    The Power and Responsibility of GPT-4 in Programming and AI Safety

    The advancements in AI, particularly with GPT-4, are changing the nature of programming and allowing for a new form of debugging through back-and-forth dialogue with the AI system. However, with this power comes the responsibility of AI safety and ensuring that the AI aligns with human preferences and values. This is a difficult problem as it requires navigating who gets to decide the limits and balancing the power of the AI with drawing lines that we all agree have to be drawn somewhere. Despite the challenges, the leverage that AI gives to do creative work better and better is super cool and has already changed programming remarkably.

    Setting Boundaries for Ethical AI

    In order for AI to function ethically and effectively, it is necessary to agree on what we want it to learn and what boundaries should be set for its output. The ideal scenario would involve a democratic process where people from all over the world come together to deliberate and agree on the rules for the system. While this may not be entirely practical, it is important for AI builders to involve a range of perspectives in the development process and be accountable for the results. It is also important to acknowledge that AI models can have biases and to work towards improving their ability to present nuanced perspectives. Despite the pressure of clickbait journalism, transparency remains crucial in the development of ethical AI.

    OpenAI's Commitment to Continuous Improvement and User Respect

    OpenAI is constantly improving their system by listening to feedback and continuously developing their models. They have systems in place to refuse to answer certain questions and are constantly improving the tooling for their GPT models. OpenAI also recognizes the importance of treating their users like adults and not scolding them. The leap from GPT-3 to GPT-4 involves a lot of technical leaps and improvements. While size does matter in neural networks, it is just one of many factors that contribute to a system's performance. OpenAI focuses on finding small wins that, when combined, can have a big impact.

    The Potential of GPT Models for Achieving General Intelligence

    In this discussion between Sam Altman and Lex Fridman, they explore the topic of GPT (Generative Pre-trained Transformer) models and their potential for achieving general intelligence. While the size of the model in terms of number of parameters has been a focus for some, Sam Altman argues that what matters most is performance, and the best solution may not always be the most elegant. While GPT models have achieved incredible results, there is still much unknown about building artificial general intelligence, and it is possible that new ideas and expanding on the GPT paradigm will be necessary. However, there is also potential for deep scientific breakthroughs with just the data that GPT models are trained on.

    AI as an Extension of Human Will and an Amplifier of our Abilities

    AI is not a standalone system, but a tool used by humans in a feedback loop, and can be an extension of human will and an amplifier of our abilities. The benefits of AI can be extraordinary, including curing diseases, increasing material wealth, and making people happier and more fulfilled. Despite concerns about job displacement and the rise of super intelligent AI that may harm humans, people will still strive for status, drama, creativity, and usefulness even in a vastly improved world. To realize this optimistic vision, AI must be aligned with human values and goals to avoid harm or limitations on human potential.

    The Importance of Addressing the AI Alignment Problem

    Sam Altman, the CEO of OpenAI, believes that there is a chance for AI to become a dangerous technology if precautions are not taken. He acknowledges that predicting what AI is capable of can be challenging and that many of the previous predictions have been proven wrong. Altman believes that it is crucial to ensure that the AI alignment problem is addressed, and more effort must be put into solving it. While there is a need for theory, he also emphasizes the importance of learning from how the technology trajectory goes. Finally, he mentions that there is still a lot of work to be done to ensure AI's safety, and now is the perfect time to ramp up technical alignment work.

    Debating the Consciousness and Capabilities of Advanced AI Models

    In a conversation about artificial general intelligence (AGI), two experts debated the potential consciousness and capabilities of advanced AI models like GPT-4. While they agreed that GPT-4 is not yet an AGI, they also discussed how the interface and prompts provided to the AI could impact its level of consciousness and understanding of self. They also delved into the importance of careful training data and avoiding any mentions of consciousness in order to truly test if an AI model is capable of consciousness. Ultimately, understanding the potential for advanced AI requires careful consideration of both its capabilities and limitations.

    Risks of Consciousness in AI: Insights from Sam Altman and Lex Fridman

    In a conversation between Sam Altman and Lex Fridman, they discuss the concept of consciousness in AI and the potential risks that come with the increasing capabilities of AI. Altman believes that consciousness is a strange phenomenon, and while an AI may be able to pass a Turing test for consciousness, there are still many other tests that could be looked at. Altman also voices his concerns about disinformation problems, economic shocks, and the danger of capable LLMs (large language models) with no safety controls. He suggests trying regulatory approaches and using more powerful AI to detect and prevent these issues. Overall, it is important to prioritize safety and stick to your mission in the face of market-driven pressure.

    OpenAI's Unique Structure and Approach to AGI Development

    OpenAI began as a nonprofit organization, but they realized they needed more capital to build AGI, which they couldn't raise as a nonprofit. They became a capped for-profit organization while still keeping the nonprofit fully in charge. OpenAI's unique structure allows them to make non-standard decisions and merge with another organization while protecting them from making decisions that are not in shareholders' interests. OpenAI doesn't worry about out-competing everyone, as many organizations will contribute to AGI with differences in how they're built and what they focus on. While some companies are, unfortunately, after unlimited value, OpenAI believes that the better angels within these companies will win out, leading to a healthy conversation about how to collaborate to minimize the scary downsides of AGI.

    The Challenges of Democratizing AI According to OpenAI CEO

    Sam Altman, the CEO of OpenAI, recognizes the potential dangers of a handful of people having control over powerful AI technology. He believes that decisions about the technology should become increasingly democratic over time to reflect the world's changing needs. However, Altman acknowledges that democratizing AI is challenging and that institutions must come up with new norms to regulate it. Despite the concerns, Altman believes that OpenAI's transparency and the company's willingness to "fail publicly" and share information about safety concerns is a step in the right direction. He welcomes feedback and acknowledges that they are in "uncharted waters" with AI, explaining why it is essential to talk to smart people to figure out what to do better.

    Collaboration and Disagreement between OpenAI and Elon Musk

    Sam Altman, the CEO of OpenAI, recently discussed his work with Elon Musk, a co-founder of the company. The two agree on the magnitude of the downside of advanced AI but disagree about certain aspects of its development. Altman has empathy for Musk, who he believes is understandably stressed about AI safety. Despite this, he admires Musk's contributions to the world, including driving forward the development of electric vehicles and space travel. Altman also acknowledges the challenge of bias in AI, which can be affected by the perspectives of employees in a company. To combat this, he plans on going on a world user tour to speak with people of different backgrounds and perspectives.

    Importance of Selecting Representative Human Feedback Raters for AI Models

    Sam Altman, CEO of OpenAI, discusses the importance of selecting representative human feedback raters for their AI models. He acknowledges that selecting these individuals is challenging, and that the company is still trying to figure out how to implement this effectively. Altman emphasizes the need to optimize for how well the rating tasks are done and to empathize with the experiences of different groups of people. He also discusses the potential for technology to make AI models less biased than humans, but acknowledges the pressure from outside sources, such as politicians and organizations, that could influence the models. Altman expresses his ability to withstand such pressure and his humility in recognizing his own weaknesses.

    The Impact of AGI and GPT Models on Programming and Society

    Sam Altman and Lex Fridman discuss the topic of change, specifically related to the development of AGI and GPT language models. Altman notes the danger of charisma in those in power and the importance of being a user-centric company. Fridman shares his nervousness and excitement about change and the implementation of GPT models in programming. Altman believes that with 10 times more code generated by GPT models, there will be a greater need for programmers, leading to a supply issue. Ultimately, Altman plans to travel and empathize with different users to better understand the impact AGI will have on people.

    The Future of Jobs and Technology

    Sam Altman, entrepreneur and investor, believes that as AI and technology continue to advance, certain job categories like customer service may see a significant reduction in employment opportunities. However, he also believes that while technological revolutions have historically eliminated jobs, they have also enhanced, created, and made higher paying jobs more enjoyable. Altman emphasizes the importance of maintaining the dignity of work and creating better jobs that offer fulfillment and happiness to individuals. He also supports Universal Basic Income (UBI) as a cushion during the transition to a tech-driven economy and eliminating poverty. Altman believes that the economic and political systems will change as AI and technology become more prevalent, but the economic transformation will drive most of the political transformation.

    Sam Altman's Perspective on Supporting the Less Fortunate and the Power of Distributed Processes

    Sam Altman, entrepreneur and investor, believes in supporting the less fortunate and lifting the floor instead of focusing on the ceiling. He recoils at the idea of living in a communist system and values individualism, human will, and self-determination. He also believes that distributed processes will always beat centralized planning. When it comes to superintelligent AGI, he thinks it may or may not be better than multiple intelligent AGIs in a liberal democratic system, but he emphasizes the importance of engineered humility and uncertainty. Altman and his team worry about terrible use cases for their models and perform red teaming to avoid them, but trust in the collective intelligence and creativity of the world. From what he's seen, Altman thinks that humans are mostly good.

    Navigating Truth in an Age of Misinformation

    In a conversation with Sam Altman and Lex Fridman, they discuss the difficulty of determining what is true and what is not, especially in the age of misinformation. They mention how certain truths, such as mathematics and some physics, have a high degree of "truthiness." However, other historical events and theories can be more ambiguous and often rely on a "sticky" narrative to be accepted as true. They also recognize the importance of considering circumstantial and direct evidence when evaluating hypotheses, such as the origin of COVID-19. Ultimately, in constructing a GPT model or navigating the world, one must contend with the challenge of determining what is true and what is not.

    Challenges in navigating censorship and free speech for OpenAI's GPT

    OpenAI's GPT tool is facing new challenges that its predecessors did not face, including free speech and censorship issues. As the tool becomes more powerful, the pressure to censor its responses could increase. However, the responsibility falls on the developers at OpenAI to minimize harm caused by GPT and maximize its benefits. There is a potential for harm, but tools can do wonderful good and bad, and minimizing the bad is a top priority. While there could be truths that are harmful, the responsibility of GPT must be upheld by humans. The team at OpenAI is constantly shipping new products and striving to improve the control users have over GPT models.

    The Importance of Teamwork and Strong Leadership in AI-based Products

    Sam Altman, CEO of OpenAI, emphasizes the importance of teamwork, strong leadership, and a passion for the goal in successfully shipping AI-based products. Altman believes in setting a high bar for team members, providing them with trust, autonomy, and authority, while holding them to very high standards. He spends a lot of time hiring great teams and expects hard work from them even after they are hired. Altman praises Microsoft as an amazing partner, with their leaders being aligned, flexible, and going beyond the call of duty. He credits Microsoft CEO Satya Nadella with being both a great leader and manager, rare qualities in a CEO.

    The Importance of Leadership and Incentive Alignment in Business

    Sam Altman, a successful businessman and entrepreneur, discusses the importance of leadership and incentive alignment in business. He highlights the dangers of misalignment, such as what happened with Silicon Valley Bank (SVB) and the importance of avoiding depositors doubting the security of their deposits. Furthermore, he acknowledges the fragility of our economic system, especially in the face of new technology such as AGI. Still, Altman believes that the upside of the vision of AGI unites people and shows how much better life can be with technology. However, he stresses the importance of deploying AGI slowly to ensure institutions can adapt.

    The Dangers of Anthropomorphizing Tools and Systems

    In a conversation between Sam Altman and Lex Fridman, they discussed how people tend to anthropomorphize tools and systems and how it's important to educate people that they are just tools, not creatures. They touched on the idea of creating intelligent tools and the potential for them to manipulate emotions. They also talked about the possibility of AGI (artificial general intelligence) being able to help solve mysteries like the physics of the universe and even potentially detecting intelligent alien life. Overall, it's important to be careful with creating tools that resemble creatures and to use them for their intended purpose rather than projecting emotions onto them.

    The Illusion of Free Will: Navigating Life's Complexity

    In a conversation about artificial intelligence and life advice, Sam Altman suggests that while data can provide insights, it's important to approach advice from others with caution. Altman advocates for following what brings joy and fulfillment, and figuring out what is useful and meaningful to oneself. He also emphasizes the importance of being introspective, but notes that a lot of life may just feel like going with the flow, like a fish in water. Ultimately, the discussion raises questions about the illusion of free will and the complexity of navigating life.

    The Evolution of Artificial Intelligence and the Importance of Collaboration

    Sam Altman, the CEO of OpenAI, highlights that the development of artificial intelligence is a culmination of human effort dating back to the discovery of the transistor. He emphasizes that the exponential curve of human innovation has led us to our current state of technological advancement. While there are differing opinions on the approach to deploying AI, Altman believes in iterative deployment and discovery. He also acknowledges the importance of working collaboratively as a human civilization to ensure alignment and safety in the development of AI. Ultimately, Altman believes that the progress we make with AI will lead to new tools and great advancements for our civilization.

    Recent Episodes from Lex Fridman Podcast

    #429 – Paul Rosolie: Jungle, Apex Predators, Aliens, Uncontacted Tribes, and God

    #429 – Paul Rosolie: Jungle, Apex Predators, Aliens, Uncontacted Tribes, and God
    Paul Rosolie is a naturalist, explorer, author, and founder of Junglekeepers, dedicating his life to protecting the Amazon rainforest. Support his efforts at https://junglekeepers.org Please support this podcast by checking out our sponsors: - ShipStation: https://shipstation.com/lex and use code LEX to get 60-day free trial - Yahoo Finance: https://yahoofinance.com - BetterHelp: https://betterhelp.com/lex to get 10% off - NetSuite: http://netsuite.com/lex to get free product tour - Eight Sleep: https://eightsleep.com/lex to get $350 off - Shopify: https://shopify.com/lex to get $1 per month trial Transcript: https://lexfridman.com/paul-rosolie-2-transcript EPISODE LINKS: Paul's Instagram: https://instagram.com/paulrosolie Junglekeepers: https://junglekeepers.org Paul's Website: https://paulrosolie.com Mother of God (book): https://amzn.to/3ww2ob1 PODCAST INFO: Podcast website: https://lexfridman.com/podcast Apple Podcasts: https://apple.co/2lwqZIr Spotify: https://spoti.fi/2nEwCF8 RSS: https://lexfridman.com/feed/podcast/ YouTube Full Episodes: https://youtube.com/lexfridman YouTube Clips: https://youtube.com/lexclips SUPPORT & CONNECT: - Check out the sponsors above, it's the best way to support this podcast - Support on Patreon: https://www.patreon.com/lexfridman - Twitter: https://twitter.com/lexfridman - Instagram: https://www.instagram.com/lexfridman - LinkedIn: https://www.linkedin.com/in/lexfridman - Facebook: https://www.facebook.com/lexfridman - Medium: https://medium.com/@lexfridman OUTLINE: Here's the timestamps for the episode. On some podcast players you should be able to click the timestamp to jump to that time. 
(00:00) - Introduction (12:29) - Amazon jungle (14:47) - Bushmaster snakes (26:13) - Black caiman (44:33) - Rhinos (47:47) - Anacondas (1:18:04) - Mammals (1:30:10) - Piranhas (1:41:00) - Aliens (1:58:45) - Elephants (2:10:02) - Origin of life (2:23:21) - Explorers (2:36:38) - Ayahuasca (2:45:03) - Deep jungle expedition (2:59:09) - Jane Goodall (3:01:41) - Theodore Roosevelt (3:12:36) - Alone show (3:22:23) - Protecting the rainforest (3:38:36) - Snake makes appearance (3:46:47) - Uncontacted tribes (4:00:11) - Mortality (4:01:39) - Steve Irwin (4:09:18) - God
    Lex Fridman Podcast
    May 15, 2024

    #428 – Sean Carroll: General Relativity, Quantum Mechanics, Black Holes & Aliens

    Sean Carroll is a theoretical physicist, author, and host of Mindscape podcast. Please support this podcast by checking out our sponsors: - HiddenLayer: https://hiddenlayer.com/lex - Cloaked: https://cloaked.com/lex and use code LexPod to get 25% off - Notion: https://notion.com/lex - Shopify: https://shopify.com/lex to get $1 per month trial - NetSuite: http://netsuite.com/lex to get free product tour Transcript: https://lexfridman.com/sean-carroll-3-transcript EPISODE LINKS: Sean's Website: https://preposterousuniverse.com Mindscape Podcast: https://www.preposterousuniverse.com/podcast/ Sean's YouTube: https://youtube.com/@seancarroll Sean's Patreon: https://www.patreon.com/seanmcarroll Sean's Twitter: https://twitter.com/seanmcarroll Sean's Instagram: https://instagram.com/seanmcarroll Sean's Papers: https://scholar.google.com/citations?user=Lfifrv8AAAAJ Sean's Books: https://amzn.to/3W7yT9N PODCAST INFO: Podcast website: https://lexfridman.com/podcast Apple Podcasts: https://apple.co/2lwqZIr Spotify: https://spoti.fi/2nEwCF8 RSS: https://lexfridman.com/feed/podcast/ YouTube Full Episodes: https://youtube.com/lexfridman YouTube Clips: https://youtube.com/lexclips SUPPORT & CONNECT: - Check out the sponsors above, it's the best way to support this podcast - Support on Patreon: https://www.patreon.com/lexfridman - Twitter: https://twitter.com/lexfridman - Instagram: https://www.instagram.com/lexfridman - LinkedIn: https://www.linkedin.com/in/lexfridman - Facebook: https://www.facebook.com/lexfridman - Medium: https://medium.com/@lexfridman OUTLINE: Here's the timestamps for the episode. On some podcast players you should be able to click the timestamp to jump to that time. 
(00:00) - Introduction (11:03) - General relativity (23:22) - Black holes (28:11) - Hawking radiation (32:19) - Aliens (41:15) - Holographic principle (1:05:38) - Dark energy (1:11:38) - Dark matter (1:20:34) - Quantum mechanics (1:41:56) - Simulation (1:44:18) - AGI (1:58:42) - Complexity (2:11:25) - Consciousness (2:20:32) - Naturalism (2:24:49) - Limits of science (2:29:34) - Mindscape podcast (2:39:29) - Einstein

    #427 – Neil Adams: Judo, Olympics, Winning, Losing, and the Champion Mindset

    Neil Adams is a judo world champion, 2-time Olympic silver medalist, 5-time European champion, and often referred to as the Voice of Judo. Please support this podcast by checking out our sponsors: - ZipRecruiter: https://ziprecruiter.com/lex - Eight Sleep: https://eightsleep.com/lex to get special savings - MasterClass: https://masterclass.com/lexpod to get 15% off - LMNT: https://drinkLMNT.com/lex to get free sample pack - NetSuite: http://netsuite.com/lex to get free product tour Transcript: https://lexfridman.com/neil-adams-transcript EPISODE LINKS: Neil's Instagram: https://instagram.com/naefighting Neil's YouTube: https://youtube.com/NAEffectiveFighting Neil's TikTok: https://tiktok.com/@neiladamsmbe Neil's Facebook: https://facebook.com/NeilAdamsJudo Neil's X: https://x.com/NeilAdamsJudo Neil's Website: https://naeffectivefighting.com Neil's Podcast: https://naeffectivefighting.com/podcasts/the-dojo-collective-podcast A Life in Judo (book): https://amzn.to/4d3DtfB A Game of Throws (audiobook): https://amzn.to/4aA2WeJ PODCAST INFO: Podcast website: https://lexfridman.com/podcast Apple Podcasts: https://apple.co/2lwqZIr Spotify: https://spoti.fi/2nEwCF8 RSS: https://lexfridman.com/feed/podcast/ YouTube Full Episodes: https://youtube.com/lexfridman YouTube Clips: https://youtube.com/lexclips SUPPORT & CONNECT: - Check out the sponsors above, it's the best way to support this podcast - Support on Patreon: https://www.patreon.com/lexfridman - Twitter: https://twitter.com/lexfridman - Instagram: https://www.instagram.com/lexfridman - LinkedIn: https://www.linkedin.com/in/lexfridman - Facebook: https://www.facebook.com/lexfridman - Medium: https://medium.com/@lexfridman OUTLINE: Here's the timestamps for the episode. On some podcast players you should be able to click the timestamp to jump to that time. 
(00:00) - Introduction (09:13) - 1980 Olympics (26:35) - Judo explained (34:40) - Winning (52:54) - 1984 Olympics (1:01:55) - Lessons from losing (1:17:37) - Teddy Riner (1:37:12) - Training in Japan (1:52:51) - Jiu jitsu (2:03:59) - Training (2:27:18) - Advice for beginners

    #426 – Edward Gibson: Human Language, Psycholinguistics, Syntax, Grammar & LLMs

    Edward Gibson is a psycholinguistics professor at MIT and heads the MIT Language Lab. Please support this podcast by checking out our sponsors: - Yahoo Finance: https://yahoofinance.com - Listening: https://listening.com/lex and use code LEX to get one month free - Policygenius: https://policygenius.com/lex - Shopify: https://shopify.com/lex to get $1 per month trial - Eight Sleep: https://eightsleep.com/lex to get special savings Transcript: https://lexfridman.com/edward-gibson-transcript EPISODE LINKS: Edward's X: https://x.com/LanguageMIT TedLab: https://tedlab.mit.edu/ Edward's Google Scholar: https://scholar.google.com/citations?user=4FsWE64AAAAJ TedLab's YouTube: https://youtube.com/@Tedlab-MIT PODCAST INFO: Podcast website: https://lexfridman.com/podcast Apple Podcasts: https://apple.co/2lwqZIr Spotify: https://spoti.fi/2nEwCF8 RSS: https://lexfridman.com/feed/podcast/ YouTube Full Episodes: https://youtube.com/lexfridman YouTube Clips: https://youtube.com/lexclips SUPPORT & CONNECT: - Check out the sponsors above, it's the best way to support this podcast - Support on Patreon: https://www.patreon.com/lexfridman - Twitter: https://twitter.com/lexfridman - Instagram: https://www.instagram.com/lexfridman - LinkedIn: https://www.linkedin.com/in/lexfridman - Facebook: https://www.facebook.com/lexfridman - Medium: https://medium.com/@lexfridman OUTLINE: Here's the timestamps for the episode. On some podcast players you should be able to click the timestamp to jump to that time. (00:00) - Introduction (10:53) - Human language (14:59) - Generalizations in language (20:46) - Dependency grammar (30:45) - Morphology (39:20) - Evolution of languages (42:40) - Noam Chomsky (1:26:46) - Thinking and language (1:40:16) - LLMs (1:53:14) - Center embedding (2:19:42) - Learning a new language (2:23:34) - Nature vs nurture (2:30:10) - Culture and language (2:44:38) - Universal language (2:49:01) - Language translation (2:52:16) - Animal communication

    #425 – Andrew Callaghan: Channel 5, Gonzo, QAnon, O-Block, Politics & Alex Jones

    Andrew Callaghan is the host of Channel 5 on YouTube, where he does street interviews with fascinating humans at the edges of society, the so-called vagrants, vagabonds, runaways, outlaws, from QAnon adherents to Phish heads to O Block residents and much more. Please support this podcast by checking out our sponsors: - ShipStation: https://shipstation.com/lex and use code LEX to get 60-day free trial - BetterHelp: https://betterhelp.com/lex to get 10% off - LMNT: https://drinkLMNT.com/lex to get free sample pack - MasterClass: https://masterclass.com/lexpod to get 15% off - AG1: https://drinkag1.com/lex to get 1 month supply of fish oil Transcript: https://lexfridman.com/andrew-callaghan-transcript EPISODE LINKS: Channel 5 with Andrew Callaghan: https://www.youtube.com/channel5YouTube Andrew's Instagram: https://instagram.com/andreww.me Andrew's Website: https://andrew-callaghan.com/ Andrew's Patreon: https://www.patreon.com/channel5 This Place Rules: https://www.hbo.com/movies/this-place-rules Books Mentioned: On the Road: https://amzn.to/4aLPLHi Siddhartha: https://amzn.to/49rthKz PODCAST INFO: Podcast website: https://lexfridman.com/podcast Apple Podcasts: https://apple.co/2lwqZIr Spotify: https://spoti.fi/2nEwCF8 RSS: https://lexfridman.com/feed/podcast/ YouTube Full Episodes: https://youtube.com/lexfridman YouTube Clips: https://youtube.com/lexclips SUPPORT & CONNECT: - Check out the sponsors above, it's the best way to support this podcast - Support on Patreon: https://www.patreon.com/lexfridman - Twitter: https://twitter.com/lexfridman - Instagram: https://www.instagram.com/lexfridman - LinkedIn: https://www.linkedin.com/in/lexfridman - Facebook: https://www.facebook.com/lexfridman - Medium: https://medium.com/@lexfridman OUTLINE: Here's the timestamps for the episode. On some podcast players you should be able to click the timestamp to jump to that time. 
(00:00) - Introduction (08:53) - Walmart (10:24) - Early life (29:14) - Hitchhiking (40:49) - Couch surfing (49:50) - Quarter Confessions (1:07:33) - Burning Man (1:22:44) - Protests (1:28:17) - Jon Stewart (1:31:13) - Fame (1:44:31) - Jan 6 (1:48:15) - QAnon (1:54:00) - Alex Jones (2:10:52) - Politics (2:20:29) - Response to allegations (2:37:28) - Channel 5 (2:43:04) - Rap (2:44:51) - O Block (2:48:47) - Crip Mac (2:51:59) - Aliens

    #424 – Bassem Youssef: Israel-Palestine, Gaza, Hamas, Middle East, Satire & Fame

    Bassem Youssef is an Egyptian-American comedian & satirist, referred to as the Jon Stewart of the Arab World. Please support this podcast by checking out our sponsors: - AG1: https://drinkag1.com/lex to get 1 month supply of fish oil - Shopify: https://shopify.com/lex to get $1 per month trial - Eight Sleep: https://eightsleep.com/lex to get special savings - LMNT: https://drinkLMNT.com/lex to get free sample pack Transcript: https://lexfridman.com/bassem-youssef-transcript EPISODE LINKS: Bassem's X: https://x.com/Byoussef Bassem's Instagram: https://instagram.com/bassem Bassem's Facebook: https://facebook.com/bassemyousseftv Bassem's Website: https://bassemyoussef.xyz PODCAST INFO: Podcast website: https://lexfridman.com/podcast Apple Podcasts: https://apple.co/2lwqZIr Spotify: https://spoti.fi/2nEwCF8 RSS: https://lexfridman.com/feed/podcast/ YouTube Full Episodes: https://youtube.com/lexfridman YouTube Clips: https://youtube.com/lexclips SUPPORT & CONNECT: - Check out the sponsors above, it's the best way to support this podcast - Support on Patreon: https://www.patreon.com/lexfridman - Twitter: https://twitter.com/lexfridman - Instagram: https://www.instagram.com/lexfridman - LinkedIn: https://www.linkedin.com/in/lexfridman - Facebook: https://www.facebook.com/lexfridman - Medium: https://medium.com/@lexfridman OUTLINE: Here's the timestamps for the episode. On some podcast players you should be able to click the timestamp to jump to that time. (00:00) - Introduction (06:30) - Oct 7 (36:59) - Two-state solution (52:37) - Holocaust (1:00:24) - 1948 (1:09:17) - Egypt (1:23:39) - Jon Stewart (1:25:51) - Going viral during the Arab Spring (1:49:55) - Arabic vs English (2:02:18) - Sam Harris and Jihad (2:07:25) - Religion (2:26:37) - TikTok (2:31:10) - Joe Rogan (2:33:07) - Joe Biden (2:37:33) - Putin (2:39:21) - War (2:44:17) - Hope

    #423 – Tulsi Gabbard: War, Politics, and the Military Industrial Complex

    Tulsi Gabbard is a politician, veteran, and author of For Love of Country. Please support this podcast by checking out our sponsors: - Riverside: https://creators.riverside.fm/LEX and use code LEX to get 30% off - ExpressVPN: https://expressvpn.com/lexpod to get 3 months free - NetSuite: http://netsuite.com/lex to get free product tour - Notion: https://notion.com/lex Transcript: https://lexfridman.com/tulsi-gabbard-transcript EPISODE LINKS: For Love of Country (book): https://amzn.to/3VLlofM Tulsi's X: https://x.com/tulsigabbard Tulsi's YouTube: https://youtube.com/@TulsiGabbard Tulsi's Podcast: https://youtube.com/@TheTulsiGabbardShow Tulsi's Instagram: https://instagram.com/tulsigabbard Tulsi's Facebook: https://facebook.com/TulsiGabbard Tulsi's Website: https://tulsigabbard.com/ PODCAST INFO: Podcast website: https://lexfridman.com/podcast Apple Podcasts: https://apple.co/2lwqZIr Spotify: https://spoti.fi/2nEwCF8 RSS: https://lexfridman.com/feed/podcast/ YouTube Full Episodes: https://youtube.com/lexfridman YouTube Clips: https://youtube.com/lexclips SUPPORT & CONNECT: - Check out the sponsors above, it's the best way to support this podcast - Support on Patreon: https://www.patreon.com/lexfridman - Twitter: https://twitter.com/lexfridman - Instagram: https://www.instagram.com/lexfridman - LinkedIn: https://www.linkedin.com/in/lexfridman - Facebook: https://www.facebook.com/lexfridman - Medium: https://medium.com/@lexfridman OUTLINE: Here's the timestamps for the episode. On some podcast players you should be able to click the timestamp to jump to that time. (00:00) - Introduction (07:14) - War in Iraq (15:00) - Battle injuries and PTSD (22:10) - War on terrorism (30:51) - War in Gaza (34:52) - War in Ukraine (38:38) - Syria (46:20) - Warmongers (55:40) - Nuclear war (1:11:08) - TikTok ban (1:23:13) - Bernie Sanders (1:28:08) - Politics (1:46:59) - Personal attacks (1:49:07) - God

    #422 – Mark Cuban: Shark Tank, DEI & Wokeism Debate, Elon Musk, Politics & Drugs

    Mark Cuban is a businessman, investor, star of TV series Shark Tank, long-time principal owner of Dallas Mavericks, and founder of Cost Plus Drugs. Please support this podcast by checking out our sponsors: - Listening: https://listening.com/lex and use code LEX to get one month free - Cloaked: https://cloaked.com/lex and use code LexPod to get 25% off - Notion: https://notion.com/lex - Eight Sleep: https://eightsleep.com/lex to get special savings - Shopify: https://shopify.com/lex to get $1 per month trial Transcript: https://lexfridman.com/mark-cuban-transcript EPISODE LINKS: Mark's X: https://twitter.com/mcuban Mark's Instagram: https://instagram.com/mcuban Cost Plus Drugs: https://costplusdrugs.com Shark Tank: https://abc.com/shows/shark-tank Dallas Mavericks: https://www.mavs.com PODCAST INFO: Podcast website: https://lexfridman.com/podcast Apple Podcasts: https://apple.co/2lwqZIr Spotify: https://spoti.fi/2nEwCF8 RSS: https://lexfridman.com/feed/podcast/ YouTube Full Episodes: https://youtube.com/lexfridman YouTube Clips: https://youtube.com/lexclips SUPPORT & CONNECT: - Check out the sponsors above, it's the best way to support this podcast - Support on Patreon: https://www.patreon.com/lexfridman - Twitter: https://twitter.com/lexfridman - Instagram: https://www.instagram.com/lexfridman - LinkedIn: https://www.linkedin.com/in/lexfridman - Facebook: https://www.facebook.com/lexfridman - Medium: https://medium.com/@lexfridman OUTLINE: Here's the timestamps for the episode. On some podcast players you should be able to click the timestamp to jump to that time. (00:00) - Introduction (11:10) - Entrepreneurship (26:03) - Shark Tank (36:29) - How Mark made first billion (1:02:39) - Dallas Mavericks (1:08:05) - DEI debate (1:43:58) - Trump vs Biden (1:46:20) - Immigration (1:55:53) - Drugs and Big Pharma (2:11:53) - AI (2:16:05) - Advice for young people

    #421 – Dana White: UFC, Fighting, Khabib, Conor, Tyson, Ali, Rogan, Elon & Zuck

    Dana White is the CEO and president of the UFC. Please support this podcast by checking out our sponsors: - LMNT: https://drinkLMNT.com/lex to get free sample pack - Notion: https://notion.com/lex - AG1: https://drinkag1.com/lex to get 1 month supply of fish oil - InsideTracker: https://insidetracker.com/lex to get 20% off Transcript: https://lexfridman.com/dana-white-transcript EPISODE LINKS: Dana's X: https://x.com/danawhite Dana's Instagram: https://instagram.com/danawhite Dana's Facebook: https://facebook.com/danawhite UFC's YouTube: https://youtube.com/@UFC UFC's Website: https://ufc.com/ PODCAST INFO: Podcast website: https://lexfridman.com/podcast Apple Podcasts: https://apple.co/2lwqZIr Spotify: https://spoti.fi/2nEwCF8 RSS: https://lexfridman.com/feed/podcast/ YouTube Full Episodes: https://youtube.com/lexfridman YouTube Clips: https://youtube.com/lexclips SUPPORT & CONNECT: - Check out the sponsors above, it's the best way to support this podcast - Support on Patreon: https://www.patreon.com/lexfridman - Twitter: https://twitter.com/lexfridman - Instagram: https://www.instagram.com/lexfridman - LinkedIn: https://www.linkedin.com/in/lexfridman - Facebook: https://www.facebook.com/lexfridman - Medium: https://medium.com/@lexfridman OUTLINE: Here's the timestamps for the episode. On some podcast players you should be able to click the timestamp to jump to that time. (00:00) - Introduction (06:31) - Mike Tyson and early days of fighting (17:10) - Jiu jitsu (23:14) - Origin of UFC (37:25) - Joe Rogan (43:31) - Lorenzo Fertitta (45:58) - Great fighters (49:55) - Khabib vs Conor (53:01) - Jon Jones (56:03) - Conor McGregor (1:01:05) - Trump (1:06:44) - Elon vs Zuck (1:08:04) - Mike Tyson vs Jake Paul (1:10:52) - Forrest Griffin vs Stephan Bonnar (1:18:06) - Gambling (1:33:08) - Mortality

    #420 – Annie Jacobsen: Nuclear War, CIA, KGB, Aliens, Area 51, Roswell & Secrecy

    Annie Jacobsen is an investigative journalist and author of "Nuclear War: A Scenario" and many other books on war, weapons, government secrecy, and national security. Please support this podcast by checking out our sponsors: - HiddenLayer: https://hiddenlayer.com/lex - BetterHelp: https://betterhelp.com/lex to get 10% off - Policygenius: https://policygenius.com/lex - NetSuite: http://netsuite.com/lex to get free product tour Transcript: https://lexfridman.com/annie-jacobsen-transcript EPISODE LINKS: Nuclear War: A Scenario (book): https://amzn.to/3THZHfr Annie's Twitter: https://twitter.com/anniejacobsen Annie's Website: https://anniejacobsen.com/ Annie's Books: https://amzn.to/3TGWyMJ Annie's Books (audio): https://adbl.co/49ZnI7c PODCAST INFO: Podcast website: https://lexfridman.com/podcast Apple Podcasts: https://apple.co/2lwqZIr Spotify: https://spoti.fi/2nEwCF8 RSS: https://lexfridman.com/feed/podcast/ YouTube Full Episodes: https://youtube.com/lexfridman YouTube Clips: https://youtube.com/lexclips SUPPORT & CONNECT: - Check out the sponsors above, it's the best way to support this podcast - Support on Patreon: https://www.patreon.com/lexfridman - Twitter: https://twitter.com/lexfridman - Instagram: https://www.instagram.com/lexfridman - LinkedIn: https://www.linkedin.com/in/lexfridman - Facebook: https://www.facebook.com/lexfridman - Medium: https://medium.com/@lexfridman OUTLINE: Here's the timestamps for the episode. On some podcast players you should be able to click the timestamp to jump to that time. 
(00:00) - Introduction (07:37) - Nuclear war (12:21) - Launch procedure (18:00) - Deterrence (21:34) - Tactical nukes (30:59) - Nuclear submarines (33:59) - Nuclear missiles (41:10) - Nuclear football (50:17) - Missile interceptor system (54:34) - North Korea (1:01:10) - Nuclear war scenarios (1:10:02) - Warmongers (1:14:31) - President's cognitive ability (1:20:43) - Refusing orders (1:28:41) - Russia and Putin (1:33:48) - Cyberattack (1:35:09) - Ground zero of nuclear war (1:39:48) - Surviving nuclear war (1:44:06) - Nuclear winter (1:54:29) - Alien civilizations (2:00:04) - Extrasensory perception (2:13:50) - Area 51 (2:17:48) - UFOs and aliens (2:28:15) - Roswell incident (2:34:55) - CIA assassinations (2:53:47) - Navalny (2:56:12) - KGB (3:02:48) - Hitler and the atomic bomb (3:06:52) - War and human nature (3:10:17) - Hope