Max Tegmark: The Case for Halting AI Development | Lex Fridman Podcast #371


Max Tegmark is a physicist and AI researcher at MIT, co-founder of the Future of Life Institute, and author of Life 3.0: Being Human in the Age of Artificial Intelligence. Please support this podcast by checking out our sponsors:
– Notion: https://notion.com
– InsideTracker: https://insidetracker.com/lex to get 20% off
– Indeed: https://indeed.com/lex to get $75 credit

EPISODE LINKS:
Max’s Twitter:
Max’s Website:
Pause Giant AI Experiments (open letter):
Future of Life Institute:
Books and resources mentioned:
1. Life 3.0 (book):
2. Meditations on Moloch (essay):
3. Nuclear winter paper:

PODCAST INFO:
Podcast website:
Apple Podcasts:
Spotify:
RSS:
Full episodes playlist:
Clips playlist:

OUTLINE:
0:00 – Introduction
1:56 – Intelligent alien civilizations
14:20 – Life 3.0 and superintelligent AI
25:47 – Open letter to pause Giant AI Experiments
50:54 – Maintaining control
1:19:44 – Regulation
1:30:34 – Job automation
1:39:48 – Elon Musk
2:01:31 – Open source
2:08:01 – How AI may kill all humans
2:18:32 – Consciousness
2:27:54 – Nuclear winter
2:38:21 – Questions for AGI

SOCIAL:
– Twitter:
– LinkedIn:
– Facebook:
– Instagram:
– Medium:
– Reddit:
– Support on Patreon:




41 Comments

  1. Here are the timestamps. Please check out our sponsors to support this podcast.
    0:00 – Introduction & sponsor mentions:
    – Notion: https://notion.com
    – InsideTracker: https://insidetracker.com/lex to get 20% off
    – Indeed: https://indeed.com/lex to get $75 credit
    1:56 – Intelligent alien civilizations
    14:20 – Life 3.0 and superintelligent AI
    25:47 – Open letter to pause Giant AI Experiments
    50:54 – Maintaining control
    1:19:44 – Regulation
    1:30:34 – Job automation
    1:39:48 – Elon Musk
    2:01:31 – Open source
    2:08:01 – How AI may kill all humans
    2:18:32 – Consciousness
    2:27:54 – Nuclear winter
    2:38:21 – Questions for AGI

  2. Consciousness as a subjective experience by definition implies independence in goal setting, so you can't have it both ways: either AGI is something we can control like any other tool, or it's conscious and out of our control.

  3. Lex is among the very best interviewers. Brilliant questions and a compassionate view of humanity. Humility, openness, and brilliance. Very well informed. But also, he gives his guests space to really follow their thoughts without trying to make the show all about Lex F.
    This episode is so monumental. We are at a turning point on a level with harnessing fire or agriculture.
    This is a moment where we can make ourselves vastly better, or we can allow ourselves to be destroyed.

  4. I think the solution is simply to opt for ANI (Artificial Narrow Intelligence) for any activity we want to keep control of, and for any activity we see as desirable (the activities of the master).

    Hence planning and engineering progress in such a way that humans are kept in the equation.

    Then allowing a level of AGI for some activities which are more menial, less important for broad control, and less desirable, allowing AI to perform those tasks but nothing beyond that (the activities of the slave).

    This works economically as well, much like a society of Plebs (robotic slaves) and Patricians (human masters).

    This should usher in an age of great wealth and progress.

    (Note: I think ChatGPT and Midjourney etc. are already too general and give people too little control. I think the solution is to make the AI as narrow as is needed, even if this feels a bit like stripping away some of the progress made in these fields. We must think about it this way so as not to destroy the point of higher learning and human involvement. AI specialists are far better off developing self-driving cars, or AI for robotics, which would create a strong robotic labour force. The bulk of the wealth has never been in professional work, and usurping a professional role doesn't create much wealth. The wealth is where it's always been: in labour and huge numbers of robotic slaves. Usurping artistic roles is also unlikely to generate much wealth for humanity, or make people any more self-sufficient or economically independent than they used to be.)

    (AI must be kept in check, like a slave uprising is kept in check…)

  5. At 1:17:34 – This is an INCREDIBLY IMPORTANT distinction. The first and possibly most dangerous form of AI is a corporation. They have even convinced (some) humans that corporations are people too.

    The epitome of insanity is actually enforced law.

  6. I'm praying that the governments will get their acts together and unite on AI. I can think of nothing more important in our present time that deserves our attention more than AI development and safety. How I wish they would just unplug it all until we as a species can figure out what has been done and how to vastly minimize the possibilities discussed in this podcast.
    At 61, I had to pause and look up a lot of AI/computer tech terms… worth every bit of that time.
    Thank you so much for this!

  7. Pausing it is not enough. It needs to be shut down completely. "'Ooh, ah,' that's how it always starts. But then later there's running and screaming." – The Lost World: Jurassic Park

  8. 'Moloch' is clearly a notion of the 'Devil'… (on one level or the other).

    So effectively that's their excuse for collapsing various industries…

    Possibly destroying humanity?

    … the 'Devil' made 'em do it?

  9. The part we are missing is educating the entire human population to a level where we can ALL use this technology. Widening the gap between present education and those living in the equivalent of the Dark Ages is not helping ourselves.

  10. It seems to me that AI is suitable for finite, deterministic fields of data, for example a Go or chess game. Another analogy is from the field of biology: antibiotics. It is known that natural antibiotics occurring in nature can adapt to ever-changing adversaries, while synthetically made antibiotics cannot adapt. They always run out of "combinations" and stop working. They cannot evolve. Is this what will happen to AI as well?

  11. I hope we are overreacting. But no doubt this AI chat will change many things, even more than social media, Google, and blockchain.

  12. Pandora had a box. She was warned not to open that box. AI is our box. We need to remember our world has some extremely evil and wealthy individuals. We can start with good intentions, but once they get the AI, you can be made obsolete. Excuse me for being a Debbie Downer.

  13. Could ChatGPT be enabled to debug its own programming? Let's start by answering this question: why is it always considered necessary that a human never test his or her own programming?

  14. That idea about Moloch is interesting, and a research pause by all commercial entities sounds nice, but is again idealistic. No one will ever convince all the militaries in all the countries of the world to hold such a truce. That's not a current human ability.

  15. One thing Max does not seem to understand is that for most of us an AI revolution (even if bloody and horrible) is the only chance we see of shaking up the current system and having any chance at all of escaping wage slavery in our lifetimes. We have FAR less to lose than Max, who is a wealthy Ivy League professor. I say let it all burn. Much of the concern over AI is that people won't have to work jobs they hate anymore and the owners of corporations will have to distribute the wealth created by machines.

  16. Strange to continue to hold 'intelligence' in such high regard, to even go so far as to suggest a new taxonomic system ("life 2.0", etc.) in one breath, and then immediately to acknowledge that the result of exercising this 'intelligence' is very likely to be our own destruction (and worse). 'Intelligence' can mean a lot of different things. If your definition of intelligence is 'information processing', then I can see how destroying your own ecology, poisoning your own environment, and rushing headlong toward this 'scenic cliff' could still be construed as 'intelligent'. But if so, maybe we ought to quit being so impressed with 'life 2.0', i.e. with our own 'achievements', which really amount, objectively, to nothing but an existential threat to ourselves and to our fellow organisms. Maybe that kind of 'intelligence' is really just a type-modification of the very same process that resulted in it: selection. It is therefore nothing very special, and is actually highly likely to result in an 'evolutionary' dead end as the 'genomic' structure of this new arm of selection continues to adapt itself into a corner. Maybe another kind of intelligence, more holistic and more self-aware than the former (generally called wisdom), is what is needed, at the very least in its natural form, before we unleash more of this incredibly stupid and destructive kind of 'intelligence' on the world in its artificial form.

  17. Optimism can be a dangerous thing. Sure, pure pessimism is not productive, but always saying "oh, it won't be so bad" is what led to 30 years of sleeping on climate change.
    And the youth know that; that's why they are skeptical when it comes to optimism. Optimism is often used to not acknowledge how big a problem is.
