-
Eliezer Yudkowsky: Dangers of AI and the End of Human Civilization | Lex Fridman Podcast #368
published: 30 Mar 2023
-
Eliezer Yudkowsky - Human Augmentation as a Safer AGI Pathway [AGI Governance, Episode 6]
published: 24 Jan 2025
-
YUDKOWSKY + WOLFRAM ON AI RISK.
published: 11 Nov 2024
-
Eliezer Meets Rebekah
published: 03 Aug 2015
-
Porque viajar não me deixa mais feliz
published: 13 Sep 2024
-
‘The probability that we die is - yes.’ Eliezer Yudkowsky on the dangers of AI #ai #tech #agi
published: 15 Jul 2023
-
Dependente - Eliezer de Tarsis e Marcelo Markes (Ao Vivo)
published: 10 Nov 2023
-
Eliezer Yudkowsky - Why AI Will Kill Us, Aligning LLMs, Nature of Intelligence, SciFi, & Rationality
published: 06 Apr 2023
-
(PROROK JONA) Legenda Eliezer Papo
published: 29 Jan 2025
-
Eliezer Yudkowsky on if Humanity can Survive AI
published: 06 May 2023
3:17:51
Eliezer Yudkowsky: Dangers of AI and the End of Human Civilization | Lex Fridman Podcast #368
Eliezer Yudkowsky is a researcher, writer, and philosopher on the topic of superintelligent AI. Please support this podcast by checking out our sponsors:
- Linode: https://linode.com/lex to get $100 free credit
- House of Macadamias: https://houseofmacadamias.com/lex and use code LEX to get 20% off your first order
- InsideTracker: https://insidetracker.com/lex to get 20% off
EPISODE LINKS:
Eliezer's Twitter: https://twitter.com/ESYudkowsky
LessWrong Blog: https://lesswrong.com
Eliezer's Blog page: https://www.lesswrong.com/users/eliezer_yudkowsky
Books and resources mentioned:
1. AGI Ruin (blog post): https://lesswrong.com/posts/uMQ3cqWDPHhjtiesc/agi-ruin-a-list-of-lethalities
2. Adaptation and Natural Selection: https://amzn.to/40F5gfa
PODCAST INFO:
Podcast website: https://lexfridman.com/podcast
Apple Podcasts: https://apple.co/2lwqZIr
Spotify: https://spoti.fi/2nEwCF8
RSS: https://lexfridman.com/feed/podcast/
Full episodes playlist: https://www.youtube.com/playlist?list=PLrAXtmErZgOdP_8GztsuKi9nrraNbKKp4
Clips playlist: https://www.youtube.com/playlist?list=PLrAXtmErZgOeciFP3CBCIEElOJeitOr41
OUTLINE:
0:00 - Introduction
0:43 - GPT-4
23:23 - Open sourcing GPT-4
39:41 - Defining AGI
47:38 - AGI alignment
1:30:30 - How AGI may kill us
2:22:51 - Superintelligence
2:30:03 - Evolution
2:36:33 - Consciousness
2:47:04 - Aliens
2:52:35 - AGI Timeline
3:00:35 - Ego
3:06:27 - Advice for young people
3:11:45 - Mortality
3:13:26 - Love
SOCIAL:
- Twitter: https://twitter.com/lexfridman
- LinkedIn: https://www.linkedin.com/in/lexfridman
- Facebook: https://www.facebook.com/lexfridman
- Instagram: https://www.instagram.com/lexfridman
- Medium: https://medium.com/@lexfridman
- Reddit: https://reddit.com/r/lexfridman
- Support on Patreon: https://www.patreon.com/lexfridman
https://wn.com/Eliezer_Yudkowsky_Dangers_Of_Ai_And_The_End_Of_Human_Civilization_|_Lex_Fridman_Podcast_368
- published: 30 Mar 2023
- views: 2095128
1:16:26
Eliezer Yudkowsky - Human Augmentation as a Safer AGI Pathway [AGI Governance, Episode 6]
The sixth and final episode of our series AGI Governance on The Trajectory is with Eliezer Yudkowsky, famed AI safety thinker and co-founder of the Machine Intelligence Research Institute.
In this episode, we explore two things I've never seen Yudkowsky speak on at length before: (a) his specific recommendations and draft ideas for international governance, and (b) his nuanced vision of an ideal future where AGI is eventually harnessed to serve very specific human values.
Listen to this episode on The Trajectory Podcast: https://www.buzzsprout.com/2308422/episodes/16491788
See the full article from this episode: https://danfaggella.com/yudkowsky1
...
The four main questions we cover in this AGI Governance series are:
1. How important is AGI governance now on a 1-10 scale?
2. What should AGI governance attempt to do?
3. What might AGI governance look like in practice?
4. What should innovators and regulators do now?
If this sounds like it's up your alley, then be sure to stick around and connect:
-- Blog: https://danfaggella.com/trajectory
-- X: https://x.com/danfaggella
-- LinkedIn: https://linkedin.com/in/danfaggella
-- Newsletter: https://bit.ly/TrajectoryTw
-- Podcast: https://podcasts.apple.com/us/podcast/the-trajectory/id1739255954
https://wn.com/Eliezer_Yudkowsky_Human_Augmentation_As_A_Safer_Agi_Pathway_Agi_Governance,_Episode_6
- published: 24 Jan 2025
- views: 9318
4:17:08
YUDKOWSKY + WOLFRAM ON AI RISK.
Eliezer Yudkowsky and Stephen Wolfram discuss artificial intelligence and its potential existential risks, traversing fundamental questions about AI safety, consciousness, computational irreducibility, and the nature of intelligence.
The discourse centers on Yudkowsky's argument that advanced AI systems pose an existential threat to humanity, primarily due to the challenge of alignment and the potential for emergent goals that diverge from human values. Wolfram, while acknowledging potential risks, approaches the topic from his signature measured perspective, emphasizing the importance of understanding computational systems' fundamental nature and questioning whether AI systems would necessarily develop the kind of goal-directed behavior Yudkowsky fears.
SHOWNOTES (transcription, references, summary, best quotes etc):
https://www.dropbox.com/scl/fi/3st8dts2ba7yob161dchd/EliezerWolfram.pdf?rlkey=b6va5j8upgqwl9s2muc924vtt&st=vemwqx7a&dl=0
***
MLST IS SPONSORED BY TUFA AI LABS!
MindsAI, the current winners of the ARC challenge, are part of Tufa AI Labs. They are hiring ML engineers. Interested? Please go to https://tufalabs.ai/
***
https://en.wikipedia.org/wiki/Eliezer_Yudkowsky
https://en.wikipedia.org/wiki/Stephen_Wolfram
TOC:
1. Foundational AI Concepts and Risks
[00:00:00] 1.1 AI Optimization and System Capabilities Debate
[00:06:46] 1.2 Computational Irreducibility and Intelligence Limitations
[00:20:09] 1.3 Existential Risk and Species Succession
[00:23:28] 1.4 Consciousness and Value Preservation in AI Systems
2. Ethics and Philosophy in AI
[00:33:24] 2.1 Moral Value of Human Consciousness vs. Computation
[00:36:30] 2.2 Ethics and Moral Philosophy Debate
[00:39:58] 2.3 Existential Risks and Digital Immortality
[00:43:30] 2.4 Consciousness and Personal Identity in Brain Emulation
3. Truth and Logic in AI Systems
[00:54:39] 3.1 AI Persuasion Ethics and Truth
[01:01:48] 3.2 Mathematical Truth and Logic in AI Systems
[01:11:29] 3.3 Universal Truth vs Personal Interpretation in Ethics and Mathematics
[01:14:43] 3.4 Quantum Mechanics and Fundamental Reality Debate
4. AI Capabilities and Constraints
[01:21:21] 4.1 AI Perception and Physical Laws
[01:28:33] 4.2 AI Capabilities and Computational Constraints
[01:34:59] 4.3 AI Motivation and Anthropomorphization Debate
[01:38:09] 4.4 Prediction vs Agency in AI Systems
5. AI System Architecture and Behavior
[01:44:47] 5.1 Computational Irreducibility and Probabilistic Prediction
[01:48:10] 5.2 Teleological vs Mechanistic Explanations of AI Behavior
[02:09:41] 5.3 Machine Learning as Assembly of Computational Components
[02:29:52] 5.4 AI Safety and Predictability in Complex Systems
6. Goal Optimization and Alignment
[02:50:30] 6.1 Goal Specification and Optimization Challenges in AI Systems
[02:58:31] 6.2 Intelligence, Computation, and Goal-Directed Behavior
[03:02:18] 6.3 Optimization Goals and Human Existential Risk
[03:08:49] 6.4 Emergent Goals and AI Alignment Challenges
7. AI Evolution and Risk Assessment
[03:19:44] 7.1 Inner Optimization and Mesa-Optimization Theory
[03:34:00] 7.2 Dynamic AI Goals and Extinction Risk Debate
[03:56:05] 7.3 AI Risk and Biological System Analogies
[04:09:37] 7.4 Expert Risk Assessments and Optimism vs Reality
8. Future Implications and Economics
[04:13:01] 8.1 Economic and Proliferation Considerations
https://wn.com/Yudkowsky_Wolfram_On_Ai_Risk.
- published: 11 Nov 2024
- views: 97029
2:07
Eliezer Meets Rebekah
Eliezer prays to God and then meets Rebekah at the well.
https://wn.com/Eliezer_Meets_Rebekah
- published: 03 Aug 2015
- views: 144218
13:54
Porque viajar não me deixa mais feliz
Hey!
So, if you're interested and can see the potential in creating content, sign up for my newsletter for free:
https://creatorbusiness.com.br/
https://wn.com/Porque_Viajar_Não_Me_Deixa_Mais_Feliz
- published: 13 Sep 2024
- views: 507947
5:27
Dependente - Eliezer de Tarsis e Marcelo Markes (Ao Vivo)
Official music video for the song Dependente - Eliezer + Marcelo Markes (Live)
Follow on Instagram:
@eliezerdetarsis
https://www.instagram.com/eliezerdetarsis/
LYRICS:
Your love brought me here
To understand that I am Yours
In other places I searched
And in Your presence I found my place
I am dependent on Your glory
I was drawn by Your love
I don't want to flee from Your presence
I only want to behold You
▶ CREDITS:
Directed by: Tg Alves
Executive Production: Double Global
Video Production Company: TG ALVES FILM Co.
Director of Photography: Patrick Rodrigues
Production: Valdemir Fer, Drielle Monteiro (MiPro Soluções), Guilherme Frattes (TG ALVES FILM Co.)
1AC: Miller Teixeira
Camera Operators: Amarelo, Fabrício Teixeira, Vinicius Wesley, Joilson Castro, Jonatas Limeira, Pedro Pardal, Abidiel Pereira, Miller Teixeira, Patrick Rodrigues
Lighting Designer: Bruno Andrade
Location Technicians: Luiz Vertullo, Eduardo Santos
Making Of: Leonardo Cirqueira
Still Photography: Caroline Lessa
Editing and Finishing: Tg Alves
Music Production: Maefe
Vocals: Eliezer de Tarsis, Marcelo Markes
Backing Vocals: Juliana Purgatto
Instruments: Johnny Essi and Gabriel Quirino (keyboards), Everson Menezes (electric guitar), Maefe (acoustic guitar), Denis Silva (bass), Raffah de Layah (drums)
Audio Recording: Tiago Vendrasco
Technical Supervisor: Maycon Mendes
Monitoring: Tiago Vendrasco
Editing: Silnei Soares
Mixing and Mastering: Junior Papalia
#Dependente #Eliezer #MarceloMarkes
https://wn.com/Dependente_Eliezer_De_Tarsis_E_Marcelo_Markes_(Ao_Vivo)
- published: 10 Nov 2023
- views: 3203505
4:03:25
Eliezer Yudkowsky - Why AI Will Kill Us, Aligning LLMs, Nature of Intelligence, SciFi, & Rationality
For 4 hours, I tried to come up with reasons why AI might not kill us all, and Eliezer Yudkowsky explained why I was wrong.
We also discuss his call to halt AI, why LLMs make alignment harder, what it would take to save humanity, his millions of words of sci-fi, and much more.
If you want to get to the crux of the conversation, fast forward to 2:35:00 through 3:43:54. Here we go through and debate the main reasons I still think doom is unlikely.
Transcript: https://dwarkeshpatel.com/p/eliezer-yudkowsky
Apple Podcasts: https://apple.co/3mcPjON
Spotify: https://spoti.fi/3KDFzX9
Follow me on Twitter: https://twitter.com/dwarkesh_sp
Timestamps:
(0:00:00) - TIME article
(0:09:06) - Are humans aligned?
(0:37:35) - Large language models
(1:07:15) - Can AIs help with alignment?
(1:30:17) - Society’s response to AI
(1:44:42) - Predictions (or lack thereof)
(1:56:55) - Being Eliezer
(2:13:06) - Orthogonality
(2:35:00) - Could alignment be easier than we think?
(3:02:15) - What will AIs want?
(3:43:54) - Writing fiction & whether rationality helps you win
https://wn.com/Eliezer_Yudkowsky_Why_Ai_Will_Kill_Us,_Aligning_Llms,_Nature_Of_Intelligence,_Scifi,_Rationality
- published: 06 Apr 2023
- views: 147586
46:18
(PROROK JONA) Legenda Eliezer Papo
A great, instructive, and very interesting lecture on these and many other wonderful biblical topics with the well-known legendary soul Eliezer Papo.
Dear friends, thank you in advance for watching and for taking the time.
If you're interested in the other episodes, you can find them in the playlists of this humble channel, along with all the other wonderful content.
Here at the bottom is the link to the well-known, wonderful channel of the good soul and legend Eliezer Papo.
Thank you for your attention.
https://youtube.com/@EliezerPapoSFRJ?si=R5mR7eWLzPGOL-tW
https://wn.com/(Prorok_Jona)_Legenda_Eliezer_Papo
- published: 29 Jan 2025
- views: 118
3:12:41
Eliezer Yudkowsky on if Humanity can Survive AI
Eliezer Yudkowsky is a researcher, writer, and advocate for artificial intelligence safety. He is best known for his writings on rationality, cognitive biases, and the development of superintelligence. Yudkowsky has written extensively on the topic of AI safety and has advocated for the development of AI systems that are aligned with human values and interests. Yudkowsky is a co-founder of the Machine Intelligence Research Institute (MIRI), a non-profit organization dedicated to researching the development of safe and beneficial artificial intelligence. He is also a co-founder of the Center for Applied Rationality (CFAR), a non-profit organization focused on teaching rational thinking skills, a frequent author at LessWrong.com, and the author of Rationality: From AI to Zombies.
In this episode, we discuss Eliezer’s concerns with artificial intelligence and his recent conclusion that it will inevitably lead to our demise. He’s a brilliant mind, an interesting person, and genuinely believes all of the stuff he says. So I wanted to have a conversation with him to hear where he is coming from, how he got there, understand AI better, and hopefully help us bridge the divide between the people that think we’re headed off a cliff and the people that think it’s not a big deal.
(0:00) Intro
(1:18) Welcome Eliezer
(6:27) How would you define artificial intelligence?
(15:50) What is the purpose of a fire alarm?
(19:29) Eliezer’s background
(29:28) The Singularity Institute for Artificial Intelligence
(33:38) Maybe AI doesn’t end up automatically doing the right thing
(45:42) AI Safety Conference
(51:15) Disaster Monkeys
(1:02:15) Fast takeoff
(1:10:29) Loss function
(1:15:48) Protein folding
(1:24:55) The deadly stuff
(1:46:41) Why is it inevitable?
(1:54:27) Can’t we let tech develop AI and then fix the problems?
(2:02:56) What were the big jumps between GPT3 and GPT4?
(2:07:15) “The trajectory of AI is inevitable”
(2:28:05) Elon Musk and OpenAI
(2:37:41) Sam Altman Interview
(2:50:38) The most optimistic path to us surviving
(3:04:46) Why would anything super intelligent pursue ending humanity?
(3:14:08) What role do VCs play in this?
Show Notes:
https://twitter.com/liron/status/1647443778524037121?s=20
https://futureoflife.org/event/ai-safety-conference-in-puerto-rico/
https://www.lesswrong.com/posts/j9Q8bRmwCgXRYAgcJ/miri-announces-new-death-with-dignity-strategy
https://www.youtube.com/watch?v=q9Figerh89g
https://www.vox.com/the-highlight/23447596/artificial-intelligence-agi-openai-gpt3-existential-risk-human-extinction
Eliezer Yudkowsky – AI Alignment: Why It's Hard, and Where to Start
Mixed and edited: Justin Hrabovsky
Produced: Rashad Assir
Executive Producer: Josh Machiz
Music: Griff Lawson
🎙 Listen to the show
Apple Podcasts: https://podcasts.apple.com/us/podcast/three-cartoon-avatars/id1606770839
Spotify: https://open.spotify.com/show/5WqBqDb4br3LlyVrdqOYYb?si=3076e6c1b5c94d63&nd=1
Google Podcasts: https://podcasts.google.com/feed/aHR0cHM6Ly9mZWVkcy5zaW1wbGVjYXN0LmNvbS9zb0hJZkhWbg
🎥 Subscribe on YouTube: https://www.youtube.com/channel/UCugS0jD5IAdoqzjaNYzns7w?sub_confirmation=1
Follow on Socials
📸 Instagram - https://www.instagram.com/theloganbartlettshow
🐦 Twitter - https://twitter.com/loganbartshow
🎬 Clips on TikTok - https://www.tiktok.com/@theloganbartlettshow
About the Show
Logan Bartlett is a Software Investor at Redpoint Ventures - a Silicon Valley-based VC with $6B AUM and investments in Snowflake, DraftKings, Twilio, and Netflix. In each episode, Logan goes behind the scenes with world-class entrepreneurs and investors. If you're interested in the real inside baseball of tech, entrepreneurship, and start-up investing, tune in every Friday for new episodes.
https://wn.com/Eliezer_Yudkowsky_On_If_Humanity_Can_Survive_Ai
- published: 06 May 2023
- views: 269433