Rephonic
Artwork for AXRP - the AI X-risk Research Podcast

AXRP - the AI X-risk Research Podcast

Daniel Filan
AI Safety
AI Alignment
Neural Networks
Artificial Intelligence
Machine Learning
AI Rights
AI-Enabled Coups
Singular Learning Theory
Artificial General Intelligence
AI Property Rights
Google DeepMind
MATS Program
AI Safety Research
Artificial Intelligence Risk
Epoch AI
Computational Mechanics
Natural Language Processing
Machine Learning Evaluations
Cybersecurity
Bayesian Statistics

AXRP (pronounced axe-urp) is the AI X-risk Research Podcast where I, Daniel Filan, have conversations with researchers about their papers. We discuss the paper, and hopefully get a sense of why it's been written and how it might reduce the risk of AI causing an existential catastrophe: that is, permanently and drastically curtailing humanity's future potential. You can visit the website and read t... more

Publishes: Monthly · Episodes: 62 · Founded: 5 years ago
Categories
Technology, Science


Latest Episodes

How does game theory work when everyone is a computer program who can read everyone else's source code? This is the problem of 'program equilibria'. In this episode, I talk with Caspar Oesterheld on work he's done on equilibria of programs that simul... more
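The core idea can be illustrated with a toy sketch (my own illustration, not code from the episode or Oesterheld's paper): model each player as a source string whose strategy function receives the opponent's source. A "clique bot" that cooperates exactly when the opponent's code is identical to its own makes mutual cooperation an equilibrium, because any program that deviates gets defected against.

```python
# Toy program equilibrium: each "program" is a source string defining a
# function act(opponent_source) that returns "C" (cooperate) or "D" (defect).
CLIQUE_BOT = '''
def act(opponent_source):
    # Cooperate only with programs whose source is identical to mine.
    return "C" if opponent_source == MY_SOURCE else "D"
'''

DEFECT_BOT = '''
def act(opponent_source):
    return "D"
'''

def run(program: str, opponent_source: str) -> str:
    # Execute the program with its own source available as MY_SOURCE,
    # then call its strategy on the opponent's source code.
    env = {"MY_SOURCE": program}
    exec(program, env)
    return env["act"](opponent_source)

def play(p1: str, p2: str):
    # Each program gets to read the other's source before acting.
    return run(p1, p2), run(p2, p1)
```

Two clique bots cooperate with each other, while a defector facing a clique bot ends up in mutual defection, which is what makes the cooperative profile stable.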

In this episode, Guive Assadi argues that we should give AIs property rights, so that they are integrated in our system of property and come to rely on it. The claim is that this means that AIs would not kill or steal from humans, because that would ... more

When METR says something like "Claude Opus 4.5 has a 50% time horizon of 4 hours and 50 minutes", what does that mean? In this episode David Rein, METR researcher and co-author of the paper "Measuring AI ability to complete long tasks", talks about M... more
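As a rough sketch of how such a number can be read off (an illustration with made-up data and an assumed fitting procedure, not METR's actual code): fit a logistic curve of task success probability against the log of task length, then solve for the length at which predicted success is 50%.

```python
import math

# Hedged sketch: fit p(success) = sigmoid(a + b * log2(duration)) by plain
# gradient descent, then invert the curve at p = 0.5. Data is hypothetical.

def fit_logistic(durations_min, successes, lr=0.1, steps=5000):
    """Fit a logistic curve of success vs log2(task duration in minutes)."""
    a, b = 0.0, 0.0
    xs = [math.log2(d) for d in durations_min]
    n = len(xs)
    for _ in range(steps):
        ga = gb = 0.0
        for x, y in zip(xs, successes):
            p = 1.0 / (1.0 + math.exp(-(a + b * x)))
            ga += (p - y) / n
            gb += (p - y) * x / n
        a -= lr * ga
        b -= lr * gb
    return a, b

def time_horizon_minutes(a, b, p=0.5):
    # Invert sigmoid(a + b * x) = p  =>  x = (logit(p) - a) / b
    x = (math.log(p / (1.0 - p)) - a) / b
    return 2.0 ** x

# Hypothetical agent results: short tasks mostly succeed, long ones mostly fail.
durations = [1, 2, 4, 8, 16, 32, 64, 128]   # human completion times, minutes
success = [1, 1, 1, 1, 0, 1, 0, 0]          # did the agent complete the task?
a, b = fit_logistic(durations, success)
horizon = time_horizon_minutes(a, b)        # length where success crosses 50%
```

The fitted slope is negative (longer tasks fail more often), and the horizon lands between the longest mostly-successful and shortest mostly-failed task lengths.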

Could AI enable a small group to gain power over a large country, and lock in their power permanently? Often, people worried about catastrophic risks from AI have been concerned with misalignment risks. In this episode, Tom Davidson talks about a ris... more

Key Facts

Accepts Guests
Accepts Sponsors
Contact Information
Podcast Host

Similar Podcasts

People also subscribe to these shows.

80,000 Hours Podcast (The 80,000 Hours team)
Redwood Research Blog (Redwood Research)
Dwarkesh Podcast (Dwarkesh Patel)
Works in Progress Podcast (Works in Progress)

Recent Guests

David Rein
Researcher at METR, focusing on AI agent capability evaluation.
METR
Episode: 47 - David Rein on METR Time Horizons
Tom Davidson
Senior Research Fellow at the Forethought Institute for AI Strategy
Forethought Institute
Episode: 46 - Tom Davidson on AI-enabled Coups
Peter Salib
Law professor at the University of Houston and co-director of the Center for Law and AI Risk.
University of Houston
Episode: 44 - Peter Salib on AI Rights for Human Safety
Jason Gross
Researcher interested in mechanistic interpretability and software verification
Episode: 40 - Jason Gross on Compact Proofs and Interpretability
Adrià Garriga-Alonso
Machine learning researcher focused on AI safety, currently working at FAR AI
FAR AI
Episode: 38.5 - Adrià Garriga-Alonso on Detecting AI Scheming
Shakeel Hashim
Grants Director and Journalist in Residence at Tarbell, focusing on AI journalism.
Tarbell
Episode: 38.4 - Shakeel Hashim on AI Journalism
Evan Hubinger
Research scientist at Anthropic, leading the Alignment Stress Testing Team
Anthropic
Episode: 39 - Evan Hubinger on Model Organisms of Misalignment
Jesse Hoogland
Executive Director of Timaeus, a research organization working on applications of Singular Learning Theory
Timaeus
Episode: 38.2 - Jesse Hoogland on Singular Learning Theory
Alan Chan
Research fellow at the Center for the Governance of AI and PhD student at Mila in Quebec
Center for the Governance of AI
Episode: 38.1 - Alan Chan on Agent Infrastructure

Host

Daniel Filan
Host of AXRP - the AI X-risk Research Podcast

Reviews

4.8 out of 5 stars from 43 ratings
  • Fantastic Introduction to the technical side of ai risk research

    It’s early days, but if this keeps up, I think it’s safe to say that this is my new favourite podcast. I’ve been interested in ai risk for quite a while, but only started getting into machine learning from a hands on technical perspective in the past few months. This podcast is proving to be a great way for me to broaden my understanding, and get an idea what the current research field looks like in practice.

    Right now, I just barely know what a convolutional network design looks like, but I st... more

    Apple Podcasts · 5 stars · Jichah · Germany · 5 years ago

Listeners Say

Key themes from listener reviews, highlighting what works and what could be improved about the show.

Engaging host with a knack for linking theory to real-world implications
High-quality guests who dive into papers rather than headlines
Clear, accessible explanations of technical AI risk topics
Thoughtful, rigorous discussions that stay grounded in evidence

Chart Rankings

How this podcast ranks in the Apple Podcasts, Spotify and YouTube charts.

Apple Podcasts
#87
Colombia/Technology
Apple Podcasts
#139
India/Technology

Talking Points

Recent interactions between the hosts and their guests.

48 - Guive Assadi on AI Property Rights
Q: What regime are we talking about here?
The specific rights I think they should have are the right to earn wages, not be forced to do tasks, and the right to hold property like a human does.
48 - Guive Assadi on AI Property Rights
Q: Can you give us just a quick overview of what this post is arguing?
It's arguing that giving AIs property rights could mitigate the risk of violent robot revolution by encouraging them to value property security and align with human interests.
45 - Samuel Albanie on DeepMind's AGI Safety Approach
Q: So, what is the second assumption?
The second one is that there is no human ceiling, meaning AIs can be smarter, more capable than humans.
45 - Samuel Albanie on DeepMind's AGI Safety Approach
Q: Can you tell us just like, what is this paper about?
The goal of this paper is to lay out a technical research agenda for addressing some of the severe risks that we think might be posed by AGI.
43 - David Lindner on Myopic Optimization with Non-myopic Approval
Q: Why does this prevent bad behavior by AIs?
It encourages them to pursue strategies that humans can understand, thus reducing instances of multi-step reward hacking.

Audience Metrics

Listeners, social reach, demographics and more for this podcast.

Gender Skew
Location
Interests
Professions
Age Range
Household Income
Social Media Reach

Frequently Asked Questions About AXRP - the AI X-risk Research Podcast

What is AXRP - the AI X-risk Research Podcast about and what kind of topics does it cover?

The show explores research on AI existential risk through conversations about cutting-edge papers and theoretical frameworks. Episodes often center on topics like AI safety, game theory, robustness of AI systems, and governance, with guests presenting complex ideas and real-world implications in an accessible way. A notable strength is the ability to translate technical papers into practical, thought-provoking discussions that illuminate how research could reduce long-term AI harm, alongside lively examples and nuanced debates about strategy and cooperation in intelligent systems. This mix makes it valuable for researchers, students, and policy-minded listeners who want a rigorous, forward-looking view of AI risk and safety.

Unique aspects... more

Where can I find podcast stats for AXRP - the AI X-risk Research Podcast?

Rephonic provides a wide range of podcast stats for AXRP - the AI X-risk Research Podcast. We scanned the web and collated all of the information that we could find in our comprehensive podcast database. See how many people listen to AXRP - the AI X-risk Research Podcast and access YouTube viewership numbers, download stats, audience demographics, chart rankings, ratings, reviews and more.

How many listeners does AXRP - the AI X-risk Research Podcast get?

Rephonic provides a full set of podcast information for three million podcasts, including the number of listeners. View further listenership figures for AXRP - the AI X-risk Research Podcast, including podcast download numbers and subscriber numbers, so you can make better decisions about which podcasts to sponsor or be a guest on. You will need to upgrade your account to access this premium data.

What are the audience demographics for AXRP - the AI X-risk Research Podcast?

Rephonic provides comprehensive predictive audience data for AXRP - the AI X-risk Research Podcast, including gender skew, age, country, political leaning, income, professions, education level, and interests. You can access these listener demographics by upgrading your account.

How many subscribers and views does AXRP - the AI X-risk Research Podcast have?

To see how many followers or subscribers AXRP - the AI X-risk Research Podcast has on Spotify and other platforms such as Castbox and Podcast Addict, simply upgrade your account. You'll also find viewership figures for their YouTube channel if they have one.

Which podcasts are similar to AXRP - the AI X-risk Research Podcast?

These podcasts share a similar audience with AXRP - the AI X-risk Research Podcast:

1. 80,000 Hours Podcast
2. Redwood Research Blog
3. Dwarkesh Podcast
4. Clearer Thinking with Spencer Greenberg
5. Works in Progress Podcast

How many episodes of AXRP - the AI X-risk Research Podcast are there?

AXRP - the AI X-risk Research Podcast launched 5 years ago and has published 62 episodes to date. You can find more information about this podcast, including rankings, audience demographics and engagement, in our podcast database.

How do I contact AXRP - the AI X-risk Research Podcast?

Our systems regularly scour the web to find email addresses and social media links for this podcast. We scanned the web and collated all of the contact information that we could find in our podcast database. But in the unlikely event that you can't find what you're looking for, our concierge service lets you request our research team to source better contacts for you.

Where can I see ratings and reviews for AXRP - the AI X-risk Research Podcast?

Rephonic pulls ratings and reviews for AXRP - the AI X-risk Research Podcast from multiple sources, including Spotify, Apple Podcasts, Castbox, and Podcast Addict.

View all the reviews in one place instead of visiting each platform individually and use this information to decide if a show is worth pitching or not.

How do I access podcast episode transcripts for AXRP - the AI X-risk Research Podcast?

Rephonic provides full transcripts for episodes of AXRP - the AI X-risk Research Podcast. Search within each transcript for your keywords, whether they be topics, brands or people, and figure out if it's worth pitching as a guest or sponsor. You can even set-up alerts to get notified when your keywords are mentioned.

What guests have appeared on AXRP - the AI X-risk Research Podcast?

Recent guests on AXRP - the AI X-risk Research Podcast include:

1. David Rein
2. Tom Davidson
3. Peter Salib
4. Jason Gross
5. Adrià Garriga-Alonso
6. Shakeel Hashim
7. Evan Hubinger
8. Jesse Hoogland

To view more recent guests and their details, simply upgrade your Rephonic account. You'll also get access to a typical guest profile to help you decide if the show is worth pitching.

Find and pitch the right podcasts

We help savvy brands, marketers and PR professionals to find the right podcasts for any topic or niche. Get the data and contacts you need to pitch podcasts at scale and turn listeners into customers.