
AXRP - the AI X-risk Research Podcast

Daniel Filan
AI Alignment
AI Safety
Neural Networks
AI Safety Research
MATS Program
Bayesian Statistics
Machine Learning Evaluations
Natural Language Processing
Computational Mechanics
Epoch AI
AI Governance
Patreon
Hidden Markov Models
Bayesian Inference
AI Training Models
Multigram Circuit
Learning Coefficient
Attention Heads
Cybersecurity
Long-Term Future Fund

AXRP (pronounced axe-urp) is the AI X-risk Research Podcast where I, Daniel Filan, have conversations with researchers about their papers. We discuss the paper, and hopefully get a sense of why it's been written and how it might reduce the risk of AI causing an existential catastrophe: that is, permanently and drastically curtailing humanity's future potential. You can visit the website and read t…

Publishes: Twice monthly
Episodes: 59
Founded: 5 years ago
Categories
Science, Technology



Latest Episodes

Could AI enable a small group to gain power over a large country, and lock in their power permanently? Often, people worried about catastrophic risks from AI have been concerned with misalignment risks. In this episode, Tom Davidson talks about a ris…

In this episode, I chat with Samuel Albanie about the Google DeepMind paper he co-authored called "An Approach to Technical AGI Safety and Security". It covers the assumptions made by the approach, as well as the types of mitigations it outlines.

In this episode, I talk with Peter Salib about his paper "AI Rights for Human Safety", arguing that giving AIs the right to contract, hold property, and sue people will reduce the risk of their trying to attack humanity and take over. He also tells m…

In this episode, I talk with David Lindner about Myopic Optimization with Non-myopic Approval, or MONA, which attempts to address (multi-step) reward hacking by myopically optimizing actions against a human's sense of whether those actions are genera…

Key Facts

Accepts Guests
Accepts Sponsors


Recent Guests

Tom Davidson
Senior Research Fellow at the Forethought Institute for AI Strategy
Forethought Institute
Episode: 46 - Tom Davidson on AI-enabled Coups
Peter Salib
Law professor at the University of Houston and co-director of the Center for Law and AI Risk.
University of Houston
Episode: 44 - Peter Salib on AI Rights for Human Safety
Jason Gross
Researcher interested in mechanistic interpretability and software verification
Episode: 40 - Jason Gross on Compact Proofs and Interpretability
Adrià Garriga-Alonso
Machine learning researcher focused on AI safety, currently working at FAR AI
FAR AI
Episode: 38.5 - Adrià Garriga-Alonso on Detecting AI Scheming
Shakeel Hashim
Grants Director and Journalist in Residence at Tarbell, focusing on AI journalism.
Tarbell
Episode: 38.4 - Shakeel Hashim on AI Journalism
Evan Hubinger
Research scientist at Anthropic, leading the Alignment Stress Testing Team
Anthropic
Episode: 39 - Evan Hubinger on Model Organisms of Misalignment
Jesse Hoogland
Executive Director of Timaeus, a research organization working on Singular Learning Theory applications
Timaeus
Episode: 38.2 - Jesse Hoogland on Singular Learning Theory
Alan Chan
Research fellow at the Center for the Governance of AI and PhD student at Mila in Quebec
Center for the Governance of AI
Episode: 38.1 - Alan Chan on Agent Infrastructure
Zhijing Jin
Assistant Professor at the University of Toronto, specializing in natural language processing and causal inference.
University of Toronto
Episode: 38.0 - Zhijing Jin on LLMs, Causality, and Multi-Agent Systems

Host

Daniel Filan
Host of AXRP - the AI X-risk Research Podcast, where he interviews researchers about their work on artificial intelligence and its existential risks.

Reviews

4.8 out of 5 stars from 39 ratings
  • Fantastic Introduction to the technical side of ai risk research

    It’s early days, but if this keeps up, I think it’s safe to say that this is my new favourite podcast. I’ve been interested in ai risk for quite a while, but only started getting into machine learning from a hands on technical perspective in the past few months. This podcast is proving to be a great way for me to broaden my understanding, and get an idea what the current research field looks like in practice.

    Right now, I just barely know what a convolutional network design looks like, but I st…

    Apple Podcasts · 5 stars
    Jichah · Germany · 5 years ago

Listeners Say

Key themes from listener reviews, highlighting what works and what could be improved about the show.

Listeners appreciate the clarity with which complex technical concepts are explained, which makes the content engaging and fun. Many find the show a valuable way to broaden their understanding of AI and its risks, and praise the educational value of the discussions. Positive reviews also highlight the host's ability to weave humor into technical conversations, keeping the content accessible and entertaining.

Chart Rankings

How this podcast ranks in the Apple Podcasts, Spotify and YouTube charts.

Apple Podcasts: #114 in Spain/Technology
Apple Podcasts: #186 in India/Technology

Talking Points

Recent interactions between the hosts and their guests.

45 - Samuel Albanie on DeepMind's AGI Safety Approach
Q: So, what is the second assumption?
The second one is that there is no human ceiling, meaning AIs can be smarter and more capable than humans.
45 - Samuel Albanie on DeepMind's AGI Safety Approach
Q: Can you tell us just like, what is this paper about?
The goal of this paper is to lay out a technical research agenda for addressing some of the severe risks that we think might be posed by AGI.
43 - David Lindner on Myopic Optimization with Non-myopic Approval
Q: Why does this prevent bad behavior by AIs?
It encourages them to pursue strategies that humans can understand, thus reducing instances of multi-step reward hacking.
43 - David Lindner on Myopic Optimization with Non-myopic Approval
Q: What does the idea of your paper address?
It's about preventing bad behavior in AI systems, particularly when humans might not be able to detect those issues.
38.4 - Shakeel Hashim on AI Journalism
Q: What are you trying to do with your Substack, Transformer?
The goal is to provide a comprehensive summary of AI news to help people stay informed and draw attention to important issues that aren't getting enough coverage.

Audience Metrics

Listeners, social reach, demographics and more for this podcast.

Gender Skew
Location
Interests
Professions
Age Range
Household Income
Social Media Reach

Frequently Asked Questions About AXRP - the AI X-risk Research Podcast

What is AXRP - the AI X-risk Research Podcast about and what kind of topics does it cover?

Focused on the intersection of artificial intelligence and existential risk, the content examines the profound implications of AI on human society. The discussions often center around research papers, delving into how AI could potentially be misaligned with human values and the associated dangers. Typical topics include AI safety, the future of general intelligence, and the ethical considerations surrounding AI rights, providing listeners with a comprehensive understanding of the risks involved. A unique aspect of the content is its accessibility; complex subjects are explained in a way that both laypersons and experts can engage with, fostering a learning environment that encourages curiosity about the technical and societal facets of AI.

Where can I find podcast stats for AXRP - the AI X-risk Research Podcast?

Rephonic provides a wide range of podcast stats for AXRP - the AI X-risk Research Podcast. We scanned the web and collated all of the information that we could find in our comprehensive podcast database. See how many people listen to AXRP - the AI X-risk Research Podcast and access YouTube viewership numbers, download stats, audience demographics, chart rankings, ratings, reviews and more.

How many listeners does AXRP - the AI X-risk Research Podcast get?

Rephonic provides a full set of podcast information for three million podcasts, including the number of listeners. View further listenership figures for AXRP - the AI X-risk Research Podcast, including podcast download numbers and subscriber numbers, so you can make better decisions about which podcasts to sponsor or be a guest on. You will need to upgrade your account to access this premium data.

What are the audience demographics for AXRP - the AI X-risk Research Podcast?

Rephonic provides comprehensive predictive audience data for AXRP - the AI X-risk Research Podcast, including gender skew, age, country, political leaning, income, professions, education level, and interests. You can access these listener demographics by upgrading your account.

How many subscribers and views does AXRP - the AI X-risk Research Podcast have?

To see how many followers or subscribers AXRP - the AI X-risk Research Podcast has on Spotify and other platforms such as Castbox and Podcast Addict, simply upgrade your account. You'll also find viewership figures for their YouTube channel if they have one.

Which podcasts are similar to AXRP - the AI X-risk Research Podcast?

These podcasts share a similar audience with AXRP - the AI X-risk Research Podcast:

1. Future of Life Institute Podcast
2. Dwarkesh Podcast
3. Complex Systems with Patrick McKenzie (patio11)
4. The a16z Show
5. All-In with Chamath, Jason, Sacks & Friedberg

How many episodes of AXRP - the AI X-risk Research Podcast are there?

AXRP - the AI X-risk Research Podcast launched 5 years ago and has published 59 episodes to date. You can find more information about this podcast, including rankings, audience demographics and engagement, in our podcast database.

How do I contact AXRP - the AI X-risk Research Podcast?

Our systems regularly scour the web to find email addresses and social media links for this podcast, and we collate all of the contact information we find in our podcast database. In the unlikely event that you can't find what you're looking for, our concierge service lets you ask our research team to source better contacts for you.

Where can I see ratings and reviews for AXRP - the AI X-risk Research Podcast?

Rephonic pulls ratings and reviews for AXRP - the AI X-risk Research Podcast from multiple sources, including Spotify, Apple Podcasts, Castbox, and Podcast Addict.

View all the reviews in one place instead of visiting each platform individually and use this information to decide if a show is worth pitching or not.

How do I access podcast episode transcripts for AXRP - the AI X-risk Research Podcast?

Rephonic provides full transcripts for episodes of AXRP - the AI X-risk Research Podcast. Search within each transcript for your keywords, whether they be topics, brands or people, and figure out if it's worth pitching as a guest or sponsor. You can even set up alerts to get notified when your keywords are mentioned.

What guests have appeared on AXRP - the AI X-risk Research Podcast?

Recent guests on AXRP - the AI X-risk Research Podcast include:

1. Tom Davidson
2. Peter Salib
3. Jason Gross
4. Adrià Garriga-Alonso
5. Shakeel Hashim
6. Evan Hubinger
7. Jesse Hoogland
8. Alan Chan

To view more recent guests and their details, simply upgrade your Rephonic account. You'll also get access to a typical guest profile to help you decide if the show is worth pitching.
