
13.11.2024

Turing Test Ep. 1: Talking AI ethics with Irene Solaiman


In the podcast's inaugural episode, Juha-Matti sits down with Irene Solaiman, the Head of Global Policy at Hugging Face. Hugging Face, often likened to "the GitHub of AI," is a leading platform for collaboration in the machine learning community, hosting over a million models, hundreds of thousands of datasets, and an ever-growing collection of AI-driven applications.

Irene is without doubt a prominent advocate for ethical AI, but her entry into the field was not conventional. With a background in international relations and human rights, she initially pursued government work focused on addressing human rights violations. However, the emotional toll of the field led her to explore new ways to make an impact.

"Turns out reading a lot of human rights violations was not good for my mental health," Irene admits candidly. "Not that Silicon Valley is known for shining mental health either, but I found that being able to code allowed me to approach these challenges differently—enhancing rights through empowerment, rather than purely reacting to violations."

Her career pivot brought her to OpenAI, where she worked on early GPT systems, building technical expertise to complement her advocacy skills. “At OpenAI, I realized how critical it is to understand the technical side of AI to enact meaningful policies,” she explains. “You can’t build robust ethical frameworks or safety measures without knowing what the systems can—and can’t—do.”

This technical foundation paved the way for her role at Hugging Face, where she built the company’s policy framework from scratch. Her work focuses on bridging research, engineering, and governance, ensuring that Hugging Face fosters collaboration and inclusivity while upholding ethical standards.

Hugging Face: A platform for open collaboration

Hugging Face's scale—over a million models and hundreds of thousands of datasets—is what earns it the "GitHub of AI" nickname. But Irene sees the platform as more than a technical repository. "It's the leading infrastructure for being able to build with AI," she says.

Hugging Face’s mission revolves around creating a community-oriented space where developers, researchers, and policymakers can collaborate openly. This openness fosters innovation and accountability, addressing one of AI’s most pressing challenges: ensuring that its benefits are widely distributed.


“We’re facilitating more than technical solutions; we’re building a platform for cultural and ethical exchange,” Irene highlights. She points to Hugging Face’s emphasis on accessibility, particularly for underrepresented groups. “By lowering barriers to entry, we’re helping AI become something that works for everyone, not just a privileged few.”

Navigating ethics in diverse AI systems

One of the key challenges Irene addresses is the difference between narrow AI and generative AI systems. Narrow systems, such as models for predicting housing valuations, have clear benchmarks, making fairness evaluations relatively straightforward.

“For narrow systems, you can ask specific questions: Is this model treating all demographic groups fairly? Are there biases in how it evaluates properties in different neighborhoods?” Irene says.

Generative AI, however, operates in a much less structured environment, producing open-ended outputs without predefined applications. This lack of context complicates ethical evaluations.

“With generative systems, you’re often dealing with infinite possibilities,” Irene explains. “The lack of a frame of reference makes fairness evaluations more abstract. It’s much harder to determine what’s ‘fair’ when the outputs are so varied and depend heavily on their use case.”

She emphasizes the importance of safety by design, embedding ethical considerations into systems from their inception. “Are we generating harmful content? To whom is this content harmful? And how do these harms vary across cultures?” Irene asks. These are the kinds of questions developers must address to build systems that align with societal values.

The global implications of AI regulation

The discussion around AI ethics is deeply tied to international policy. Irene notes key differences between regions like the EU and the US, each of which brings valuable perspectives to the table.

“The EU has emphasized trustworthy AI through legislation like GDPR and the upcoming AI Act,” Irene explains. “Meanwhile, the US has taken a more innovation-forward approach, prioritizing collaboration with private companies.”

However, she warns that fragmented regulations, particularly in the US, can create challenges for businesses. “We’re seeing states like California lead the way with privacy laws, but the lack of uniformity creates confusion for companies operating across jurisdictions,” she says.

Global collaboration is essential for addressing cross-border challenges like bias and misinformation. Irene highlights efforts such as the partnership between the US and UK on AI safety and the OECD’s development of global AI principles. “AI systems don’t stop at borders,” she says. “To address global challenges, we need frameworks that facilitate dialogue between countries.”

On the shared responsibility in developing ethical AI

Ethical AI development requires a collective effort from developers, platform providers, users, and policymakers. Irene underscores that no single group can bear the responsibility alone.

“Every stakeholder, including users, has a role to play in ensuring AI systems are deployed and used responsibly,” she explains. However, she stresses the importance of equipping stakeholders with the right tools and education to navigate ethical challenges.

For instance, Irene critiques the widespread practice of scraping internet data to train models. “Simply scraping whatever data is available isn’t just lazy—it’s potentially dangerous,” she says. Instead, she advocates for curated datasets that are tailored to specific applications and comply with relevant laws.

At the user level, Irene points to the issue of non-consensual content as a growing ethical concern. “Unfortunately, some users exploit these systems in ways that harm others,” she says. “It’s essential to hold users accountable, especially when there’s clear intent to cause harm.”

What's the future of AI?

Despite the challenges, Irene remains optimistic about AI’s potential to align with societal values. She highlights the shift toward smaller, localized models and increased multilingual accessibility as promising trends.

“When we lower the cost of generating text or understanding speech in non-Latin languages, we’re not just innovating—we’re opening up AI for millions of people,” she says.

Reflecting on the broader implications, Irene believes that ethical AI is not just about avoiding harm—it’s about creating systems that empower and uplift. “AI isn’t just about technology—it’s about humanity,” she concludes. “By collaborating across industries and perspectives, we can ensure this powerful tool is used to uplift and empower.”



Stay tuned!

Stay tuned for future episodes of the Turing Test Podcast as we continue to explore how AI is transforming the way we live, work, and build a better future.

