AI Trailblazers: Women Shaping the Future

Irene Solaiman, head of global policy at Hugging Face, is featured in a series highlighting women in AI. Her career path from OpenAI to Zillow and Hugging Face reflects her dedication to AI policy and socio-technical research. She emphasizes diversity and community in navigating the male-dominated tech and AI sectors.

Feb 18, 2024 - 13:47
Diving into the world of AI, we spotlight Irene Solaiman, head of global policy at Hugging Face. TechCrunch embarks on a series of interviews, highlighting women who’ve made remarkable contributions to the AI revolution. As the AI boom continues, we shine a light on key work that often flies under the radar.

Irene Solaiman kickstarted her AI journey as a researcher and public policy manager at OpenAI, spearheading a new approach to releasing GPT-2, a precursor to ChatGPT. Following her stint as an AI policy manager at Zillow, she took on the role of head of global policy at Hugging Face. Her duties range from shaping and executing company AI policy globally to conducting socio-technical research.

Solaiman also lends her expertise to the Institute of Electrical and Electronics Engineers (IEEE), advising on AI matters, and is a recognized AI expert at the intergovernmental Organization for Economic Co-operation and Development (OECD).

How did your AI journey begin? What drew you to the field?

AI often sees non-linear career paths. My interest was sparked, like that of many teenagers with awkward social skills, by sci-fi media. I initially studied human rights policy and later took up computer science, viewing AI as a means to champion human rights and forge a brighter future. The blend of technical research and policy leadership in a field rife with unanswered questions keeps my work engaging.

What work in the AI field are you most proud of?

I'm most proud when my expertise resonates with peers in the AI field, especially my work on release considerations in AI system deployment and openness. Seeing my paper on an AI Release Gradient spark discussions among scientists and get used in government reports is validating, a sign I'm on the right track! Personally, I'm deeply motivated by work on cultural value alignment, ensuring systems serve the cultures they're deployed in. Collaborating with my co-author and now dear friend, Christy Dennison, on a Process for Adapting Language Models to Society was a project close to my heart (and involved many debugging hours) that has significantly shaped safety and alignment work today.

How do you navigate the challenges of the male-dominated tech and AI industries?

I've found, and continue to find, my community: from working with company leaders who share my concerns to research co-authors with whom I begin every working session with a mini therapy check-in. Affinity groups play a crucial role in building community and sharing insights. It's important to highlight intersectionality here; my communities of Muslim and BIPOC researchers are a continual source of inspiration.

What advice do you have for women aspiring to join the AI field?

Build a support group whose success is your success. In simpler terms, find your "girl's girl." The women and allies I started this journey with are my go-to for coffee dates and late-night panicked calls before deadlines. One of the best pieces of career advice I've come across came from Arvind Narayanan on Twitter: the "Liam Neeson Principle" of not needing to be the smartest, but having a unique set of skills.

What are some of the key challenges facing the evolution of AI?

The challenges themselves evolve, which makes international coordination on safer systems for all the more crucial. People using and impacted by AI have varying safety preferences and ideas, even within the same country. The issues that arise depend not just on how AI evolves, but also on the deployment environment; safety priorities and capability definitions differ regionally, such as the higher threat of cyberattacks on critical infrastructure in more digitized economies.

What issues should AI users be mindful of?

Technical solutions rarely address risks and harms comprehensively. While users can enhance their AI literacy, it's crucial to invest in multiple safeguards as risks evolve. For instance, I'm enthusiastic about further research into watermarking as a technical tool, alongside the need for coordinated policymaker guidance on distributing generated content, particularly on social media platforms.

What is the responsible way to develop AI?

Involve those affected and continuously reassess how we evaluate and implement safety measures. Both beneficial applications and potential harms evolve, necessitating iterative feedback. The AI safety improvement process should be a collective examination by the field. In 2024, the most popular model evaluations are significantly more robust than those in 2019. While human evaluations are highly useful, evidence of the mental burden and varying costs of human feedback is increasing, making standardized evaluations more appealing.

How can investors better advocate for responsible AI?

Many investors and venture capital firms are already engaging in safety and policy discussions, including through open letters and Congressional testimonies. I'm eager to hear more from investors on what drives small businesses across sectors, especially as we see more AI use outside core tech industries.
