
Women in AI: Irene Solaiman, head of global policy at Hugging Face

This profile is part of a series of interviews highlighting women academics and others who have made significant contributions to the AI revolution. Additional pieces will be published throughout the year to showcase important but often overlooked work in the ongoing AI boom. Explore additional profiles here.

Irene Solaiman began her career in AI as a researcher and public policy manager at OpenAI, where she led a new approach to releasing GPT-2, a predecessor to ChatGPT. After nearly a year as an AI policy manager at Zillow, she joined Hugging Face as head of global policy. In that role, she develops and oversees the company’s AI policy globally and conducts sociotechnical research.

Solaiman also advises the Institute of Electrical and Electronics Engineers (IEEE) on AI matters and is a recognized AI expert at the intergovernmental Organization for Economic Co-operation and Development (OECD).

Irene Solaiman, head of global policy at Hugging Face
How did you initially enter the field of AI? What drew you to this field of study?
Nonlinear career paths are common in AI. Like many teenagers with awkward social skills, I found my interests through sci-fi media. I first studied human rights policy and then took computer science courses, because I see AI as a tool for advancing human rights and building a brighter future. Being able to do technical research and lead policy in a field with so many unanswered questions and untaken paths keeps my work stimulating.

Which project in the field of AI are you most proud of?
I am most proud when my expertise resonates with people across the AI field, especially my work on release considerations in the complex landscape of AI system releases and openness. It is gratifying to see my paper on an AI release gradient framework for technical deployment prompting discussion among scientists and being cited in government reports; it tells me I am working in the right direction. One area of particular interest to me is cultural value alignment, which is about ensuring systems work best for the cultures in which they are deployed. Working with my co-author, Christy Dennison, on a process for adapting language models to society was a significant undertaking that has had a lasting influence on safety and alignment work.

How does one address the obstacles presented by the predominantly male tech and AI industries?
I have found a supportive network: I work with company leaders who care deeply about the same issues I prioritize, and with research collaborators I can connect with personally before diving into a work session. Affinity groups are hugely helpful for building community and exchanging advice. Intersectionality matters here; my communities of Muslim and BIPOC researchers are a continual source of inspiration.

What advice do you have for women looking to pursue a career in the AI field?
Build a support group whose success is your success. In youthful terms, that is what I would call a “girl’s girl.” The women and allies I started my career with are still my go-to companions for coffee and last-minute deadline support. Arvind Narayanan shared valuable career advice on Twitter about the importance of having a unique skill set rather than being the smartest person in the room.

What are the key challenges that AI is currently facing as it progresses?
The most pressing issues are constantly evolving, so the overarching answer is international coordination to build safer systems for everyone. The people who use and are affected by these systems, even within the same country, have different preferences and different ideas of what safety means. How AI develops, and the particular environments in which it is deployed, will shape the challenges that emerge: safety concerns and definitions of capability differ across regions, leading to different priorities, such as the higher risk of cyberattacks on critical infrastructure in more digitalized economies.

What are the key considerations for AI users?
Technical solutions rarely cover the full range of risks and harms. While there are steps users can take to increase their AI literacy, it is important to invest in a variety of safeguards as risks evolve. For instance, I am excited about further research on watermarking as a technical tool, and we also need coordinated guidance from policymakers on the distribution of generated content, especially on social media platforms.

What is the most effective approach to ethically developing AI?
By centering the people who are affected and continuously reassessing how we evaluate and implement safety measures. Beneficial uses and potential harms are constantly evolving and require iterative feedback, and how we improve AI safety should be examined collectively as a field. Model evaluations in 2024 are much more robust than those of 2019, and today I have a stronger preference for technical evaluations over red-teaming. I find human evaluations highly valuable, but as more evidence emerges of the mental burden and disparate costs of human feedback, I am increasingly in favor of standardizing evaluations.

How can investors advocate for responsible AI more effectively?
Many already are. It is encouraging to see so many investors and venture capital firms engaging in safety and policy conversations through open letters and congressional testimony. I am eager to hear more from investors about what motivates small businesses across sectors, especially as AI adoption grows in non-traditional tech fields.
