
About the Center for Gender Equitable AI

Who We Are

The Center for Gender Equitable AI (CGEAI) is a 501(c)(3) youth-led organization addressing the gendered harms emerging from artificial intelligence and other technologies.


We combine research, advocacy, and education to tackle the structural inequities that shape how AI is built and governed. Our work spans from confronting tech-facilitated gender-based violence to advancing the representation of women and girls in AI leadership.


Founded in 2022 as Girls for Algorithmic Justice, we began as a small group of students analyzing how algorithmic bias affected women. As our work expanded to include data privacy, online safety, and representation in AI governance, we evolved into the Center for Gender Equitable AI to reflect a broader mission: rethinking unequal systems and shaping equitable solutions in emerging technology.


Right now, AI != Equitable. We exist to change that.

Problems We're Tackling

AI systems don’t exist in a vacuum: they mirror and amplify the inequalities of the societies that create them.


People of marginalized gender identities are disproportionately affected by AI-related harms, including, but not limited to:


  • AI-driven online harassment and deepfakes: 99% of nonconsensual explicit deepfakes target women and girls. U.S. K-12 students are affected at an alarming rate: data from the U.S. Department of Education suggests that 15% of students have encountered nonconsensual explicit deepfakes.


  • Underrepresentation in tech decision-making: Women make up only 29% of the U.S. AI workforce, and even fewer hold governance or leadership roles. Only about 22% of computer science degrees go to women, and only about 12% of AI researchers worldwide are women.


  • Algorithmic bias: Hiring, credit scoring, and facial recognition systems have been shown to discriminate based on gender and race. Research from UC Berkeley analyzed 133 biased AI systems across different industries and found that 44% of them exhibited gender bias.


  • Femtech privacy risks: Health and reproductive data collected by apps can be shared with third parties without consent. For example, a 2022 audit found that 78% of leading femtech companies failed to obtain user consent for specific data-sharing instances.


These inequalities are interconnected. Our work begins from the belief that representation is a root-cause solution: when underrepresented gender identities are included in designing and governing technology, systems become fairer, safer, and more reflective of the needs of all of humanity.

Our Mission

To make gender equity non-negotiable in the design, deployment, and governance of artificial intelligence. We empower young people to lead the research, education, and advocacy needed to embed gender equity as a core principle of the AI safety movement.

Our Theory of Change

Our approach is rooted in a cyclical praxis of theory, action, and reflection.
1. Theory
We conduct youth-led research on gendered harms in AI to fill knowledge gaps that prevent effective action. Our white papers and op-eds examine issues like femtech data privacy, algorithmic bias, and representation in AI governance. This research forms the intellectual foundation for our advocacy and educational programs.
2. Action
We translate insight into tangible change through campaigns, education, and coalition-building.

For example, our 2026 S.T.O.P. Campaign (Say Something, Take It Down, Offer Support, Punish Perpetrators) equips high schools with toolkits and model policy frameworks for responding to AI-driven harassment. Our #StopExplicitDeepfakes campaign raised awareness of explicit deepfakes, with content garnering over 43,000 views. By working with peer organizations, we connect youth voices to national and international responsible-tech movements.
3. Reflection
We hold listening sessions, facilitate ideation circles, and publish reflections to ensure our work remains responsive to the communities most affected by technological inequities. We also send youth delegates to major convenings to share insights and return with new perspectives for our community.
This cyclical model allows us to evolve our work continuously, grounding each new initiative in evidence, impact, and lived experience.


Our Work in Practice

Since 2022, CGEAI has grown into a coalition of 200+ youth members across 20+ chapters. 


Recent highlights include:

  • Launching the #StopExplicitDeepfakes campaign, which raised awareness of deepfake harms and informed national policy discussions on the TAKE IT DOWN Act.

  • Advising the U.S. Department of Education’s Digital Wellbeing Challenge, a national initiative to promote ethical tech use in schools.

  • Submitting advisory input to the United Nations and representing youth perspectives at leading convenings, such as the Women in Engineering Conference and the United Nations Commission on the Status of Women.

  • Producing 50+ open-access resources and hosting 15+ educational events and workshops.

  • Holding an advocacy fellowship that taught 20+ students about tech policy advocacy basics.

  • Publishing two op-eds in the San Francisco Chronicle.

Our Values

  • Intentionality: We push for AI systems to be built with deliberate attention to the
    needs of marginalized groups, rather than blindly “moving fast and breaking
    things.” We want tech to be developed diligently and to fix, rather than create,
    inequities.


  • Equity as Design Principle: Fairness in AI systems is a priority, not an
    afterthought.


  • Representation as Root-Cause Solution: Including people from underrepresented
    groups in AI design and governance is the most effective way to prevent the harms
    that arise from overlooking marginalized demographics.


  • Youth-Led, Intergenerational in Spirit: We center young voices while collaborating
    across generations in order to create systems that meet the needs of those who
    will inherit them.


  • Critical Hope: We challenge inequity with optimism and an abundance mindset,
    with the belief that equitable tech systems can confer benefits upon the whole of
    society. We insist that safer and more equitable systems are possible alongside
    innovation.
