Why women and gender non-conforming people will build a safer, more ethical AI world

Those closest to the problem will be closest to the solution.

I’ve just come off a weekend at the AI-Powered Women summit at MIT. I spent two days immersed in a thoughtful and innovative space where powerful and intelligent women wrestled with some of the most challenging questions in an AI-powered world. I walked away with two primary emotions: fear that the tools so rapidly entering the marketplace and everyday life will have a lasting negative impact, and hope that the right people are tackling this issue head-on.

AI is not neutral. It’s being built inside systems designed by and for those who have historically held power. These people are most likely male, white, cisgender, English-speaking, heterosexual, and from a privileged social class — those who carry identities that often allow them not to consider the experiences of others when pursuing their own gain. And what we are seeing as they chase that gain is a willingness to treat “speed,” “scale,” and “disruption” as high virtues, even when people are harmed in the process. Their promise for AI in 2025 is one that prioritizes capitalist efficiency through workforce automation, the elimination of jobs, and the proliferation of AI-powered workplace solutions. Sadly, I’ve watched this decimate teams, burn through millions if not billions of dollars, and leave some people crowdsourcing rent money on LinkedIn.

It’s easy to become jaded in this environment.

When I attended the summit, I was encouraged to see so much emphasis placed on ethics, mindfulness, and an approach that situates AI within a human context rather than letting it overtake that context. I met founders using AI to tackle real-world problems that went much deeper than decreasing overhead for shareholders. This was a space that embraced innovation while simultaneously chanting a “do no harm” mantra. Now that I’m looking back on this experience, I am convinced that women and gender non-conforming people are best suited, and most likely, to help us create a safe, ethical AI future.

Here’s why:

  1. These are humans who often experience firsthand the consequences of systemic bias, exclusion, and safety gaps in products, workplaces, and society. This perspective sharpens their ability to see where AI reproduces or amplifies harm, and motivates them to design guardrails that others may overlook.

  2. Across sectors, these humans are disproportionately represented in roles focused on ethics, care, trust, and human-centered design. This expertise aligns closely with what’s needed for responsible AI — balancing efficiency with empathy, and innovation with accountability.

  3. There is an increased focus on systems over individuals. These are leaders who approach problems more holistically and see the interconnectedness of business, social, political, and personal outcomes. This is critical for AI governance, where the risks and ripple effects of decision-making must be anticipated.

  4. Much like having a systems focus, there is also a greater likelihood of taking community-centered approaches that emphasize collaboration, transparency, and accountability. These are humans who recognize the negative impacts of top-down control, and who understand that for us to live in an ethical AI world, we should be building AI ecosystems rather than tools.

I will say that intersectionality will play a major role here, and spaces that do not include the voices and experiences of people of color, people with disabilities, LGBTQ people, and people from other underserved identities are likely to miss important elements of safe and ethical AI. It will also be necessary to bring in allies whose voices can amplify the cause. The future of ethical AI won’t come from replicating the same hierarchies that created today’s harms.

We will know we are headed in the right direction when we see AI not just as a tool for efficiency, but as a means to define and solve real human problems.

It’s not just about what AI can do, but what it should do.
