Pride Month and AI: Why Meta’s AI Strategy Signals a Warning for Tech Leaders

As Pride Month unfolds, Meta’s recent policy changes and AI updates raise urgent questions about digital safety and inclusion. With Facebook and Instagram scoring just 45 percent on GLAAD’s 2025 Social Media Safety Index, the gap between corporate values and product design has never been clearer. In an AI-powered world, bias isn’t just a bug. It’s a business risk.

In April, Meta unveiled its newest large language model, Llama 4, with a promise of increased “neutrality.” A month later, internal documents revealed the company was testing fully automated risk assessments for trust and safety. Around the same time, Meta quietly rewrote its hate speech policy to allow users to describe LGBTQ people as “abnormal” or “mentally ill,” citing political discourse protections.

That’s not neutrality. That’s a message.

And it’s not going unnoticed.

According to the 2025 GLAAD Social Media Safety Index (SMSI), Meta platforms—including Facebook and Instagram—each received a 45 percent score on LGBTQ safety, trailing even TikTok. Threads, Meta’s supposed answer to Twitter’s toxicity problem, fared worse at 40 percent. GLAAD’s conclusion? “Not only are these platforms failing to enforce their own policies, but they’re also actively making changes that worsen conditions for LGBTQ users.”

GLAAD doesn’t just hand out letter grades; it measures platform safety using 12 weighted criteria across four categories: policy, product, culture, and transparency. Those criteria assess not only what companies promise in their policies, but how they follow through in product design, company culture, and public reporting.
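For readers who want to see how that kind of weighted scoring produces a single percentage, here is a minimal sketch in Python. The category weights and platform scores below are invented for illustration only; they are not GLAAD’s actual rubric, weights, or data.

```python
from typing import Dict

# Hypothetical weights for the four categories named above.
# These are illustrative values, NOT GLAAD's actual SMSI methodology.
WEIGHTS: Dict[str, float] = {
    "policy": 0.30,
    "product": 0.30,
    "culture": 0.20,
    "transparency": 0.20,
}

def weighted_index(scores: Dict[str, float], weights: Dict[str, float]) -> float:
    """Roll per-category scores (0-100) up into one weighted percentage."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(scores[category] * weights[category] for category in weights)

# Invented example scores for a single platform.
platform_scores = {"policy": 40, "product": 40, "culture": 50, "transparency": 55}
print(f"Overall index: {weighted_index(platform_scores, WEIGHTS):.0f}%")  # Overall index: 45%
```

The point isn’t the exact numbers; it’s that a handful of weighted judgments about policy, product, culture, and transparency roll up into the headline scores cited above.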

These metrics are important. They aren’t just about values; they’re about risk. Reputational risk. Talent risk. Regulatory risk. All of which compounds into business risk.

Meta’s actions raise the stakes: When one of the world’s largest tech companies deprioritizes digital safety, it doesn’t just affect LGBTQ communities. It trains the AI that trains the internet. It shapes the tone of enterprise products that smaller vendors mimic. It sends a signal that inclusive design is optional, when in fact it’s the only future-ready strategy left standing.

This isn’t a culture war. It’s an infrastructure problem.

Every investor report talks about “trust,” and every product roadmap includes “AI.” But very few are asking: Whose trust? Trained by what AI?

We already know what happens when content moderation is gutted, and nuance is fed into machine logic. False positives rise. Harmful content is left unchecked. Algorithms start reading identity as ideology. And users—especially queer, trans, and gender-expansive users—lose their voice, their audience, or worse.

There’s a cost to building models that can write code and poetry but are explicitly trained not to identify slurs and harmful language. There’s a risk in launching AI assistants optimized for efficiency but not for empathy.

And there’s a question we don’t ask often enough:
What kind of world are we training AI to believe is normal?

This month, as so many organizations pull back on their public support of LGBTQIA+ communities, it raises a deeper question:
What are we, as social media and AI users, doing to help or hinder inclusion online, via AI, and beyond?

Not a call to action. A call to awareness.
Because the data doesn’t just reflect the world we live in—it reflects the one we’re choosing to build.
