
IWD 2025: Course-correcting AI for the women’s perspective: An open letter to the Tech industry


History has always been written by the "winners" - and until now, that meant men in power. But the future doesn't have to follow the same script. As AI reshapes how we learn, decide, and govern, we have a rare window to course-correct.

I have worked as a lead brand strategist in AI infrastructure since before the days of ChatGPT, and one thing is clear: the datasets used to teach AI models are biased, because history is biased.

We can't put our bots through unconscious bias training; their biases are unintentional rather than unconscious. AI bias typically comes from patterns in training data, societal structures, or the way prompts and responses are framed. A lack of diversity in AI models can have serious consequences: facial recognition tools have come under scrutiny for racial bias, and if you ask ChatGPT for advice on holding authority as a woman, you'll be told to lower your pitch and ditch 'squeaky or high pitched tones'.

AI bots are the first stop for online searches for 68% of U.S. adults. AI training involves harvesting as much data as possible to feed the algorithms, which learn from all types of media: video, podcasts, audio, and text. The reality is that those bots are hungry for datasets, but whose voices are the loudest? If AI models are trained on public data as it stands, the future will be built on a foundation of male-dominated knowledge.

So, why bother?

AI agents are heavily relied on and increasingly viewed as active team members in businesses worldwide. We already know the benefits of diversity in the workplace: diverse teams are 75% more likely to see their ideas become commercialised (World Business Council for Sustainable Development). If our AIs all present the same face and the same opinions, what does that mean for our teams' diversity?

The main argument against diverse and ethical practices in AI is one of input versus output: big tech seems to believe the effort isn't 'worth it' on a commercial level. However, research already shows that including minorities in AI datasets improves performance for the majority too.

The fight against data bias

The reality today is that we can't trust those in charge to prioritise diversity in their AI products, so what can we do?

When working with AI inference, one way to address gender bias is by embedding an ethical code of conduct directly into systems' interactions. This could take the form of a script that prompts the model before generating responses, acknowledging that its training data may be influenced by biases. By explicitly stating these biases and providing counterbalancing examples, you can nudge AI toward more balanced and equitable responses.
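To make that concrete, here is a minimal sketch of the idea using the OpenAI Python client. The preamble wording and the wrapper function are illustrative assumptions, not a production debiasing method:

```python
# Prepend a bias-acknowledging system prompt to every model call.
# The preamble text below is an illustrative example, not a vetted prompt.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

DEBIAS_PREAMBLE = (
    "Your training data over-represents historically dominant voices. "
    "Before answering, consider whether your response assumes a male "
    "default (for example in leadership, expertise, or authority) and "
    "include counterbalancing perspectives and examples where relevant."
)

def ask_with_ethical_preamble(user_prompt: str, model: str = "gpt-4o") -> str:
    """Send a prompt with the debiasing preamble injected as the system message."""
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": DEBIAS_PREAMBLE},
            {"role": "user", "content": user_prompt},
        ],
    )
    return response.choices[0].message.content

print(ask_with_ethical_preamble("How should I hold authority as a woman in meetings?"))
```

The same pattern works with any model that accepts a system message; the point is that the acknowledgement of bias travels with every request rather than relying on the user to supply it.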

Let's look at gender diversity in leadership. Right now, AI is learning leadership from a pretty one-sided playbook. 90% of publicly available speeches come from men. Machines don't know what's missing; they just learn from what they're fed. If we want AI to properly recognise female leadership, we need to change what goes into the system. So, practically, what kind of datasets do we need to curate? Going by Microsoft's training standard, that means training a basic model with 100 hours of speeches from female leaders and 200MB of related text. Doable. But it's not just about quantity - diversity matters too. We need voices from different industries, cultures, and backgrounds to avoid swapping one bias for another.
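As a sketch of what "curating" means in practice, here is a hypothetical sanity check for a candidate dataset against those targets. The manifest schema is an assumption, the 100-hour and 200MB thresholds follow the figures above, and the 40% concentration limit is an illustrative choice:

```python
# Check a curated speech dataset against quantity targets (100 hours of
# audio, 200 MB of text) and flag over-concentration in any one industry
# or region, so one bias isn't simply swapped for another.
from collections import Counter

def check_dataset(manifest: list[dict]) -> None:
    """Each manifest entry: {"speaker": str, "hours": float,
    "text_mb": float, "industry": str, "region": str}."""
    total_hours = sum(e["hours"] for e in manifest)
    total_text_mb = sum(e["text_mb"] for e in manifest)

    print(f"audio: {total_hours:.1f} / 100 hours")
    print(f"text:  {total_text_mb:.0f} / 200 MB")

    # Diversity check: warn if any single industry or region dominates.
    for field in ("industry", "region"):
        counts = Counter(e[field] for e in manifest)
        top, n = counts.most_common(1)[0]
        share = n / len(manifest)
        if share > 0.4:
            print(f"warning: {share:.0%} of entries come from one {field} ({top})")
```

A check like this is trivial to run each time new material is crowdsourced, so the diversity requirement is enforced continuously rather than audited after the fact.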

The work lies in curating and creating new datasets that move the needle. One example is Madam Speaker, an archival project my firm, Your Creative, is part of. This digital archive houses over 200 speeches by women across eight disciplines. The first speeches were curated by volunteer archivists, but the project has since expanded to crowdsourcing new material from the public. The goal is a database of global contributions, complete by 2026, crowdsourcing the 100+ hours of speeches needed to train a model.

Let's train AI differently. Let's create a world where the voice of power isn't just deep and masculine—but rich, diverse, and undeniable.