‘Compounding over time’: Women in tech push to shape AI before it’s too late

Panelists during a session at the Women in Tech Regatta in Seattle on Wednesday. From left: Sarah Studer of the University of Washington, Maria Martin of Nordstrom, Nandita Krishnan of Adobe, and Anya Edelstein of Highspot. (Photo by WiT Regatta)

Women have long been left out of the datasets and decisions that shape everything from car safety to medical diagnosis. Industry leaders warn that artificial intelligence is quickly on track to repeat those patterns.

That was the key message at this week’s Women in Tech Regatta in Seattle, where speakers called for earlier and greater participation in AI development as adoption accelerates.

“The rollout is compounding over time and it’s getting harder to see,” said Anya Edelstein, learning experience manager at Seattle-based Highspot, during an AI leadership panel Wednesday. “If your opinion isn’t heard in the room where those decisions are initially made, it becomes difficult to make a change later.”

In the past few years, researchers have worked to reduce the failures of machine learning models trained on biased or skewed datasets, including models used to diagnose kidney disease in women. Meanwhile, women globally are about 20% less likely than men to use AI tools, further widening the gap in the data those tools learn from.

In the tech sector, at least, the AI gender gap appears to be closing. It’s a significant shift as companies rush to automate at scale, and as concerns about misinformation and data security mount while tools from Anthropic and OpenAI go mainstream.

Women are leading AI strategy – cautiously

The majority of women in senior positions (80%) are driving AI strategy at work, and they prioritize responsible adoption over speed, according to a survey of more than 1,700 industry leaders published earlier this month by Chief, a women-focused leadership network.

That is often at odds with company pressure to deploy AI tools and techniques at an ever-increasing pace, said Maria Martin, director of product management at Nordstrom.

“There is a fine line between a decision being made and a decision being escalated,” Martin said during Wednesday’s panel. “It’s important to go ahead and get involved early.”

Of the group of female executives surveyed, 71% were the first in their companies to flag the risks of AI.

“If we don’t intentionally create interventions at every step along the way,” Edelstein said, “bias has a chance to creep in.”

Bringing women into the room

The challenge of bringing qualified women into AI leadership and decision-making positions may start with hiring. At least two-thirds of recruiters use AI to screen candidates, a process that has been shown to reproduce racial and gender biases.

Attendees connected at the Women in Tech Regatta in Seattle on Wednesday. (Courtesy of WiT Regatta)

In 2024, University of Washington researchers found that AI resume screeners favored male-associated names over female-associated names 89% of the time, and white-associated names over Black-associated names 85% of the time. A year later, UW researchers found that hiring managers tended to mirror the biases of their AI models.

Women and people of color face pressure to conform and code-switch, such as using a gender-neutral name on a resume, before they even enter the office. Once hired, they need to find the right people to support them, said Cynthia Tee, a longtime engineering leader and computer scientist.

Tee suggested that more industry leaders adopt a sponsorship model, which requires greater intention, and carries more perceived risk and cost, than traditional workplace mentoring relationships.

“Keep insisting on promoting the people who deserve it,” Tee said during a panel discussion on navigating the workplace. “Keep bringing different people in through your recruitment pipelines. Keep lifting up the voices that are not being heard.”

An AI conversation for everyone

There can be a confidence barrier to understanding or using AI, in part because of the industry’s “black box” design. Nandita Krishnan, a data scientist at Adobe who builds apps on the side, suggests setting aside time each week to read the latest news and try to automate everyday tasks.

“If you’re vibe coding, do it in a way that still keeps the software secure,” she said on a panel with Edelstein and Martin. “When you build AI systems, it’s usually iterative. Add something to focus the LLM, and give your agent facts or a database to make sure its answers don’t go stale.”

Participation in AI decision-making is not limited to people with technical knowledge. Edelstein suggests identifying the areas of AI you care about, such as education, health care and the environment, and finding industry leaders or companies in those areas to get involved with.

Many workers approach AI out of fear of being left behind, she added, but curiosity leads to better results.

“If we can shift people’s mindset about AI,” she said, “that’s the first step to bringing more people into the conversation.”
