
3 ways to reduce bias in AI for better context

Of all the concerns marketers have when bringing AI into decision-making, there’s one we don’t talk about enough: Are we too quick to assume that AI knows what’s going on in our heads when we’re building models?

This stems from a growing concern about introducing bias when we assemble information and formulate questions. Bias can creep in when we ignore context and nuance – information that lives in our heads, that we call on when making decisions ourselves but forget to share when working with AI.

Why is context important?

I could just assume you know what context is and why we need to provide it when we create our questions. But then you might miss the reasons I think it's so important, my points wouldn't have the same impact, and your understanding could be colored or distorted.

The same could happen if we rely too much on the power of AI to think.

Context is what we give our AI model to help it plan, analyze and report results and insights accurately. It’s like adding scenarios when building an automated email workflow.

This goes beyond basic questions about which model to use and what to use it for. We must remember that we have an incredibly powerful tool, but it is not omniscient. We have to think about how we use it and what information we need to provide to get accurate, useful analysis.

I get it. We use AI and think it knows everything, or that our context doesn’t matter. But this ignores my main point. AI knows a lot, but only you know the context in which you ask questions.

In short, AI cannot read our minds. Yet we often write questions as if it could, and that colors the answers AI gives us.


3 ways to guard against bias when using AI

Here are three steps you should follow to get the most valuable results from your AI queries.

1. Provide context and nuance

I spoke with executives at a company where the CEO, while experimenting with an AI model, inadvertently uploaded sensitive company performance data in raw form and asked the model to interpret it.

Beyond failing to ensure the data would not be shared outside the company, this executive went wrong in two other important ways:

  • By providing only raw data, he gave the AI model no context to consider when analyzing the information and shaping its responses.
  • He worded the instructions in a way that anticipated a negative result, effectively asking the model to confirm his bias.

The AI model's training led it to follow that framing. Without context, it could not reason beyond the negativity embedded in the instructions.

The resulting recommendations – surprise! – were relentlessly negative. Had the company acted on that biased output, it would have gone down the wrong path.

We assume a machine will pick up nuances in word choice or tone of voice the way a human would. Or we expect it to reason from previous experiences that are not part of its training data.

I see marketers make this mistake as they explore using AI in their marketing programs. They treat AI as a strategy rather than part of a strategy.

As with everything in marketing (and life, if you think about it), strategy must come before tactics. You develop a strategy first (what you want to achieve and why), and that strategy guides your tactical decisions. AI is, above all, a tactic – a tool that helps you execute your strategy and reach your goal.

As part of developing that strategy, we now have to define how to avoid bias and how to recognize it in both our inputs and the model's outputs. We also need to identify the context required to build a reliable model.

That step should come first, and you can't rush it. Skipping it means the information you enter will be incomplete and your analysis will be flawed.
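To make the "context first" idea concrete, here is a minimal sketch of assembling a prompt that pairs raw data with explicit business context instead of sending the data alone. The function name, sample data and context facts are hypothetical illustrations, not a real company's figures or a specific AI vendor's API.

```python
# Hypothetical sketch: pairing raw data with explicit context before it
# ever reaches an AI model, so the model doesn't have to guess (or invent)
# the circumstances behind the numbers.

def build_prompt(raw_data: str, context: list[str], question: str) -> str:
    """Combine raw data with the business context needed to interpret it."""
    context_block = "\n".join(f"- {fact}" for fact in context)
    return (
        "You are analyzing company performance data.\n"
        f"Relevant business context:\n{context_block}\n\n"
        f"Data:\n{raw_data}\n\n"
        f"Task: {question}\n"
        "If any context you need is missing, say so before concluding."
    )

prompt = build_prompt(
    raw_data="Q3 revenue: -4% vs. Q2",
    context=[
        "Q3 is historically our slowest quarter (seasonal business).",
        "A major product launch was deliberately moved to Q4.",
    ],
    question="Assess quarterly performance and suggest next steps.",
)
print(prompt)
```

With the context lines included, a dip that looks alarming in raw form reads as expected seasonality; strip them out and the model can only mirror the negativity in the numbers.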

2. Provide enough information to help your AI model make the best decisions

How do you avoid erroneous results? One way is to do what I did when training one of my AI models for business use: I uploaded about 47 different files – contracts, PowerPoints, articles and dozens of other sources – which gave the model a complete context for the topic I was researching.

I did one thing that AI experts don’t discuss much.

I asked the model, “What do you need to know? What information is missing?” This helps the model close its gaps and avoid making decisions without important information, such as context.
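The "what's missing?" step can also be run as a checklist on your own side before you ever query the model. This is a hypothetical, deterministic stand-in for that question: the category names and the helper are my illustration, not the author's tooling.

```python
# Hypothetical sketch: a local "what's missing?" checklist that compares
# the context you've supplied against the categories a reliable analysis
# typically needs. Category names are illustrative assumptions.

REQUIRED_CONTEXT = {
    "goals",        # what outcome the analysis should serve
    "constraints",  # budget, legal and brand rules
    "history",      # past campaigns and prior results
    "audience",     # who the work targets
}

def missing_context(provided: set[str]) -> set[str]:
    """Return the context categories not yet supplied to the model."""
    return REQUIRED_CONTEXT - provided

gaps = missing_context({"goals", "history"})
print(sorted(gaps))  # ['audience', 'constraints']
```

Running the checklist before uploading anything surfaces the same gaps the model would otherwise have to guess around.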

We hear every day about companies replacing employees with AI. The latest is Block, the company behind Square, Cash App and Afterpay. CEO Jack Dorsey said that the smaller workforce “will move faster with smaller, highly skilled teams that use AI to do more work.”

Fine. But human workers provide the context AI models need to deliver better results. An AI model has only the context we give it. If we don't take that step seriously, bias will harm our companies.

Here is another example. Analysis is one of the best uses of AI. It can quickly surface information you can use to assess growth, losses or opportunities you might not find any other way.

When I upload my email sending data and ask my AI model to analyze it and suggest alternative campaign schedules, I need to explain that we send emails on Wednesdays and Fridays because that's when our inventory numbers are updated – and that we believe our subscribers open most of our emails on Saturday mornings. If you don't add that context, you shortchange the analysis.

You need to add that step to your AI analytics strategy. That's when you say, “Here's what I know, and here's what informs my decisions.”
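The email example above can be sketched as code: the business rules get written down as explicit, structured context and prefixed to the analysis request instead of staying in your head. The rule values come from the article's example; the dictionary keys and function name are hypothetical.

```python
# Hypothetical sketch: encoding the article's email business rules as
# explicit context that travels with every analysis request.

BUSINESS_RULES = {
    "send_days": ["Wednesday", "Friday"],
    "send_day_reason": "inventory numbers are updated on those days",
    "peak_open_window": "Saturday mornings",
}

def annotate_request(task: str, rules: dict) -> str:
    """Prefix an analysis request with the rules that constrain it."""
    lines = [f"- {key}: {value}" for key, value in rules.items()]
    return "Known business rules:\n" + "\n".join(lines) + f"\n\nTask: {task}"

request = annotate_request(
    "Analyze this send data and suggest alternative campaign schedules.",
    BUSINESS_RULES,
)
print(request)
```

A model that sees the send-day rationale won't "optimize" the schedule away from the days your inventory data actually refreshes.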

This step is what I call remembrance: writing down everything you know about how you make decisions in your job, so that when you leave, the next person in your seat has a complete knowledge base.

You may hesitate to do that because it means giving up your secret sauce – the substance and value you bring to your work.

But you have to get past that. Your AI model needs all of that information to make decisions based on what you know.

It doesn't end there. You should always look for holes in the explanation. Don't dismiss caveats or questionable findings. Don't assume your model knows what you know. Don't think you can fix the problem later.

There is a discipline to this, and our leadership needs to make sure we follow it.

3. Use incremental innovation to uncover bias and add context

Big breakthroughs grab attention and dominate the talking points at business conferences, but they rarely lead to sustainable, manageable change.

AI fuels a desire for rapid transformation. AI technology vendors are selling the C-suite the dream of a big, company-changing breakthrough. Executives think that sounds great. Shareholders will love it. The board of directors will sing its praises.

But can a director, executive director, manager, vice president or senior vice president make it work?

Incremental innovation is a more effective method. It takes small steps to build something big: you make one change, learn from the result, and build on what you learned to take the next step. Each step is a proof point that can reveal a gap or weakness. In AI terms, that means uncovering where a biased or off-topic question can mislead you.

Yes, it can take longer than sweeping change, and these days we rarely find the time needed to make informed, sustainable changes. But it can produce better results in the long run.

You learn all the nuances of the context. You can put two people on the same project, working from the same knowledge base, and see whether the output is consistent.

This is not to say that big moves are useless. But at this stage, you have to ask the hard questions:

  • Are these changes realistic?
  • Do we have guardrails set up?
  • Have we built in appropriate caution?
  • How do we make sure we don't get into trouble?

A marketer recently told me, “When AI starts publishing ads and emails, some companies will make mistakes. They will go out too fast, say too much and overpromise, because someone somewhere will trust a machine to make all the decisions – and that will be a wrong move.

“Those decisions will be misinformed because they lack context and are biased, and that bias is hard to spot when the output looks polished.”

The output of AI is only as good as its input

AI is a powerful tool. The technology is moving faster every day, and we can't slow it down long enough to set up guardrails and regulations.

But as responsible marketers, we have to do it. No one wants to be the person who clicks a button and sends a campaign that was horribly flawed because we didn’t take bias or context into account.

This doesn't mean we should stop using AI (far from it). Every marketer should use AI in ways that work best for their programs. But we must be thoughtful and responsible in how we use and manage our tools.

Just remember this: AI can't crawl inside your mind and learn how long you've been at the company, the conversations you have with co-workers, your preferences or your company's rules. Take the time to make sure you're accounting for bias and context as you develop your strategy.


Important takeaways

  • AI results are only as reliable as the context and assumptions built into the information.
  • Missing context introduces bias by forcing the AI to interpret incomplete or misleading input.
  • Marketers must treat AI as a tool within a defined strategy, not as a decision maker.
  • Providing detailed input, including business rules and constraints, improves accuracy and consistency.
  • Incremental testing helps identify biases early and refine how context is used over time.
