Technology & AI

Who decides what AI tells you? Campbell Brown, a former Meta news executive, has some thoughts

Campbell Brown has spent her career chasing accurate information, first as a prominent TV anchor, then as Facebook’s first, and only, dedicated news executive. Now, as she watches AI reshape the way people consume information, she sees history threatening to repeat itself. This time, she isn’t waiting for someone else to fix it.

Her company, Forum AI, which she discussed recently with TechCrunch’s Tim Fernholz at a StrictlyVC night in San Francisco, evaluates how foundation models perform on what she calls “very high-level topics”: geopolitics, mental health, finance, recruitment, subjects where “there are no clear yes-or-no answers, where it’s complex and complicated.”

The idea is to recruit the best experts in the world, have them build evaluation benchmarks, and then train AI judges to score the leading models against them. For Forum AI’s geopolitics work, Brown recruited Niall Ferguson, Fareed Zakaria, former Secretary of State Antony Blinken, former Speaker of the House Kevin McCarthy, and Anne Neuberger, who led cybersecurity policy in the Biden administration. The goal is for the AI judges to agree with those human experts about 90% of the time, a threshold Forum AI has been able to reach.
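The agreement threshold described above can be sketched as a simple comparison between an AI judge’s verdicts and human experts’ verdicts on the same items. Everything here, the labels, the function name, and the 90% cutoff applied in code, is an illustrative assumption, not Forum AI’s actual methodology.

```python
# Hypothetical sketch: measure how often an AI judge's verdicts match
# human experts' verdicts on the same evaluated items, then check the
# result against an (assumed) 90% agreement threshold.

def agreement_rate(judge_labels, expert_labels):
    """Fraction of items on which the AI judge matches the human expert."""
    if len(judge_labels) != len(expert_labels):
        raise ValueError("label lists must be the same length")
    matches = sum(j == e for j, e in zip(judge_labels, expert_labels))
    return matches / len(judge_labels)

# Illustrative verdicts on five model answers (categories are made up).
expert = ["accurate", "biased", "accurate", "missing-context", "accurate"]
judge  = ["accurate", "biased", "accurate", "accurate",        "accurate"]

rate = agreement_rate(judge, expert)
print(f"agreement: {rate:.0%}")   # 4 of 5 verdicts match -> prints "agreement: 80%"
print("meets 90% threshold:", rate >= 0.90)
```

In practice an evaluation shop would likely use a chance-corrected statistic such as Cohen’s kappa rather than raw agreement, since raw agreement is inflated when one label dominates.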

Brown traces the origins of Forum AI, which was founded 17 months ago in New York, to a specific moment. “I was at Meta when ChatGPT was first released publicly,” she recalls, “and I really remember right after realizing that this was going to be a funnel for all the information to flow in. And it’s not very good.” What made the moment feel real was a thought about her children. “My kids are going to be really dumb if we don’t figure out how to fix this,” she remembers thinking.

What really frustrated her was that accuracy seemed to be nobody’s priority. Foundation model companies, she said, are “too focused on coding and math,” while news and information are harder to get right. But, she argued, getting them right is not optional.

Indeed, when Forum AI began testing frontier models, the findings were not exactly encouraging. She cited Gemini surfacing Chinese Communist Party websites for “non-China issues,” and noted a left-leaning political bias in nearly every model. Subtler failures are common too, she said, including missing context, omitted perspectives, and one-sided framing presented without acknowledgment. “There is still a long way to go,” she said. “But I also think there are some simple fixes that could greatly improve the results.”

Brown spent years at Facebook watching what happens when a platform gets information wrong. “We’ve failed at most things we’ve tried,” she told Fernholz. The fact-checking system she built is no more. The lesson, even if social media ignored it, is that optimizing for engagement rather than accuracy degraded public discourse and left many people less informed.

Her hope is that AI can break that cycle. “This moment could go either way,” she said; companies can give users what they want to hear, or they can “give people what is true.” She admitted the optimistic version of that, AI optimized for truth, might sound naive. But she thinks business could be an unlikely ally here. Companies are using AI for credit decisions, lending, insurance, and hiring, and “they’re going to want to get it right.”

That business need is what Forum AI is betting on, though turning compliance interest into consistent revenue remains a challenge, especially since much of the current market is still satisfied with check-box testing and standard metrics that Brown considers insufficient.

The state of compliance, she said, is “a joke.” When New York City passed the first law requiring bias audits of AI hiring tools, regulators found that more than half of the violations went undetected. Real testing, she said, requires domain expertise, probing not only known failure modes but the edge cases that “can get you into trouble that people don’t think about.” And that work takes time. “Smart generalists won’t cut it.”

Brown, whose company raised $3 million last fall in a round led by Lerer Hippeau, is well positioned to explain the disconnect between the AI industry’s self-image and the reality many users experience. “You hear leaders of big tech companies say, ‘This technology is going to change the world,’ ‘it’s going to put you out of work,’ ‘it’s going to cure cancer,’” she said. “But for the average person who uses a chatbot to ask basic questions, they still get a lot of simple and wrong answers.”

Trust in AI remains low, and she thinks the skepticism is, in most cases, justified. “A conversation is happening in Silicon Valley about one thing, and a completely different conversation is happening among consumers.”

