Meta Says Its Latest AI Model Is Less Woke, More Like Elon’s Grok

by oqtey
Meta says its latest artificial intelligence model, Llama 4, will answer more politically divisive questions and offer diverse viewpoints.

Meta says that its latest AI model, Llama 4, is less politically biased than its predecessors. The company says it has accomplished this in part by permitting the model to answer more politically divisive questions, and it added that Llama 4 now compares favorably to Grok, the “non-woke” chatbot from Elon Musk’s startup xAI, in its lack of political lean.

“Our goal is to remove bias from our AI models and to make sure that Llama can understand and articulate both sides of a contentious issue,” Meta wrote in a blog post. “As part of this work, we’re continuing to make Llama more responsive so that it answers questions, can respond to a variety of different viewpoints without passing judgment, and doesn’t favor some views over others.”

One concern raised by skeptics of large models developed by a handful of companies is the control over the information sphere that such concentration can produce. Whoever controls the AI models can essentially control the information people receive, moving the dials in whichever direction they please. This is nothing new, of course. Internet platforms have long used algorithms to decide what content to surface. Meta, in particular, is still being attacked by conservatives, many of whom insist that the company has suppressed right-leaning viewpoints, despite the fact that conservative content has historically been far more popular on Facebook. CEO Mark Zuckerberg has been working overtime to win favor with the administration in hopes of avoiding regulatory headaches.

In its blog post, Meta stressed that its changes to Llama 4 are specifically meant to make the model less liberal. “It’s well-known that all leading LLMs have had issues with bias—specifically, they historically have leaned left when it comes to debated political and social topics,” it wrote. “This is due to the types of training data available on the internet.” The company has not disclosed the data it used to train Llama 4, but Meta and other model developers are known to have relied on pirated books and on scraping websites without authorization.

One of the problems with optimizing for “balance” is that it can create a false equivalence and lend credibility to bad-faith arguments that are not grounded in empirical, scientific data. In a practice known colloquially as “bothsidesism,” some in the media feel a responsibility to give equal weight to opposing viewpoints, even when one side is making a data-based argument and the other is spouting conspiracy theories. QAnon, for example, was interesting as a phenomenon, but it represented a fringe movement that never reflected the views of very many Americans and was perhaps given more airtime than it deserved.

The leading AI models still have a pernicious problem with factual accuracy: they frequently fabricate information and present it as true. AI has many useful applications, but as an information retrieval system it remains dangerous to use. Large language models spout incorrect information with confidence, and all the old intuitions for gauging whether a website is legitimate go out the window.

AI models do have a problem with bias—image recognition models have been known to struggle to recognize people of color, for instance, and women are often depicted in sexualized ways, such as in scantily clad outfits. Bias shows up in more innocuous forms, too: it can be easy to spot AI-generated text by the frequent appearance of em dashes, punctuation favored by journalists and other writers who produce much of the content models are trained on. Models tend to reflect the popular, mainstream views of the general public.

But Zuckerberg sees an opportunity to curry President Trump’s favor and is doing what is politically expedient, so Meta is specifically telegraphing that its model will be less liberal. The next time you use one of Meta’s AI products, it might be willing to argue in favor of curing COVID-19 with horse tranquilizers.
