The AI Bot Wars

“A robot may not injure a human being or, through inaction, allow a human being to come to harm.” – Isaac Asimov, I, Robot
Someday in the distant future, automated artificial intelligence bots will wage misinformation, disinformation, fake news, and propaganda (University of Montana) campaigns directly against each other as a form of information and psychological warfare aimed at civilian populations. These campaigns will serve to erode trust, sow confusion, and create chaos within an enemy’s society. Hot wars, waged by humans or by drones and robots, will be necessary only as mop-up operations to consolidate power and assert authority. These wars will let people’s own interpretations and imaginations weaponize messaging against their fellow citizens until a society destroys itself from the inside. A fictionalized account of this type of hybrid warfare, mixing misinformation campaigns, cyberattacks on infrastructure, and conventional military force, was the plot of the recent movie Leave the World Behind.
Bot wars are not the fiction of the distant future, however. They are here today, and they are improving just as rapidly as the underlying artificial intelligence. Long gone are the days of blurry photos of Nessie and shaky video of Bigfoot. Misinformation created by generative AI was a key component of the Iran-Israel conflict of 2024-2025 (EDMO) and has been central to Russia’s online propaganda campaigns (NATO).
Today’s generated images and videos are hyperrealistic and can be determined to be fake only by 1) knowing the context or content to be untrue, or 2) having access to metadata that has not been tampered with. How do we combat this onslaught of misinformation? What role do semantic professionals, including taxonomists and ontologists, have in the war for truth?
Evolution of Bot Wars
Today’s artificial intelligence wars are mostly fought by people generating content. Easy access to cheaper, faster, and better artificial intelligence tools allows any user to generate new images and text rapidly, with little to no skill in video or content editing necessary. Existing content-creation and social-media platforms have expanded both the reach and the audience for user-generated content, real or not. Most of these platforms can’t keep up with content review and provide no mechanism for viewing the content source, including the metadata that may reveal whether the content is real or generated using AI tools. The democratization of content-generation tools has meant an explosion of content (hence “content creator” as, seemingly, a professional job title). These tools have been praised for their ability to let users document, in real time, true events unfolding around them. The same tools let users document, in real time, unreal events manufactured by them, with the same ease as documenting reality. Science fiction will just be fiction, the only science involved being the technical tools used to create the fiction.
I believe the next step in the misinformation wars will be directed, bot-on-bot counter-misinformation campaigns. In fact, these wars may already be happening: fabricated online personas generate content in response to comments which may themselves be the product of other fake personas. Whenever one bot posts generated content, another bot will respond, countering and confusing the messaging. There may be truth in some of the counter-messaging, real content posted in direct response to fictional content. But, really, why bother with the truth at all? One bot can simply respond with equally outrageous content rebutting or retaliating against the first. And since artificial intelligence can generate content so quickly, why not take it a step further and do what any good marketer would do: segment and personalize content to audiences based on their previous social interactions, including posts, likes, and network relationships? Not only can misinformation be generated quickly, it can be tailored to segmented audiences to trigger the most resonant and visceral reactions: fear, rage, mistrust, joy. Eventually, without any direct human intervention at all, people’s confidence in truth erodes and already held beliefs and biases are reinforced. We already talk about echo chambers; the next echo chambers will be bots talking to bots, with segmented human audiences receiving exactly the messaging they would like to hear. Even as I say “will”, these trends are emerging on social media platforms today.
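To make the marketing analogy concrete, here is a minimal, hypothetical sketch of that segmentation step; all data, cluster counts, and message variants are invented for illustration. The pattern is standard marketer-style segmentation: cluster users by their interaction history, then route each cluster the variant most likely to provoke a reaction.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)

# Hypothetical interaction profiles: one row per user, one column per
# engagement signal (e.g., likes, shares, angry reacts, topic follows).
profiles = rng.random((500, 4))

# Marketer-style segmentation: group users by observed behavior...
segments = KMeans(n_clusters=3, n_init=10, random_state=1).fit_predict(profiles)

# ...then send each segment the message framing most likely to resonate.
# (Variant labels are invented for this sketch.)
variants = {0: "fear-framed message",
            1: "outrage-framed message",
            2: "in-group flattery message"}
for seg in range(3):
    print(f"segment {seg}: {np.sum(segments == seg)} users -> {variants[seg]}")
```

Nothing here is exotic; it is the same clustering step used for ordinary ad targeting, which is precisely why misinformation can be personalized at machine speed.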
Recursion
Merriam-Webster defines “recursion” as “a computer programming technique involving the use of a procedure, subroutine, function, or algorithm that calls itself one or more times until a specified condition is met at which time the rest of each repetition is processed from the last one called to the first.” I think it is a great way to describe the more general content feedback loop we currently, and will increasingly, find ourselves in.
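For readers who have not met the term in code, here is a minimal Python example of that definition: a function that calls itself until a specified condition is met, after which the pending work is resolved from the last call back to the first.

```python
def countdown_sum(n: int) -> int:
    """Recursively sum the integers from n down to 0."""
    if n == 0:          # the specified condition that stops the recursion
        return 0
    # Each call invokes itself with a smaller input; the additions are
    # then processed from the last call back to the first.
    return n + countdown_sum(n - 1)

print(countdown_sum(5))  # 5 + 4 + 3 + 2 + 1 + 0 = 15
```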
The recycling of original sources into various new content forms is happening increasingly in media as authoritative, unbiased news sources are replaced by opinionated, subjective, and polarized “news” platforms. Algorithms on popular social media platforms weight toward content with more interactions, positive or negative, and this content drowns out everything else. The number of memes and video clips I see repeated, or rather regurgitated, in my social media feeds gives a false impression that only a narrow range of topics is being covered. The breadth is shaved off at the long tails and only the highest middle of the bell curve is spit out into our feeds. Of course, these feeds are shaped by the content with which we interact, creating an echo chamber of reinforced, narrowly focused subject areas. Even as the overall amount of content expands exponentially, our exposure is limited to what we already think…or, rather, believe. Because belief is replacing authoritative fact. Our friends and feeds reinforce the notion that unpleasant or dissonant facts are a matter of belief rather than any measurable, objective truth.
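As a toy illustration of that dynamic (not any platform’s actual ranking code), the sketch below simulates a feed that always favors the most-interacted-with posts. A small early lead compounds until a handful of posts absorb a disproportionate share of all engagement, which is exactly the shaved-off bell curve described above.

```python
import random

# Hypothetical engagement-weighted feed: each impression goes to a post
# with probability proportional to its current interaction count, and
# every impression earns another interaction (a like, share, or reply).
random.seed(42)
interactions = [1] * 100            # 100 posts start on equal footing

for _ in range(10_000):             # 10,000 impressions
    post = random.choices(range(100), weights=interactions)[0]
    interactions[post] += 1         # more engagement -> more future reach

top10 = sum(sorted(interactions, reverse=True)[:10])
print(f"Top 10 posts captured {top10 / sum(interactions):.0%} of all engagement")
```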
The recursive, or regurgitative, nature of our content sources is going to have long-term effects on the bot wars. As AI bots create more and more content, they will seek out public sources of information and, eventually, feed their own previously created content into their self-guided learning models. Endless loops of self-referencing, recursive, regurgitated, manufactured information will act as the source of truth for new information: an endless entanglement of uncited, untraceable, unverifiable information. As the bots play out their battles, the information will become so convoluted and unprovable that the only thing left will be belief. Even without the bot wars, we find ourselves here today: belief over science or fact, individual belief over public sentiment, personal fictions over established facts.
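One crude way to see why this loop degrades is to simulate it. The sketch below is a deliberately simplified stand-in for generative training, not a real language model: each generation fits a single Gaussian to a small sample of the previous generation’s output and publishes its own samples in turn. The estimated spread shrinks generation over generation, and rare, long-tail content stops being reproduced, an effect the research literature calls “model collapse.” All parameters are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

# Generation 0: "real" content, modeled as a distribution with spread 1.0.
mean, std = 0.0, 1.0

for generation in range(1, 101):
    # Each generation "trains" only on a small, self-referential sample of
    # the previous generation's output, then publishes its own output.
    output = rng.normal(mean, std, size=20)
    mean, std = output.mean(), output.std()
    if generation % 20 == 0:
        print(f"generation {generation:3d}: remaining spread = {std:.4f}")

# The spread decays toward zero: each pass narrows the range of things
# the next generation can ever say.
```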
The Battle for Semantics
From the early days of my career building an academic thesaurus to the present, the overwhelming mission of establishing “Truth” when so many concepts are only contextually true has haunted me. Fundamental, existential questions of being are, of course, at the heart of semantic modeling; ontology is “the philosophical study of being” (Wikipedia), after all. As we watch truth and untruth blend into a bizarre miasma of half-truth in real time, I wonder if other people in the semantic field feel the way I do. I have seen the frustration of scientists as they are dismissed as fraudsters somehow tricking the public into believing humans landed on the moon, vaccines can prevent disease, and fluoride is good for your teeth. Are taxonomy and ontology practitioners feeling the same dispirited frustration as they face the daunting task of asserting truth in a postmodern, truthless world? Will the AI bots win?
In the spirit of never giving up in the face of seemingly insurmountable odds, I offer the following calls to action for semantic professionals, which will at least partially address the coming AI bot wars:
- Lobby for increased use of semantic practices and technologies (taxonomies, ontologies, graph databases) in your organization. The use cases for semantics are real and can be clearly defined. The real work comes in convincing the C-suite that a relatively modest financial investment in graph databases and taxonomy and ontology management software can indeed provide a large ROI.
- Taxonomists and ontologists need to engage directly with subject matter experts to ensure that semantic models accurately reflect the domain(s) they cover. Ongoing data ownership, quality assurance, and SME relationships should be an integrated part of the semantic model governance process.
- Similarly, semantic experts need to seek out and be involved with AI and machine learning activities in the organization. Semantic models often serve as foundational source-of-truth data for machine learning training sets, so ensuring they are accurate and appropriately used in AI projects will help those projects succeed with less risk to the organization.
- Target the most sensitive use cases. Semantic truth is the most convincing in areas in which the organization experiences risk. Find legal use cases tied to public content or product statements. Understand what risks threaten the company and which practical use cases semantic models can address.
- Design transparency into semantic models, including read-only access to taxonomies and ontologies in a variety of visualizations, so end users can understand and utilize them better. A significant part of any taxonomist’s job is helping users understand what taxonomies and ontologies deliver, and letting end users explore for themselves is part of that work (see the first sketch after this list).
- Fight for the same transparency in content UIs, where metadata can be viewed by end users to understand the origin of the content, including whether it was generated by AI (see the second sketch after this list).
- If politically inclined, lobby for AI regulation and policies at the national and international level. Establishing regulations guiding the use, and particularly the transparency, of AI for all users will help to ensure that there are consistent best practices in how we implement and interact with AI and its generated content. In 2024, the European Union passed the AI Act, and more national governments and international organizations should follow suit.
- AI is a new technology, and end users need to understand how it works, at least at a fundamental level. There need to be more programs providing media literacy for the general public, so people can learn to identify and distinguish truth from untruth, especially when it comes to AI-generated content.
- In support of media literacy and metadata transparency, publicly available AI-generated media detection tools need to be more common and easier for a general audience to use. These tools should also give users the ability to flag and identify misinformation for others.
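On the transparency point above (the first sketch promised in the list), one concrete pattern is publishing taxonomies in an open, machine-readable standard such as SKOS, which end users and their tools can browse directly. Below is a minimal sketch using the rdflib Python library; the namespace and concepts are hypothetical.

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, SKOS

# Hypothetical organization namespace for a two-concept example taxonomy.
EX = Namespace("https://example.org/taxonomy/")

g = Graph()
g.bind("skos", SKOS)

# A parent concept and one narrower child, with human-readable labels.
g.add((EX.Misinformation, RDF.type, SKOS.Concept))
g.add((EX.Misinformation, SKOS.prefLabel, Literal("Misinformation", lang="en")))
g.add((EX.Deepfake, RDF.type, SKOS.Concept))
g.add((EX.Deepfake, SKOS.prefLabel, Literal("Deepfake", lang="en")))
g.add((EX.Deepfake, SKOS.broader, EX.Misinformation))
g.add((EX.Misinformation, SKOS.narrower, EX.Deepfake))

# Serialize to Turtle: a read-only view any end user or tool can inspect.
print(g.serialize(format="turtle"))
```

Because SKOS is a W3C standard, the same file feeds visualization tools, search applications, and AI pipelines without locking users into one vendor’s interface.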
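And on the metadata point (the second sketch), here is the kind of provenance readout a content UI could surface, reading a few EXIF tags with the Pillow library. The file path is hypothetical, and EXIF is trivially stripped or forged, which is exactly why untampered, cryptographically signed provenance (for example, C2PA-style content credentials) matters; treat this as illustration, not detection.

```python
from PIL import Image
from PIL.ExifTags import TAGS

def describe_origin(path: str) -> None:
    """Print whatever origin metadata an image carries (often: none)."""
    exif = Image.open(path).getexif()
    if not exif:
        print("No EXIF metadata: origin cannot be verified from the file alone.")
        return
    for tag_id, value in exif.items():
        name = TAGS.get(tag_id, hex(tag_id))
        # Tags that hint at origin: creating software, camera, date, author.
        if name in ("Software", "Make", "Model", "DateTime", "Artist"):
            print(f"{name}: {value}")

describe_origin("example.jpg")  # hypothetical file path
```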
The fight for truth will be partisan, political, frustrating, and even violent. We live in a postmodern world, but the death of truth will benefit those who create the most convincing and appealing misinformation the fastest. Counteracting these misinformation campaigns may very well be the last bastion of democracy.