Artificial intelligence and the information war of the future

We work with AI every day: we use it ourselves and we see the tactics the enemy employs. But the issue needs to be viewed more broadly than our war with Russia alone.
In China, the government has obliged all AI companies to build in "ideological protection" mechanisms. As a result, models such as Baidu's ERNIE censor references to Tiananmen or Taiwan at the query level.
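As a rough illustration, query-level filtering can be as simple as a blocklist check that rejects a prompt before the model ever sees it. The blocklist, function names, and refusal text below are illustrative assumptions, not Baidu's actual mechanism.

```python
# A minimal sketch of query-level censorship: the filter runs before
# the model ever generates anything. Blocklist and refusal message
# are illustrative; this is not ERNIE's real implementation.
BLOCKLIST = ["tiananmen", "taiwan independence"]

def answer(prompt: str) -> str:
    lowered = prompt.lower()
    if any(term in lowered for term in BLOCKLIST):
        # The request is rejected outright; no generation happens.
        return "Sorry, I cannot answer that question."
    return generate(prompt)

def generate(prompt: str) -> str:
    # Stand-in for the real model call.
    return f"[model response to: {prompt}]"
```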
Russia is testing neural networks capable of generating fake news in real time, imitating the voices of Ukrainian military personnel, and creating deepfake video "testimonies" to feed social media fakes.
This is no longer a theory, but a practice that we see every day.
The US is investing millions of dollars in companies like OpenAI and Anthropic and in programs like DARPA's AI Next.
The goal: systems that identify information attacks before they start to spread.
There are many public developments, but even more are closed to the public and used for military purposes.
Here is a scenario for the coming years.
Let's imagine the year 2027. During a hybrid escalation, Russia launches thousands of AI bots that imitate Ukrainian volunteers, doctors, and veterans. They spread narratives of "despair", "betrayal", and "corruption", backed by hyper-realistic photos and videos created by generative models, by texts, and by entire social media projects, including teenage and children's content, a space where Russia is already active.
China is simultaneously working in Africa and South Asia to promote anti-Western narratives through localized AI models that speak local languages and are culturally adapted.
Adapting a model to a specific culture does not take much effort: information is parsed from social networks – posts by opinion leaders, comments, content – and these arrays of data are fed into training, turning the model into a clone of a citizen of a particular state in the information space, teaching it to think in that country's mentality.
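A minimal sketch of what such a pipeline can look like, assuming the scraped posts already sit in a local JSONL file. The base model name, file path, and hyperparameters are placeholders for illustration, not a known operational system.

```python
# Sketch: continued pre-training of an open base model on scraped
# social media text, which shifts its vocabulary, idioms, and framing
# toward the target audience. Model name and path are hypothetical.
import json

from datasets import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

BASE_MODEL = "gpt2"                 # stand-in for any open base model
POSTS_FILE = "scraped_posts.jsonl"  # hypothetical dump of posts/comments

# Load the scraped corpus into a dataset.
with open(POSTS_FILE, encoding="utf-8") as f:
    texts = [json.loads(line)["text"] for line in f]
ds = Dataset.from_dict({"text": texts})

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(BASE_MODEL)

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

ds = ds.map(tokenize, batched=True, remove_columns=["text"])

# One pass over the corpus is often enough to noticeably shift style.
trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="clone", num_train_epochs=1),
    train_dataset=ds,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

The point of the sketch is how low the bar is: no new architecture, just a generic fine-tuning loop over freely scraped text.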
In response, the West is creating "digital front lines" – AI systems that monitor the information space 24/7 and detect botnets, distortions of fact, and deepfakes. But the problem is that even the truth is becoming hard to distinguish, because genuine content is itself being stylized as fake.
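One signal such monitoring systems commonly rely on is near-duplicate text posted by distinct accounts in a short window. The sketch below, with illustrative data and an assumed threshold, flags that pattern using TF-IDF cosine similarity.

```python
# Sketch of one botnet signal: clusters of accounts posting
# near-identical text. Data and threshold are illustrative.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

posts = [  # (account, text) pairs from a hypothetical monitoring feed
    ("acct_1", "Nobody believes the authorities anymore, total betrayal"),
    ("acct_2", "nobody believes the authorities anymore... total betrayal!!!"),
    ("acct_3", "Great weather in Lviv today, going hiking"),
]

accounts, texts = zip(*posts)
vectors = TfidfVectorizer().fit_transform(texts)
sim = cosine_similarity(vectors)

# Near-duplicate wording across distinct accounts is a classic
# marker of coordinated inauthentic behavior.
THRESHOLD = 0.9
for i in range(len(posts)):
    for j in range(i + 1, len(posts)):
        if sim[i, j] > THRESHOLD and accounts[i] != accounts[j]:
            print(f"possible coordination: {accounts[i]} <-> {accounts[j]}")
```

Real systems layer many such signals (posting cadence, account age, network structure) on top of text similarity; this shows only the simplest one.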
We are moving towards a world where there will be no "one truth" but millions of fragmented realities. And the one who controls the algorithm controls consciousness.
BigTech companies developing AI will wield enormous power. But they will not be the only ones.
In Ukraine, this is a matter of survival. Because our front is not only geographical, but also informational.
And, of course, in the use of AI technologies we are and will remain among the leaders, both in countermeasures and in the technologies that let the state defend its interests in the information zones where they are at stake.