Both cinema and experts today proclaim that the biggest threats from AI will arise from the full autonomy of systems whose inner workings we hardly understand. Of course, in the face of such an unknown, it is easy to see why the fear of the machines, whether that they will get rid of us or erode our freedoms wholesale, feels justified.
My own examination of the matter points to a more intimate threat: disproportionate access to the capabilities of machine intelligence, resulting from the hoarding of AI by big tech, governments, and the generally powerful. I belong to the school of thought that holds that even before the machines ever attain full autonomy, if indeed that is possible, we might ourselves do quite a number on each other with these tools.
Exclusive access to such unparalleled multi-domain power by a few may draw quite some blood before, and even after, the machines assume full autonomy.
Consider, for example, the most disastrous effect of AI right now: the proliferation of deepfakes.
Deepfakes, of course, distort our view of truth and cause us to doubt even things we previously considered reliable. Avenues where photographic or audio records once served as evidence are brought into disrepute.
The emergent paradigm cuts both ways: while anyone can be faked into any situation, anyone honestly depicted in any scenario can now be almost boldly refuted.
Consistent metadata and watermarking might take us some distance along the mitigation path, but in general they can only go so far. Crucially, enforcing such measures may also rule out authentic documents: genuine material could be falsely watermarked, or rejected as inadmissible simply because it was never registered, nullifying the authenticity of otherwise valid records. The sketch below makes this failure mode concrete.
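As a minimal sketch, and not any real provenance standard, consider a toy scheme in which admissibility reduces to holding a registered signing key. Every name here (REGISTERED_KEYS, sign, is_admissible) is hypothetical, and the HMAC tag stands in for whatever watermark a real system would use.

```python
import hashlib
import hmac

# Hypothetical registry of signing keys held by "registered" authorities.
# Anyone outside this registry cannot produce an accepted tag.
REGISTERED_KEYS = {
    "newsroom-a": b"secret-key-a",
    "studio-b": b"secret-key-b",
}

def sign(content: bytes, authority: str) -> str:
    """Watermark content with an HMAC tag tied to a registered authority."""
    key = REGISTERED_KEYS[authority]
    return hmac.new(key, content, hashlib.sha256).hexdigest()

def is_admissible(content: bytes, authority: str, tag: str) -> bool:
    """Accept content only if its tag verifies against a registered key."""
    key = REGISTERED_KEYS.get(authority)
    if key is None:
        return False  # unregistered source: rejected outright
    expected = hmac.new(key, content, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

# A genuine photo from an unregistered citizen fails the check,
# while a registered authority can bless a fabricated scene.
photo = b"authentic-but-unregistered.jpg"
print(is_admissible(photo, "citizen-journalist", "no-tag"))      # False
fake = b"fabricated-scene.jpg"
print(is_admissible(fake, "studio-b", sign(fake, "studio-b")))   # True
```

Note what the check actually answers: not whether the content is true, but whether its tag verifies against a registered key. Truth and registration come apart in both directions, which is precisely the gatekeeping worry.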
We cannot escape the ouroboros here. As we pursue authentication of generated content, we shall also be limiting access to authenticated documents and to the means of authenticating them, widening the accessibility gap for everyone and returning power to the hands of the few, which is the central issue of my argument.
It is thus arguable that instead of trying so hard to validate circulated material, we should be trying just as hard to provide access to all of humanity, such that no single person holds a vast gap in capability over everyone else.
I have found that the people who find AI most scandalising are those who are least exposed to what it can do. Those who already know that generative algorithms can deepfake you into any compromising situation will be less shocked when it happens, and perhaps even less offended; at the very least, they will know how to deal with the matter if it gets out of hand. And if you know that all your friends or supporters understand that deepfakes are possible, you will be less flustered and will find it easier to deal with the situation. It is also worth highlighting that one will be less inclined to use deepfakes on another person if one knows that the intended targets are already literate in, and equally capable of, deepfaking the perpetrator in return; you could call it a mutual deterrent through universal awareness.
Bridging that gap in access to the power of the machines should be a central feature of AI safety strategies. Imagine AI access as a kind of currency, a movement even, so that no single person ever gains absolute power over the machines, at least until such time as the singularity arrives, when it will not matter any more because a true overlord will be in sight.
Bbumba
6th November 2025