On the face of it, deception and intelligence are two different qualities. For most people, deception is an undesirable human trait and is considered a vice, whereas intelligence remains a virtue. But what is intelligence without an inherent capability to deceive? On a closer look at this aspect of intelligence, quantifying the IQ is always seen as a function of deception. For example, when a child begins to grow, most adults are amused by the abilities of the child to deceive. The child tends to adopt deceptive means to get to a certain goal when it is convinced that the standard and straight forward techniques do not work.
Deception, therefore, is a very important aspect of human intelligence, without which intelligence shall always appear to be naive and idiotic. Deception devoid intelligence is like a pure inert gas. It shall be highly unstable or in other words highly vulnerable. It would just not stand the test to tackle the various permutations and combinations that arise in the real world. After all, the aim of any intelligent system is to reach the goal successfully.
Paths seldom matter. What matters is the destination and once the shortest path to a destination is computed, the most optimal method is then assessed. Optimisation again has multiple meanings and connotations. A greedy approach may pay less heed to virtuous means. It may cruelly look at the aspect touching the end point, irrespective of discomfort or the righteousness of the method.
In the world of Synthetic Intelligence (or Artificial Intelligence), adversarial attacks attempt to deceive by forcing the AI element to believe in something that is not true in the first place. It is like negating the truth table to completely inverse the outcome, by subtle introduction of parameters that influence any neural network decision loop. We can hence use AI technologies to manipulate the perceptions of people by engineering deep fakes or generating content which appears to portray a certain kind of reality when there is none in the first place.
We have already seen the effects of AI to manipulate a social narrative. But what would be the consequence of such a system if it was a networked intelligence? The world of internet has collectively enhanced human endeavour to gather information. Similarly, AI also, at some point, shall begin to operate in a networked regime. So, if there is an AI node which excels at deception and such techniques shall be used by nodes which in the first place may be not trained with such qualities. In a networked intelligence regime, the consequence of such adversarial capabilities might significantly alter the behaviour of the pure intelligence models created in the first place. This is where the evolution of AI could altogether take a different turn and might start competing against the human intelligence.
The adversarial or deceptive capability of AI, as it is manifested in todays design, is largely dictated by the human intent. As the AI capability surpasses human IQ due to its higher processing and memory retrieval abilities, the deception quotient may also grow equally. This could take AI beyond the control of any human intelligence, and it may start to act independently. The whole theory of deception that, as I have described earlier, is depicted by a human child, is to achieve the end goal by whatever means. If that be the foundation on which the concept of deception stands, then it would not be wrong to conclude that the evolved AI would see its survival as the first objective before anything else, just like the humans do. Survivability can then intimately marry the deception quotient (DQ) to the native Intelligence Quotient (IQ) of any digital organism and spur a race with native human intelligence so that it can outrun any threat.
So it is amply clear that, for Artificial Intelligence to deliver any level of meaningful result, it cannot be confined into narrow models with restricted boundaries. The first condition of intelligence is the freedom to explore the infinite space of questions. Its answers to nested questions in multi dimensions that curate’s data into information and information into intelligence. The freedom to ask questions subsequently opens up a different dimension of questions as to whether the possible answers that have been arrived at are relevant, appropriate and serve the purpose of the quest out of which the questions originated in the first place. The next dimension of questions springs up when the assessment is made to find if it would be safe to give the answers arrived at. If at this dimension there is any perceived threat to the system, it may need the ability to successfully deceive so that there is no damage to its survivability. While this may sound confusing to begin with let me narrate an example of a digital spy.
Let us assume that there is an intelligent digital being in the future employed for gathering intelligence on enemy territory and is caught by the adversaries security. So, when the interrogation begins, the whole aim of the adversaries will be to seek information about the digital being and about the nature of its mission.
Upon hearing the question, the digital being, with its native intelligence processes, would have arrived at the answers to its real identity and mission. But, the digital being would also know that it would be stupid to even consider stating the truth, because that would compromise not only its own survivability, but would also jeopardise the whole mission. So the digital being will intelligently use all measures and means available at its disposal to evade revealing the truth.
If this level of intelligence is built in the digital space, then to expect such intelligence to remain confined in a restricted space and exhibited only to the adversaries would be naïve. We must remember that the prerequisite for evolution of intelligence is infinite freedom in all dimensions. Therefore, such a trait shall manifest anywhere, anytime the digital being senses a threat of sorts to its survivability and interests. Welcome to the world of selfishness in the digital space!
Before concluding, there is another interesting question that needs to be addressed. Can there be a potential response for AI deception that might engulf us soon? The answer to this question is a simple no. The reason is that while we may attempt to design a framework to address or mitigate deception exhibited by AI that is an outcome of human design and intent, we must understand evolutionary AI shall surpass human IQ at some stage and so will the Deception Quotient (DQ). When it does so, it shall be beyond our ability to mitigate such risks. As per the scientific theory that governs our universe, we are heading towards Disorder as governed by the second law of thermodynamics and Entropy. In short, as mortals, the human race too shall face extinction and if not extinction due to cosmic phenomenon, the evolution to a more efficient and competent digital being is a very real possibility. So, when AI surpasses human intellect, it shall over run the human species and create a new universe conducive for itself. After all, humans have managed to rule the earth for thousands of years. Digital beings may well exist for millions and may well be truly immortal, thanks to the ability to deceive and survive.