TECHNOLOGICAL singularity, an idea first advanced by mathematician John von Neumann, is a hypothetical point at which technological growth becomes self-sustaining, irreversible, and practically beyond human control. The years 2030 and 2045 represent two extremes of estimates of when the “singularity” will happen.
Physicist Stephen Hawking expressed fears that AI (artificial intelligence) could mean human extinction. Visionary inventor Elon Musk, too, has called it a potential existential threat. These concerns were reinforced, just a few days ago, in an open letter signed by a thousand tech leaders calling for greater regulation of AI. The question is no longer whether AI will overtake human intelligence, but only when this will happen.
Even in its comparative infancy, AI has demonstrated immense capabilities to perform, even replace, some human functions. What simple automation — and now, increasingly, sophisticated robotics — is doing on the shop floor in manufacturing, AI is beginning to do in offices. AI with Generative Pre-trained Transformer (GPT) has already moved up the value chain and demonstrated its ability to not only edit, but also to author a report or write a story given a few basic facts or pointers. In a rudimentary manner, it could well replace sub-editors and coders. Creative writing too is part of its skillset.
GPT-4 and ChatGPT are creating waves in industry, with companies rushing to see how they can best be used, even as others are preparing to face serious disruptions in their business models. Many are worried about jobs that may disappear and some about the ethical issues and legal tangles that may ensue. There is discussion about the biases that may be unintentionally introduced by its creators and — because it learns from data (the more, the better) — by the datasets it uses.
A household example of its capabilities: feed it a picture of some ingredients from your kitchen and it will suggest a variety of recipes you can make from them. In a recent experiment at the Indraprastha Institute of Information Technology, Delhi (IIIT-D), students were pitted against AI in a contest of diverse tasks: painting a picture, composing song lyrics, and writing a recipe for a given prompt. The output was assessed by eminent judges from the relevant fields, who were not told which entries were human efforts and which AI-created. Even in its present early form, the AI-generated paintings (from DALL-E 2) fared well with certain themes (around 30 percent).
The song composed and presented by two young musicians from AI-generated (ChatGPT) lyrics, written in the style of Javed Akhtar, was among the winners. Similarly, the recipes generated by Ratatouille, an AI product of research on computational gastronomy (yes, with a “g”: a name coined and popularised by Prof. Bagler of IIIT-D), tricked expert chefs into thinking they were authentic.
Clearly, as the pace of technological progress accelerates, the capabilities of AI are going to be truly mind-boggling. Some of this will contribute to greater human good, as technology has generally done. Historically, it has also resulted in more and higher quality livelihoods (better-paid, less arduous), despite temporary disruptions in the job market. With technological singularity, might we see a change in this too? With AI as a competitor of humans, will jobs shrink?
After all, the efficiency, speed, continuous learning, and tireless 24/7 work capability of AI will be superior to that of humans for many functions. Also, AI and robots don’t fall sick, take days off, or agitate about something. If a robot (or programme) is faulty, it can be removed and replaced immediately, without worries about organisational motivation, retirement or retrenchment benefits, and the like. Upgrades and updates are possible quickly and at short notice. The result could be that humans become redundant for many production and service roles.
In such a scenario, what role would humans have a few decades from now? A good option may be to change the rules of the game. Instead of human intelligence competing with AI, our best bet may be to get AI to compete against human stupidity!
This sounds ludicrous but is a serious proposition, because one area in which humans have an edge is in disruptive innovation, invention, and imagination. These are vital, but facets that AI is not adept at — not yet, at least. And these three i’s inevitably come from asking “stupid” questions (like why an apple falls down, or why the sky is blue), or from “stupid” dreams (imagining heavy objects flying and humans travelling through space to reach the stars), or “stupid” thoughts (inventing the printing press, light bulb, or computer).
Creativity is another important human characteristic, and one to which we may devote ourselves in a jobless but prosperous future. But, as the IIIT-D contest showed, this too is an area in which AI may excel. Curiosity, a vital seed for discovery and invention, may, though, continue as a human preserve for some time.
As AI advances further, the best future may be a human-plus-AI one, rather than a zero-sum contest. The danger in this is the vast scope for misuse of AI by pranksters, ideological or terrorist groups, companies and organisations, and even nations. As in biological research, AI could do great good (think vaccines), but could also be misused (like viruses engineered to cause disease).
Further, it is not impossible that some AI bot or app turns “rogue” and breaks free of the boundaries set by its human creators, in a modern-day rebirth of Frankenstein’s monster. More insidious is the way AI can create fake news and facts, complete with images and voices that are indistinguishable from the real ones. As these go viral, concepts like “facts”, “evidence” and “truth” may lose their very meaning. This raises deep ethical and philosophical issues.
Like climate change, irrespective of who causes the problem, the impact of AI will be global and even species-threatening. Here too, there is need for urgent and extensive debate about the implications, impact, and correctives; and, consequently, the desirability (inevitability?) of some sort of regulation, enforced globally.
Kiran Karnik is a public policy analyst and author. His most recent book is ‘Decisive Decade: India 2030, Gazelle or Hippo’.