Artificial Intelligence in 2024: Safe or Not?
A safe and equitable future: is that what we can hope for from AI in 2024? Maybe, with careful ethical oversight, yes, indeed.
Artificial Intelligence will lead to new evils in 2024 and reinforce existing stereotypes and biases, but it also has the capacity to do good.
“The genie has left the bottle” is how a former tech CEO responded to my anxious queries about the ability of artificial intelligence (AI) to cause serious harm. Such fears are commonplace. None other than Tesla chief Elon Musk, who helped kick-start the commercialisation of AI with his initial funding of OpenAI in 2015, has expressed fears about self-learning AI systems turning hostile to the human species, going so far as to say that AI “poses a greater threat than nuclear weapons”. And this is a man whose self-driving cars, built on AI and big-data analytics, are among the best use cases of the new technology.
In March this year, Musk joined several well-known tech and academic figures, including Apple co-founder Steve Wozniak and Sapiens author Yuval Noah Harari, in signing an open letter calling on all labs to pause the development of advanced AI systems for six months, to give governments time to draft rational regulation.
So, is Artificial Intelligence the great breakthrough it promises to be, transforming fields ranging from healthcare to transportation and education? Or is it an ogre that threatens the survival of human civilisation?
Much like the internet, which grew out of the US Defense Department’s ARPANET and is commonly dated to that network’s adoption of TCP/IP in 1983 but became available for ordinary use only in the early ’90s, AI too was in the labs for long. AI research began as an academic discipline in 1956, a full 66 years before the technology reached ordinary users. Thanks to the availability of enormous computing power and advances in artificial neural networks, AI has been accessible on tap ever since OpenAI launched its ChatGPT model in November last year, a system able to respond to users’ questions conversationally, answer follow-ups and engage much as another well-informed person would.
That launch was a pivotal moment in the history of AI, and the technology has been the subject of much heated debate ever since.
Fears about technology taking over the world aren’t new. Films like 1968’s 2001: A Space Odyssey, the 1973 thriller Westworld and 1979’s Star Trek: The Motion Picture warned of the perils of AI gaining too much power. The cinematic alarms reflected society’s concerns about technology. When the world wide web hit the mainstream in the 1990s, a number of experts warned about the potential for criminals to wreak havoc by exploiting its many loopholes. As subsequent events showed, the internet’s capacity for abuse went far beyond even their wildest fears. Invasion of personal privacy, phishing, identity theft and espionage were only some of the crimes the web facilitated. Equally heinous was its use in influencing elections and holding corporations and governments to ransom.
Yet it would take a real nihilist to argue that, on balance, the internet has done society more harm than good. Sure, many professions, such as stenotypists, bank tellers and travel agents, have largely been consigned to history. But equally, newer jobs like web developers, data miners, SEO experts, software engineers and user experience developers have emerged. Estimates indicate that for every traditional job eliminated by the internet, 2.61 new jobs have been created in its place.
Artificial Intelligence, too, will lead to new evils. It will also reinforce existing stereotypes and biases. But beyond that, its capacity to do good is increasingly coming to the fore. In medicine, for example, AI is already a disrupter: machine learning is being used for imaging analysis, more precise diagnosis, robotic surgery, reducing medical errors and spotting trends in large epidemiological datasets.
The unrestrained quest for scientific progress has rung alarm bells in the past, especially in the wonder domain of stem cell research, where pluripotent cells were held out as a panacea for all ills. But eventually caution pulled back the frontiers, with many governments banning the harvesting of stem cells from blastocysts on religious or moral grounds. China, in fact, has among the most lax regulations on stem cell research, and many scientists have flocked there to work.
A safe and equitable future: is that what we can hope for from Artificial Intelligence? Maybe, with the kind of careful ethical oversight that bioethics brought to stem cell research, yes, indeed.