AN important global debate is underway about the disruptive impact of new technology. There is no doubt modern technology has been a force for good and responsible for innumerable positive developments: empowering people, improving lives, increasing productivity, advancing medical and scientific knowledge and transforming societies. Technological developments have helped to drive unprecedented social and economic progress. But the fourth industrial revolution has also produced advanced technologies that are creating disruption, new vulnerabilities and harmful repercussions, which are not fully understood, much less managed. A digitalised world faces the challenge of cybersecurity as threats rise across the globe. Data theft and fraud, cyberattacks, and breaches of critical systems such as electricity networks and financial markets are all part of these rising risks.
Communication technology now dominates our lives like never before. It brings untold benefits but also presents new dangers. The phenomenon of fake news, for example, is not new. But its omnipresence today has much to do with digital technology, which has produced a proliferation of information channels and an expansion of social media. Online platforms have become vehicles for the spread of misinformation. Fake news circulates easily due to the magnifying power of social media in a largely unregulated environment. Anonymity on social media platforms gives trolls and purveyors of false stories the assurance that they will not be held accountable for their lies or hate messages. So fake news is posted on social media without fear of retribution. ‘Deepfakes’, videos doctored with the help of artificial intelligence (AI), are now commonly used to mislead and deceive.
The profit motive and business model of social media companies prevent them from instituting real checks on divisive and sensational content, irrespective of whether it is true or false. That means ‘digital wildfires’ are rarely contained. Digital technology is also being abused to commit crimes, recruit terrorists and spread hate, all of which imperil societies. This presents challenges to social stability in what is now called the post-truth era.
Digital technology is also fuelling polarisation and divisiveness within countries. Studies have pointed to its disruptive impact on political systems and democracy. In an article in the European Journal of Futures Research in March 2022, the authors wrote that “In times of scepticism and a marked dependence on different types of AI in a network full of bots, trolls, and fakes, unprecedented standards of polarisation and intolerance are intensifying and crystallising with the coming to power of leaders of dubious democratic reputation”. The connection between the rise of right-wing populist leaders and their cynical but effective deployment of social media is now well established.
New technologies present opportunities and dangers for nations and people.
Artificial intelligence, or machine intelligence, presents many dangers, from the invasion of privacy to the compromise of security on multiple fronts. The biggest threat posed by autonomous weapons systems is that they can take decisions, and even strategies, out of human hands. They can independently target and neutralise adversaries and operate without the benefit of human judgement or a thoughtful calculation of risks. Today, AI is fuelling an arms race in lethal autonomous weapons in a new arena of superpower competition.
The Age of AI: And Our Human Future, co-authored by Henry Kissinger, Eric Schmidt and Daniel Huttenlocher, lays bare the dangers ahead. AI has ushered in a new period of human consciousness, say the authors (Schmidt is Google’s former CEO), which “augurs a revolution in human affairs”. But this, they argue, can lead to human beings losing the ability to reason, reflect and conceptualise. It could in fact “permanently change our relationship with reality”.
Their discussion of the military uses of AI and how it is used to fight wars is especially instructive. AI would enhance conventional, nuclear and cyber capabilities in ways that would make security relations between rival powers more problematic and conflicts harder to limit. The authors say that in the nuclear era, the goal of national security strategy was deterrence. This depended on a set of key assumptions: the adversary’s known capabilities, recognised doctrines and predictable responses. Their core argument about the destabilising nature of AI weapons and cyber capabilities is that their value and efficacy stem from their “opacity and deniability and in some cases their operation at the ambiguous borders of disinformation, intelligence collection and sabotage … creating strategies without acknowledged doctrines”. They see this as leading to calamitous outcomes. They note the race for AI dominance between China and the US, which other countries are likely to join. AI capabilities are challenging the traditional notion of security, and this intelligent book emphasises that the injection of “nonhuman logic to military systems” can result in disaster.
Advanced new-generation military technologies are a source of increasing concern because of their wide implications for international peace and stability. The remote-control war waged by US-led Western forces in Afghanistan over two decades involved the use of unmanned aerial vehicles, or drones. This had serious consequences and resulted in the killing of innocent people. The use of a cyberweapon, the Stuxnet computer worm, by the US to target Iranian facilities in 2007 and degrade the country’s nuclear programme was the first attack of its kind. More recently, the Russian and Ukrainian militaries have been using remotely operated aerial platforms in the Ukraine conflict. Reliance on technology can confront countries at war with unexpected problems. For example, frontline Ukrainian soldiers have faced outages of the satellite internet service they depend on, outages reportedly linked to restrictions meant to prevent Russian forces from using the same technology. This digital disruption is reported to have caused a crucial loss of communication between Ukraine’s military forces.
Despite the risks and dangers of such new technologies, there is no international effort aimed at managing them, much less regulating their use. There is no move by the big powers for any dialogue on cyber and AI arms control. If the global internet cannot be regulated and giant, unaccountable social media companies continue to rake in excessive profits, there is even less prospect of mitigating the destabilising effects of cyber and AI-enabled military capabilities.
The writer is a former ambassador to the US, UK & UN.
Published in Dawn, October 17th, 2022