
Need for Regulation in Relation to Artificial Intelligence and Robotics

By Gauri Gupta

Vivekanand Institute of Professional Studies, GGSIPU

“Artificial Intelligence is the science and engineering of making intelligent machines, especially intelligent computer programs.”

- John McCarthy, father of AI

Artificial Intelligence (AI) is concerned with understanding the nature of human intelligence and designing intelligent artefacts that can perform tasks which, when performed by humans, are said to require intelligence. It refers to the ability of a computer or a computer-enabled robotic system to process information and produce outcomes in a manner similar to the human thought processes of learning, decision making and problem solving. The goal of AI is to develop systems capable of tackling complex problems in ways similar to human logic and reasoning. The term “artificial intelligence” thus means ‘investigating intelligent problem-solving behaviour and creating intelligent computer systems.’

The notion of an ‘intelligent machine’ naturally leads to robots and robotics, which is why the two fields are so closely associated. One might argue that not every machine is a robot, and that artificial intelligence is equally concerned with virtual agents.

A clear separation between the fields emerged in the 1970s, when robotics became more focused on industrial automation, while artificial intelligence used robots to demonstrate that machines could also act in everyday environments.

Rapid advances in artificial intelligence are raising risks, since malicious users are already exploiting, or will soon exploit, the technology to mount automated hacking attacks, cause driverless car crashes or turn commercial drones into targeted weapons, a new report warns.

Experts claim that the malicious use of AI poses imminent threats to digital, physical and political security by allowing for large-scale, finely targeted, highly efficient attacks.

At the same time, AI is considered a powerful force for unlocking all manner of technical possibilities.

Why does the government need to regulate Artificial Intelligence?

While the rapid progress of the technology should be seen through a positive lens, it is important to exercise caution and introduce worldwide regulations for the development and use of AI. Constant research in the field, in addition to giving rise to increasingly powerful applications, is also increasing the accessibility of those applications. The use of certain technologies should be regulated, or at the very least monitored, to prevent the misuse or abuse of the technology towards harmful ends. The need for regulating AI research and application is becoming increasingly obvious.

  1. AI research in recent years has produced numerous applications and capabilities that, not long ago, were reserved for the realm of futuristic fiction. It is important to realise that AI, like any other form of technology, is a double-edged sword. If highly advanced and complex AI systems are left uncontrolled and unsupervised, they run the risk of deviating in behaviour and performing tasks in unethical ways. We are currently at more risk of AI doing things wrong than of it doing wrong things.

  2. It is important for developers to exercise more caution and care while creating these systems. Research is being performed in all corners of the world, and there is no way to determine what goes on in each of these places. Most developers try to create systems and test them rigorously to prevent mishaps, but they may compromise these aspects while focusing on performance and on-time delivery of projects. This can lead to AI systems that are not fully tested for safety and compliance.

  3. AI is a revolutionary technology that can be of great advantage to humanity, but it also holds the potential to cause massive and irreparable damage to human civilization.

Proponents of AI regulation such as Stephen Hawking fear that AI could destroy humanity if we are not proactive in avoiding the risks of unfettered AI, such as ‘powerful autonomous weapons, or new ways for the few to oppress the many.’ He sees regulation as the key to allowing AI and humankind to co-exist in a productive future.

In 2015, 20,000 people, including robotics and AI researchers, intellectuals, activists and Stephen Hawking, signed an open letter, presented at the International Joint Conference on Artificial Intelligence, that called for the UN to ban further development of weaponised AI that could operate ‘beyond meaningful control.’

Our society has already been impacted by AI algorithms deployed by financial institutions, employers and government systems, among others. These create significant and serious issues in people’s lives. AI is developing rapidly, while government regulation moves at a slow pace.

Some of the AI-related risks and issues which mandate the need for regulation are as follows:

  1. Autonomous weapons: With advancements in AI, systems can be programmed to do something dangerous, as is the case with autonomous weapons. It is even plausible that the nuclear arms race will be replaced with a global autonomous weapons race. Russia’s President Vladimir Putin said, ‘Artificial Intelligence is the future, not only for Russia, but for all humankind. It comes with enormous opportunities, but also threats. Whoever becomes the leader in this sphere will become the ruler of the world.’

  2. Invasion of privacy and social grading: Due to developments in technology, it is now possible to track and analyse an individual’s every move online, as well as when they are going about their daily business. Cameras are installed everywhere, and facial recognition algorithms know who you are. This is not only an invasion of privacy but can very quickly turn into social oppression.

  3. Superintelligence: As an AI system becomes more powerful and more general, it might surpass human performance in several domains. Many researchers believe it could be as transformative socially, economically and politically as the Industrial Revolution. This could lead to extremely positive developments, but could also prove catastrophic and lead to serious safety and security issues.

  4. Expansion of existing and new threats: The cost of threats and attacks may be lowered by the scalable use of AI systems to complete tasks that would ordinarily require human labour, intelligence and expertise. New threats may arise through the use of AI systems to complete tasks that would otherwise be impractical for humans. In addition, malicious actors may exploit the vulnerabilities of AI systems themselves.

Guidelines for regulating AI applications

The first place to start is to set up regulations against AI-enabled weaponry and cyberweapons.

‘A problem with regulating AI is that it is difficult to define what AI is.’

Secondly, AI is subject to the full gamut of laws that apply to its human operators.

Thirdly, AI shall clearly disclose that it is not human; that is, AI systems should be identified as machines and not people. This is particularly important since we have all seen the ability of political bots to comment on news articles and generate propaganda and political discord.

The fourth guideline is that AI shall not retain or disclose confidential information without explicit prior approval from the source. This is a necessity, as it safeguards individual privacy and provides protection against the misuse of data. The fifth and most general rule is that AI must not increase any bias that already exists in our systems.

There will probably be AI applications introduced in the future that may cause harm and for which no regulatory body yet exists. It is up to us to identify such applications as early as possible and to designate a regulatory agency to take that role on.

We should all recognise that rules and regulations serve a purpose: to protect society, and in turn individuals, from harm. One platform where such conversations can take place is organisations devoted to AI.

This article has put forth certain risks that arise as a result of the rapid developments in AI. Solving these problems will require collective action both domestically and internationally, which has always been difficult, especially at the international level. Yet there are historical instances in which nations managed to find ways to stave off the destabilising effects of emerging technologies, from the Anti-Ballistic Missile Treaty to the Montreal Protocol. This becomes possible when leaders, along with individuals, realise that structural risks are also collective risks. The fact that nations’ fates are fundamentally interlocked is a source of complexity in the governance of AI, but it is also a source of hope.
