
The booming growth of Artificial Intelligence (AI) is changing people’s lives, businesses, and even whole industries. Investors are not afraid to spend vast sums on AI startups, and technology giants like Microsoft, Google, and Amazon are opening research labs. While engineers prepare to meet the technical challenges, economists calculate the profits AI will bring to economies, and many researchers keep an eye on the potential risks of artificial intelligence, particularly the ethical issues of interaction between humans and artificial intelligence.

In search of rules

The idea that artificial intelligence can go out of control or harm people is not new. For years it has been explored by writers and science-fiction filmmakers. The fear of what machines might do to humanity has inspired people to create rules for AI.

Perhaps the first such principles were formulated by the American science-fiction writer Isaac Asimov. According to his “Three Laws of Robotics,” a robot may never injure a human through action or inaction, a robot must obey human orders, and a robot must protect its own existence; each law applies only as long as it does not conflict with the laws before it.

Later, most experts agreed that these principles were not sophisticated enough to resolve all the issues of human-AI interaction. Moreover, according to Ben Goertzel, AI theorist and chief scientist of the financial prediction firm Aidyia Holdings, the point of Asimov’s laws was to teach people “how any attempt to legislate ethics in terms of specific rules is bound to fall apart and have various loopholes.” Goertzel may be too categorical, but Asimov’s laws indeed offered no values or design guidance for the people developing the technologies. Nor did they say anything about how humans should change their way of thinking and living in a new environment where AI plays an important role.

Now the enormous strides of artificial intelligence, along with widespread fear of advanced AI machines, have prompted theorists and the biggest hi-tech companies to look for new rules for artificial intelligence. The Partnership on AI has suggested eight principles of its own, including calls to “ensure that AI technologies benefit and empower as many people as possible” and to “remain socially responsible, sensitive, and engaged directly with the potential influences of AI technologies on wider society.”

Another substantial document was developed by thought leaders from different fields at this year’s Asilomar Beneficial AI Conference. The paper, called the Asilomar AI Principles, includes 23 statements to guide the future development of artificial intelligence. Some big-name companies, like Google, have already set up ethics boards to monitor the development and deployment of their artificial intelligence products.

Today’s challenges

In the past, the biggest concern about AI was the existential threat that intelligent machines pose to people. In recent decades, however, the discussion has focused on more tangible issues. Can we trust robotic cars just because they are programmed to avoid crashes? Should we care about the suffering of “feeling” machines? Are robots reliable enough to decide, in the heat of battle, whom to kill?

In an article on the key ethical questions of artificial intelligence, the well-known international organization World Economic Forum (WEF) lists nine issues that “keep experts up at night,” among them unemployment, inequality, singularity, security, and robot rights.

As the WEF observes, and many experts would agree, one of the critical issues around AI is that robots might take away the jobs of millions of people. Companies have already started “hiring” robots: at Ford’s factory in Cologne, Germany, humans share the floor with robots, and Apple’s supplier Foxconn Technology Group plans to replace 60 thousand of its factory workers in China with artificial employees.

The highest risks still lie in the area of safety and trust. Speaking of risks, we usually imagine two scenarios: an AI that is programmed to benefit people but chooses a destructive method to achieve its goal, and an AI that is used maliciously to harm people. In both cases the stakes are high. In response to this dread, a few years ago 272 experts from the scientific and tech communities signed a letter calling for a ban on fully autonomous weapons. Among them was Stephen Hawking, the well-known physicist and author of “A Brief History of Time.” Hawking had earlier raised fears that robots powered by artificial intelligence could overtake humans within the next 100 years. Elon Musk, co-founder of SpaceX and CEO of Tesla Motors, has also repeatedly stated that AI poses a significant threat to humanity.

Another area fraught with ethical issues is our relationships with AI machines. It is quite plausible that a few decades from now people will be comfortable with robots as housekeepers or nannies, and robot companionship or caregiving could make human life better and more convenient. But when robots look like humans or animals, people may become co-dependent on or over-attached to them, and other scenarios of social or emotional dependence on robots could become even more extreme.

Human+AI future

The future belongs to both humans and artificial intelligence. The combination of human creativity, emotion, empathy, and intuition with powerful AI capabilities can help society solve its biggest problems, introduce improvements, and move forward faster. When it comes to the efficiency of AI, humanity has already done a great job. But when it comes to the ethics of interaction between humans and AI, our track record is rather poor.

John C. Havens, Executive Director of the Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems, insists that programmers have to “implement ethical standards from the operating system level up.”

Last year the organization headed by Havens issued the report “Ethically Aligned Design,” which calls on AI creators to think ethically when developing software and hardware with artificial-intelligence components. The authors of the report, like many other AI theorists, call for more transparency, privacy, and security in AI devices. It is important that AI can detect new threats and create the right protections as it evolves.

One more way to improve the interaction between humans and machines is to involve more people in the development of AI. The idea belongs to MIT Media Lab associate professor Iyad Rahwan, who is polling the public through an online test to find out what decisions people would want self-driving cars to make. The same approach could be applied in many other industries, so that people feel AI’s decisions reflect their values.

We should not let Silicon Valley be the mission control for humanity, says futurist Gerd Leonhard. And he is totally right!

But there is one more important reason why everybody should start thinking about the ethics of human-AI interaction immediately. Although today’s AI consists mostly of toys, appliances, and services under human control, it is already becoming independent. Consider the American Predator drones that can be piloted from the other side of the planet, or Google’s AI company DeepMind, which recently created a program that tackles new problems using the knowledge of problems it has already solved. This algorithm can learn and retain knowledge much like a human!
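For curious readers: the DeepMind result alluded to here is generally associated with a technique called “elastic weight consolidation,” in which a network learning a new task is penalized for changing weights that mattered for an old task. The snippet below is a minimal sketch of that idea in Python with PyTorch, under that assumption; it is not DeepMind’s actual code, and the names ewc_penalty, old_params, and fisher are hypothetical.

    import torch

    def ewc_penalty(model, old_params, fisher, lam=100.0):
        # old_params: dict name -> tensor, weights saved after the old task
        # fisher:     dict name -> tensor, per-weight importance estimates
        # lam:        how strongly old knowledge is protected (assumed value)
        loss = torch.zeros(())
        for name, p in model.named_parameters():
            loss = loss + (fisher[name] * (p - old_params[name]) ** 2).sum()
        return 0.5 * lam * loss

    # While training on a NEW task, the total loss becomes
    #   total_loss = new_task_loss + ewc_penalty(model, old_params, fisher)
    # so the network learns the new task while "remembering" the old one.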

AI is on its way to becoming self-aware, and the future of humanity depends on the system of values being designed for it now. Artificial intelligence is a creation of human intellect, and it is humanity’s responsibility to supply it with the right values and to ensure that machines and people work together, not against each other. To understand what this system of values should look like, people first have to raise their own awareness of ethics. And this is proving to be a challenge in itself.