How can we protect ourselves from artificial intelligence?


The idea that artificial intelligence will inevitably lead to a scenario in which machines rebel against humans is quite popular. Artificial superintelligence seems like the greatest possible threat, and fantastic stories in which we are no longer needed in a world that belongs to technology have never lost their appeal.

Is this inevitable?

The literary and cinematic image of intelligent computer systems from the 1960s helped shape and generalize our expectations of a future in which we take the path of creating computer intelligence superior to our own. AI has obviously already surpassed humans at certain narrow tasks that require complex computation, but it still lags behind in many other capabilities. How do we simultaneously increase the power of this formidable tool and keep our position above it?

Since artificial intelligence already plays, and will continue to play, a large role in our future, it is extremely important to explore how we can coexist with these complex technologies.

Kevin Abosh, founder of Kwikdesk, a company working on data processing and artificial intelligence, shared his thoughts on the matter. He believes artificial intelligence should be fast, inconspicuous, reliable, literate and ethical. Yes, ethical.

Ethical framework

The concept of an artificial neural network, created in the likeness of a biological one, is nothing new. Units of computing power called neurons connect to one another to form a network. Each neuron applies a learned function to its inputs before passing data on to other neurons, until an output neuron is activated and its value can be read. Expert systems, by contrast, rely on humans who "teach" the system, seeding it with knowledge. Logical inference engines look for matches, make choices, and apply "if this, then that" rules against the knowledge base; in the process, new knowledge is added to it. A pure neural network learns from nonlinear experience and has no need of expert-seeded knowledge. Hybrid networks have been shown to improve machines' learning capabilities.
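The two approaches described above can be sketched in a few lines. This is a minimal illustration, not any particular product: the sigmoid neuron stands in for the learned units of a neural network, and the forward-chaining loop stands in for an expert system firing "if this, then that" rules against a knowledge base. All names and facts here are made up for the example.

```python
import math

def neuron(inputs, weights, bias):
    """One artificial neuron: a weighted sum passed through a sigmoid activation."""
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-z))

# Expert-system side: "if this, then that" rules over a knowledge base.
knowledge = {"temperature_high": True}
rules = [
    # (condition fact, conclusion fact)
    ("temperature_high", "fan_on"),
    ("fan_on", "power_draw_high"),
]

# Forward chaining: keep firing rules until no new facts are derived.
changed = True
while changed:
    changed = False
    for condition, conclusion in rules:
        if knowledge.get(condition) and not knowledge.get(conclusion):
            knowledge[conclusion] = True
            changed = True

print(sorted(fact for fact, held in knowledge.items() if held))
```

A hybrid system, in the spirit the text describes, would let the expert-seeded rules constrain or interpret what the learned neurons produce.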

Now let's look at the ethical problems of such systems. What follows is in the first person.


"Evil code" against a good code

An author uses words to immerse the reader in a fictional world, and each does it differently, but great authors do it with elegance. A software engineer writes lines of code that process and move data. He, too, can choose among many approaches, and elegant coders are the Faulkners of computer science. A driven coder focuses on packing as much as possible into short, elegant code, reducing excess to a minimum. Great code also leaves a window open for future additions, so other engineers can contribute code with their own elegance and the product evolves without problems.

At the heart of any man-made product lies intention. Things made by people are saturated with intention and, to a greater or lesser degree, carry the nature of their creator. It may be hard for some to imagine an inanimate object possessing such a nature, but many would agree that it does. The energy of intention has existed for thousands of years; it unites, divides and transforms society. Do not underestimate the power of language, and do not forget that lines of code are written in a particular programming language. I am therefore convinced that the code that becomes software running on computers and mobile devices is very much "alive".

Even without considering wisdom and spirituality in the context of computer science and the potential consequences of artificial intelligence, we can treat static code as something with the potential to "do good" or "do evil". These outcomes manifest only when people actually use the applications; it is the choices people make that shape an application's nature. Those outcomes can be judged within a local system, by their positive or negative impact on that system, or against a set of predefined standards. Nevertheless, just as a journalist cannot be 100% impartial while writing an article, an engineer, willingly or not, adds the nature of his intentions to the code. Some may object that writing code is a logical process, and that true logic leaves no room for nature.

But I am sure that the moment you create a rule, a block of code, or an entire program, it becomes imbued with an element of human nature. With each additional rule, that imprint deepens: the more complex the code, the more of this nature it carries. Hence the question: "Can the nature of code be good or evil?"

Obviously, a virus developed by a hacker to maliciously break through a computer's defenses and wreak havoc in your life is imbued with an evil nature. But what about a virus created by the "good guys" to penetrate the computers of a terrorist organization and prevent attacks? What is its nature? Technically it may be identical to its vile twin, simply used for "good" ends. So is its nature good? This is the ethical paradox of malicious software, and we cannot ignore it when thinking about "evil" code.

In my opinion, there is code that inherently gravitates toward "evil", and there is code that is inherently biased toward good. This matters all the more for computers operating autonomously.


At Kwikdesk, we are developing an AI framework and protocol based on my design of an expert-system / neural-network hybrid, which resembles the biological model more closely than most of what has been built to date. Neurons manifest as I/O modules and virtual devices (in a sense, autonomous agents) connected by "axons": secure, separated channels of encrypted data. The data are decrypted on entering a neuron and, after certain processing, encrypted again before being sent to the next neuron. Before two neurons can communicate over an axon, they must exchange participant and channel keys.

I believe that security and separation must be built into such networks from the lowest level. A superstructure reflects the quality of its smallest components, so anything less than secure building blocks makes the entire pipeline insecure. For this reason, data must remain protected in transit and be decrypted only locally.
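The encrypted-axon idea can be sketched as a toy symmetric channel between two "neurons". Everything here is illustrative and assumed, not Kwikdesk's actual protocol: the `Neuron` class, the hash-derived keystream (a toy cipher, not production cryptography), and `exchange_key` as a stand-in for a real key exchange such as Diffie-Hellman.

```python
import hashlib
import secrets

def keystream(key: bytes, length: int) -> bytes:
    # Toy keystream derived from the shared channel key (illustration only;
    # a real axon would use an authenticated cipher, e.g. AES-GCM).
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, data: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

decrypt = encrypt  # XOR with the same keystream is its own inverse

class Neuron:
    """A 'neuron' that accepts input only over a channel whose key it holds."""
    def __init__(self):
        self.channel_keys = {}  # peer id -> shared channel key

    def exchange_key(self, peer_id: str) -> bytes:
        # Stand-in for a real participant/channel key exchange.
        key = secrets.token_bytes(32)
        self.channel_keys[peer_id] = key
        return key

    def receive(self, peer_id: str, ciphertext: bytes) -> bytes:
        # Decrypt locally on entry; a full neuron would process the value
        # and re-encrypt it before forwarding along the next axon.
        return decrypt(self.channel_keys[peer_id], ciphertext)

# Demo: exchange a key, then send one encrypted "spike" across the axon.
sender, receiver = Neuron(), Neuron()
key = sender.exchange_key("receiver")
receiver.channel_keys["sender"] = key   # both ends now hold the channel key
ciphertext = encrypt(key, b"activation:0.87")
plaintext = receiver.receive("sender", ciphertext)
```

The design point the text makes is visible in the sketch: data exist in the clear only inside a neuron, never on the channel between them.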

Implementation and guarantees

The quality of our life alongside ever-smarter machines is an understandable worry, and I am absolutely sure we must take measures to ensure a healthy future for coming generations. The threats posed by smart machines are potentially diverse, but can be broken down into the following categories:

Displacement. In the workplace, people will be replaced by machines. This shift has been underway for decades and will only accelerate. Proper education is needed to prepare people for a future in which hundreds of millions of traditional jobs simply cease to exist. It's complicated.

Security. We rely on machines completely and will continue to do so. As we increasingly trust machines while crossing from a safe zone into a potentially dangerous one, we face the risk of machine error or malicious code. Think of transport, for example.

Health. Personal diagnostic devices and networked medical data. AI will continue to advance preventive medicine and the analysis of crowdsourced genetic data. Again, we must have guarantees that these machines will not engage in harmful subversion and cannot hurt us in any way.

Fate. AI predicts with growing accuracy where you will go and what you will do. As this field develops, it will know what decisions we make, where we will go next week, what products we will buy, even when we will die. Do we want others to have access to this data?

Knowledge. Machines accumulate knowledge de facto. But if they acquire it faster than people can verify it, how can we trust its integrity?

In conclusion, I want to note that a vigilant and responsible approach to AI, one that softens the potential troubles of a supernova-like technological explosion, is our path forward. We will either tame AI's potential and pray it brings only the best to humanity, or we will burn in a potential that reflects the worst in us.

This article is based on materials from https://hi-news.ru/computers/kak-obezopasit-nas-ot-iskusstvennogo-intellekta.html.
