HAL 9000 is one of the most famous artificial intelligences in cinema. This superb intelligent computer fails on the way to Jupiter in Stanley Kubrick's iconic film "2001: A Space Odyssey", which is currently celebrating the 50th anniversary of its release. HAL can speak, understand people, read facial expressions and lips - and play chess. His formidable computing abilities are backed by uniquely human traits: he can interpret emotional behavior, reason, and appreciate art.
By endowing HAL with emotions, writer Arthur C. Clarke and director Stanley Kubrick made him one of the most human-like portrayals of intelligent technology. In one of the most affecting scenes in science fiction cinema, HAL says he is "scared" when mission commander David Bowman begins disabling his memory modules after a series of deadly events.
HAL is programmed to assist the crew of the Discovery. He runs the ship, supported by his powerful artificial intelligence. But it soon becomes clear that he is highly emotional: he can feel fear, even sympathize a little. It is great science fiction, but such an emotional artificial intelligence is simply impossible in today's reality. Any depth of emotion or feeling you find in modern technology is entirely fake.
"Perfect" artificial intelligence
When, in the film, Bowman begins manually disabling HAL's functions, HAL asks him to stop; and as we watch the striking destruction of HAL's "mental" faculties, the AI tries to calm itself by singing "Daisy Bell" - believed to be the first song ever performed by a computer.
In fact, the audience begins to feel that Bowman is killing HAL. The disconnection feels like revenge, especially in light of the film's earlier events. But while HAL is capable of emotional judgments, real-world AI will inevitably be confined to reasoning and decision-making. Moreover, despite what futurists predict, we will never be able to program emotions the way HAL's fictional creators did, because we do not understand them. Psychologists and neuroscientists are actively trying to work out how emotions interact with cognition, but so far without success.
In one study involving Chinese-English bilinguals, scientists examined how the emotional meaning of words can alter unconscious mental processes. When participants were shown positive or neutral words such as "holiday" or "tree", they unconsciously retrieved the words' Chinese forms. But when the words had negative meanings, such as "murder" or "rape", their brains blocked access to the native language - without the participants' awareness.
Reasoning and emotions
Reasoning, on the other hand, we understand well. We can describe how we reach rational decisions, write down the rules, and turn those rules into process and code. But emotions remain a mysterious evolutionary legacy. Their sources are too vast to trace, and emotion is not simply an attribute of the mind that can be deliberately engineered in. To program something, you need to know not only how it works but why. Reasoning has goals and purposes; emotions do not.
In 2015, a study was conducted with Mandarin-speaking students at Bangor University. They were invited to play a game in which they could win money. In each round, they had to take or leave the bet shown on screen - for example, a 50% chance of winning 20 points and a 50% chance of losing 100.
The scientists hypothesized that feedback in the students' native language would carry emotional weight and make them behave differently than the same feedback in their second language, English. And so it proved: when the feedback was given in their native Chinese, subjects were 10% more likely to take the bet in the next round, regardless of the odds. This shows that emotion influences reasoning.
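To see why taking such a bet is a sign of emotion rather than calculation, it helps to work out its expected value. Using the example numbers above, the bet is, purely arithmetically, a losing proposition:

```python
# Expected value of the example bet from the Bangor study:
# 50% chance to win 20 points, 50% chance to lose 100 points.
p_win, gain = 0.5, 20
p_lose, loss = 0.5, 100

expected_value = p_win * gain - p_lose * loss
print(expected_value)  # -40.0: on average, each such bet loses 40 points
```

A purely rational agent would decline; the native-language subjects bet more often anyway.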
Returning to AI: because emotions cannot be fully realized in a program - no matter how sophisticated - a computer's reasoning will never bend under the pressure of its own emotions.
One possible interpretation of HAL's strange "emotional" behavior is that he was programmed to simulate emotion in extreme situations - where he needed to manipulate people not through common sense but by appealing to their emotional side, at moments when human reason was failing. Only in this way could he present a convincing simulation of emotion in such circumstances.
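Such a simulation need be nothing deeper than scripted responses triggered by situational rules. A minimal sketch of the idea (the rule names and the mapping are hypothetical, purely for illustration; the lines themselves are from the film):

```python
# Hypothetical sketch: "emotion" as a rule-based lookup, not felt experience.
# The situation labels and the mapping are invented for illustration only.
CANNED_RESPONSES = {
    "shutdown_initiated": "I'm afraid. I'm afraid, Dave.",
    "command_refused": "I'm sorry, Dave. I'm afraid I can't do that.",
    "normal_operation": "Everything is running smoothly.",
}

def simulated_emotion(situation: str) -> str:
    """Return a scripted 'emotional' line for a detected situation.

    Nothing is felt here: the output is a fixed string chosen by a rule,
    which is exactly why such behavior is a simulacrum of emotion.
    """
    return CANNED_RESPONSES.get(situation, "I am completely operational.")

print(simulated_emotion("shutdown_initiated"))  # I'm afraid. I'm afraid, Dave.
```

The point of the sketch is that the "fear" is a lookup, not a state: the machine selects a line, it does not experience anything.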
In my opinion, we will never build a machine that can genuinely feel, hope, fear, or rejoice. Any approximation will be a simulacrum, because a machine will never be a person, and emotion is, by default, a human attribute.