Top 10 Failure Cases of Artificial Intelligence in 2016: Pokémon Go, Microsoft Tay, and the Tesla Crash Make the List

Lei Feng.com: Only a few days remain in 2016. Over the past year, artificial intelligence has sprung up everywhere, and some have even called 2016 "the first year of artificial intelligence." Autonomous driving, speech recognition, the worldwide hit Pokémon Go... machines seem to be everywhere, and capable of anything.

At the same time, artificial intelligence also caused plenty of trouble this year, and we need to study these mistakes so as not to repeat them. Lei Feng.com recently learned that Roman Yampolskiy, director of the Cybersecurity Lab at the University of Louisville, published a paper entitled "Artificial Intelligence Safety and Cybersecurity: a Timeline of AI Failures," which catalogs examples of AI systems performing poorly over the past year. According to Yampolskiy, these failures can be attributed to mistakes the AI systems made during the learning phase or during the performance phase.

The following list, compiled by the foreign outlet TechRepublic from Yampolskiy's paper and the opinions of several artificial intelligence experts, is in no particular order. Translated and edited by Lei Feng.com:

1. Pokémon Go concentrated players in white neighborhoods

After the hugely popular Pokémon Go was released in July, many users noticed that very few Pokémon appeared in predominantly black neighborhoods. Anu Tewary, chief data officer for Mint, said the reason was that the algorithms' creators had not provided diverse training sets and had not spent time in those neighborhoods. A simple data audit, sketched below, illustrates the point.
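
Tewary's point about unrepresentative training data can be made concrete by counting where the training points actually sit. The sketch below tallies candidate spawn points per neighborhood; the data and field names are invented for illustration, and Niantic's real training data (reportedly crowdsourced from players of its earlier game Ingress) is not public.

```python
from collections import Counter

# Hypothetical audit: count candidate spawn points per neighborhood.
# The records below are made-up stand-ins, not Niantic's actual data.
spawn_points = [
    {"id": 1, "neighborhood": "Downtown"},
    {"id": 2, "neighborhood": "Downtown"},
    {"id": 3, "neighborhood": "Riverside"},
    # ...thousands more points in a real data set
]

counts = Counter(p["neighborhood"] for p in spawn_points)
total = sum(counts.values())
for neighborhood, n in counts.most_common():
    print(f"{neighborhood}: {n} spawn points ({n / total:.0%})")
# If a handful of neighborhoods account for nearly all the points, the
# training set is unrepresentative -- the problem Tewary describes.
```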

2. Tesla's semi-autonomous driving system was involved in fatal accidents

Tesla accidents made headlines around the world this year. In May, a Tesla driving in Autopilot mode crashed on a highway in Florida, killing the driver. It was the first fatality involving Tesla's Autopilot anywhere in the world. Tesla subsequently made a major update to the Autopilot software, and CEO Elon Musk said in an interview that the update would have prevented the collision. There have also been Tesla accidents in China and other countries and regions, though not all of them can be attributed directly to the AI system.

3. Microsoft's chatbot Tay spread racist, sexist, and homophobic speech

This spring, Microsoft released Tay, an AI-driven chatbot, on Twitter, hoping it would strike up pleasant conversations with young people online. Tay was designed to imitate a teenage American girl, but shortly after launch it was corrupted by users and turned into a troll that "loved Hitler and ridiculed feminists." In the end, Microsoft had to take Tay offline and announced that it would adjust the underlying algorithms.

4. Google's AlphaGo lost a game to human Go master Lee Sedol

On March 13 this year, the fourth game of the five-game man-machine match between Google's AlphaGo and Lee Sedol was played at the Four Seasons Hotel in Seoul. Lee forced AlphaGo to resign mid-game, pulling back a win. Although the AI ultimately took the match 4 to 1, the lost game showed that today's AI systems are still far from perfect.

"Perhaps Lee Sedol discovered a weakness in Monte Carlo Tree Search (MCTS)," said Toby Walsh, a professor of artificial intelligence at the University of New South Wales. Although the loss is widely viewed as a failure of artificial intelligence, Yampolskiy believes it falls within acceptable limits.
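
Walsh's remark refers to Monte Carlo Tree Search, the search procedure at AlphaGo's core. To make the idea concrete, below is a minimal sketch of plain MCTS applied to a toy game of Nim; the game, the exploration constant, and the iteration count are illustrative assumptions, not AlphaGo's actual (far more sophisticated) implementation, which guides the search with deep neural networks.

```python
import math
import random

# Minimal MCTS sketch on Nim: players alternately take 1-3 stones;
# whoever takes the last stone wins. Toy example, not AlphaGo's code.

class Node:
    def __init__(self, stones, player, parent=None, move=None):
        self.stones = stones              # stones left in this state
        self.player = player              # player to move: +1 or -1
        self.parent = parent
        self.move = move                  # move that produced this state
        self.children = []
        self.untried = [m for m in (1, 2, 3) if m <= stones]
        self.visits = 0
        self.wins = 0.0                   # wins for the player who made self.move

    def ucb1(self, c=1.4):
        # Upper Confidence Bound: trades off a child's win rate
        # (exploitation) against how rarely it was visited (exploration).
        return (self.wins / self.visits
                + c * math.sqrt(math.log(self.parent.visits) / self.visits))

def rollout(stones, player):
    """Play uniformly random moves to the end; return the winner."""
    while True:
        stones -= random.randint(1, min(3, stones))
        if stones == 0:
            return player                 # this player took the last stone
        player = -player

def mcts(stones, player, iterations=3000):
    root = Node(stones, player)
    for _ in range(iterations):
        node = root
        # 1. Selection: descend via UCB1 while fully expanded.
        while not node.untried and node.children:
            node = max(node.children, key=Node.ucb1)
        # 2. Expansion: add one child for an untried move.
        if node.untried:
            m = node.untried.pop()
            node = Node(node.stones - m, -node.player, node, m)
            node.parent.children.append(node)
        # 3. Simulation: random playout from the new state.
        if node.stones == 0:
            winner = -node.player         # the previous player just won
        else:
            winner = rollout(node.stones, node.player)
        # 4. Backpropagation: credit each node's mover with the result.
        while node is not None:
            node.visits += 1
            if node.parent is not None and winner == node.parent.player:
                node.wins += 1
            node = node.parent
    return max(root.children, key=lambda n: n.visits).move

# With 10 stones the optimal move is to take 2 (leaving a multiple of 4);
# a few thousand iterations are usually enough for MCTS to find it.
print(mcts(stones=10, player=1))
```

Because plain MCTS estimates move values from sampled playouts, rare but critical lines of play can be under-sampled, which is one hypothesis for the kind of weakness Lee Sedol exploited in game four.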

5. Non-player characters in a video game created weapons their creators never intended

In June of this year, the AI in the video game Elite: Dangerous did something its developers never planned: it crafted superweapons that lay outside the game's design. One gaming site commented that human players were being defeated by the strange weapons the AI had created. The developers subsequently removed the weapons from the game.

6. AI-judged beauty also showed racial discrimination

In the first "International Artificial Intelligence Beauty Contest", the robot expert group based on "an algorithm that can accurately assess human aesthetic and health standards" judged the face. However, since the various training sets were not provided for artificial intelligence, the winners of the competition were all white. As Yampolskiy said, "Beauty is in the pattern recognizer."

7. AI used to predict crime showed racial bias

Northpointe developed an artificial intelligence system to predict the probability that an alleged offender will commit another crime. The algorithm, likened to something out of "Minority Report," was accused of racial bias: in testing, black defendants were far more likely than defendants of other races to be flagged as high risk. The outlet ProPublica also pointed out that, even setting race aside, Northpointe's algorithm was not very accurate in most cases.
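
The disparity ProPublica described can be made concrete with a simple audit: compare how often non-reoffenders in each group are wrongly flagged as high risk. Below is a minimal sketch of such a false-positive-rate check; the records and group labels are hypothetical stand-ins, not Northpointe's data or code.

```python
from collections import defaultdict

# Hypothetical prediction records: (group, predicted_high_risk, reoffended)
records = [
    ("A", True,  False), ("A", False, False), ("A", True,  True),
    ("B", True,  False), ("B", True,  False), ("B", False, True),
]

def false_positive_rates(records):
    flagged = defaultdict(int)    # non-reoffenders wrongly flagged, per group
    negatives = defaultdict(int)  # all non-reoffenders, per group
    for group, predicted_high_risk, reoffended in records:
        if not reoffended:
            negatives[group] += 1
            if predicted_high_risk:
                flagged[group] += 1
    return {g: flagged[g] / negatives[g] for g in negatives}

print(false_positive_rates(records))  # e.g. {'A': 0.5, 'B': 1.0}
```

A large gap between the groups' false positive rates is exactly the disparity ProPublica reported: members of one group were wrongly labeled high risk far more often than members of another.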

8. A security robot injured a child

Knightscope builds security robots billed as "crime-fighting robots." In July, one of them knocked down and injured a 16-month-old boy at a shopping mall in Silicon Valley. The Los Angeles Times quoted the company as calling it a "freakish accident."

9. Chinese researchers used facial recognition to predict criminality, drawing charges of bias

Two researchers at Shanghai Jiao Tong University published a paper called "Automated Inference on Criminality using Face Images." According to the foreign outlet the Mirror, the researchers analyzed 1,856 facial images and used identifiable facial features, such as lip curvature, the distance between the inner corners of the eyes, and even the nose-mouth angle, to predict criminality. Many people in the field questioned the results and raised ethical concerns about the research.

10. An insurance company tried to use Facebook data to predict accident rates

The final case comes from Admiral Insurance, Britain's largest auto insurer, which planned this year to use Facebook users' post data to test the link between social media activity and being a good driver.

Walsh regards this as an abuse of artificial intelligence, noting that "Facebook did a good job of limiting access to the data." The project, called "firstcarquote," never launched because Facebook barred the company from accessing the data.

From the cases above, readers of Lei Feng.com can see how easily AI systems can go wrong. Humans therefore need to train machine learning algorithms on diverse data sets to avoid AI bias. At the same time, as AI continues to develop, it is becoming ever more important to rigorously vet the relevant research, ensure the diversity of data, and establish appropriate ethical standards.
