AI is developing so quickly that the latest advances in the field have produced neural networks that can recognize when their own output should not be trusted.

According to a November 26 report by the Buenos Aires Economic News Network, this deep learning neural network is designed to imitate the human brain: it can weigh many factors at once and find patterns in data at a scale that human analysis cannot match.

This matters because AI is now used in fields that directly affect human life, such as self-driving cars, aircraft autopilot, entire transportation systems, medical diagnosis, and surgery.

Although AI will not be as devastating as the machines in the film “I, Robot” or the notorious robot dogs in the TV series “Black Mirror”, machines capable of autonomous action have already entered our daily lives, and their predictions become far more useful if they can tell when they themselves are likely to fail. That ability is essential for improving how they operate and for avoiding the kind of disasters imagined in science fiction.

Alexander Amini, a computer scientist at the Massachusetts Institute of Technology’s Computer Science and Artificial Intelligence Laboratory (CSAIL), said: “We need the ability to generate high-performance models as well as the ability to understand when these models cannot be trusted.”

Amini is part of a research team, together with fellow computer scientist Daniela Rus, dedicated to advancing these neural networks, with the goal of achieving unprecedented progress in the field of AI.

Their goal is to make AI aware of its own reliability, through a technique they call “deep evidential regression.”
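
As a rough illustration of the idea (not the authors’ published code), a deep evidential regression network can be sketched as a model whose final layer outputs the four parameters of a Normal-Inverse-Gamma distribution over the target, from which both the prediction and two kinds of uncertainty are derived in one step. The layer sizes, names, and the tiny usage example below are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class EvidentialRegressionHead(nn.Module):
    """Sketch of a deep evidential regression output layer.

    Instead of a single point estimate, the layer predicts the four
    parameters (gamma, nu, alpha, beta) of a Normal-Inverse-Gamma
    distribution over the target. Sizes and names are illustrative.
    """

    def __init__(self, in_features: int):
        super().__init__()
        self.linear = nn.Linear(in_features, 4)  # gamma, nu, alpha, beta

    def forward(self, x):
        gamma, nu, alpha, beta = torch.chunk(self.linear(x), 4, dim=-1)
        nu = F.softplus(nu)               # constrain nu > 0
        alpha = F.softplus(alpha) + 1.0   # constrain alpha > 1
        beta = F.softplus(beta)           # constrain beta > 0

        prediction = gamma                           # expected value of the target
        aleatoric = beta / (alpha - 1.0)             # noise inherent in the data
        epistemic = beta / (nu * (alpha - 1.0))      # the model's own uncertainty
        return prediction, aleatoric, epistemic


# Tiny usage example on random features (purely illustrative).
head = EvidentialRegressionHead(in_features=16)
features = torch.randn(8, 16)
pred, aleatoric, epistemic = head(features)
print(pred.shape, aleatoric.shape, epistemic.shape)  # torch.Size([8, 1]) each
```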

The report pointed out that this new neural network improves on similar technologies developed so far because it runs faster and requires less computation. Its prediction arrives together with a measure of how much it can be trusted, quickly enough to keep pace with human decision-making.
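
The speed advantage comes from the fact that sampling-based uncertainty methods, such as Monte Carlo dropout or ensembles, need many forward passes per input, while an evidential network produces its uncertainty in the same single pass as its prediction. The sketch below shows the sampling-based alternative for contrast; it is a generic example, not code from the MIT work, and the model and number of passes are assumptions.

```python
import torch
import torch.nn as nn

# A generic regression model with dropout, used here only to illustrate
# sampling-based uncertainty estimation (Monte Carlo dropout).
model = nn.Sequential(
    nn.Linear(16, 64),
    nn.ReLU(),
    nn.Dropout(p=0.1),
    nn.Linear(64, 1),
)

x = torch.randn(8, 16)

# Sampling-based estimate: keep dropout active and run many passes,
# then use the spread of the outputs as an uncertainty measure.
model.train()  # keeps dropout stochastic
with torch.no_grad():
    samples = torch.stack([model(x) for _ in range(100)])  # 100 forward passes
prediction = samples.mean(dim=0)
uncertainty = samples.std(dim=0)

# An evidential network, by contrast, returns both quantities from a
# single forward pass, which is why it is cheaper at decision time.
print(prediction.shape, uncertainty.shape)
```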

“This idea is important and broadly applicable. It can be used to evaluate products that rely on learned models. By estimating a model’s uncertainty, we can also learn how many errors to expect from it and what data is still needed to improve it,” Rus said.

The research team illustrates this with the example of a self-driving car whose predictions carry different levels of confidence. When deciding whether to drive through an intersection or wait, for instance, the network can signal that it is not confident in its prediction. It can even hint at how the training data should be improved to make more accurate predictions.
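
In that intersection scenario, the uncertainty estimate can act as a gate on the decision: proceed only when the model is confident enough, otherwise fall back to caution. The helper function, threshold, and numbers below are hypothetical, meant only to show the shape of such a rule rather than anything from the MIT study.

```python
# Hypothetical decision rule gated by the model's own uncertainty estimate.
# The threshold and safety margin are illustrative values.
EPISTEMIC_THRESHOLD = 0.05

def choose_action(predicted_gap_seconds: float, epistemic_uncertainty: float) -> str:
    """Decide whether to cross an intersection based on prediction and confidence."""
    if epistemic_uncertainty > EPISTEMIC_THRESHOLD:
        # The model reports it cannot be trusted here: default to caution.
        return "wait"
    if predicted_gap_seconds > 4.0:  # illustrative safety margin
        return "proceed"
    return "wait"

print(choose_action(predicted_gap_seconds=6.2, epistemic_uncertainty=0.01))  # proceed
print(choose_action(predicted_gap_seconds=6.2, epistemic_uncertainty=0.20))  # wait
```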

Amini said: “We are seeing more and more of these neural network models move out of the research laboratory and into the real world, into situations that affect human safety and can even threaten human lives.”