Reference News Network reported on November 27 that artificial intelligence is advancing so quickly that the latest achievements in the field have helped create neural networks that know when artificial intelligence can be trusted.
According to a November 26 report by Argentina's Buenos Aires Economic News Network, these deep-learning neural networks are designed to mimic the human brain, weighing many factors simultaneously in order to detect patterns in data at a level that human analysis cannot reach.
According to the report, this matters because artificial intelligence is now used in fields that directly affect human life, such as the autonomous operation of cars, aircraft, or entire transportation systems, as well as medical diagnosis and surgery.
Although artificial intelligence will not, for some time yet, be as destructive as the machines in the film “I, Robot” or the notorious robot dogs of the TV series “Black Mirror”, machines that act autonomously are already part of daily life. Their predictions may become more accurate, and they may even learn to recognize when they themselves are about to fail, which is essential for improving how they operate and for avoiding the kind of disasters depicted in science fiction.
According to the report, Alexander Amini, a computer scientist at MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL), said: “We need not only the ability to produce high-performance models, but also the ability to know when we cannot trust them.”
Amini worked with a team of researchers, including fellow computer scientist Daniela Rus, to advance the development of these neural networks, with a view to making unprecedented progress in artificial intelligence.
Their goal is to make artificial intelligence aware of its own reliability, using a technique they call “deep evidential regression”.
The report pointed out that this new neural network represents the most advanced of the similar technologies developed so far, because it runs faster and requires less computation: it produces its confidence estimate together with its prediction, so a trust threshold can be applied at the moment a decision is made.
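As a rough sketch only, and not the researchers' actual code, a deep evidential regression network can be thought of as predicting, in a single forward pass, both a value and the parameters of an evidential distribution from which its uncertainty is read off. The layer sizes, names, and toy inputs below are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class EvidentialHead(nn.Module):
    """Illustrative output head for deep evidential regression.

    Instead of a single point estimate, the layer predicts the four
    parameters (gamma, nu, alpha, beta) of a Normal-Inverse-Gamma
    distribution, so the prediction and its uncertainty come out of
    the same forward pass.
    """

    def __init__(self, in_features: int):
        super().__init__()
        self.linear = nn.Linear(in_features, 4)

    def forward(self, x: torch.Tensor):
        gamma, log_nu, log_alpha, log_beta = self.linear(x).chunk(4, dim=-1)
        nu = F.softplus(log_nu)            # constrained to nu > 0
        alpha = F.softplus(log_alpha) + 1  # constrained to alpha > 1
        beta = F.softplus(log_beta)        # constrained to beta > 0

        prediction = gamma                        # point estimate
        aleatoric = beta / (alpha - 1)            # noise inherent in the data
        epistemic = beta / (nu * (alpha - 1))     # model's own uncertainty
        return prediction, aleatoric, epistemic

# Toy usage with hypothetical feature vectors from an upstream backbone.
head = EvidentialHead(in_features=128)
features = torch.randn(8, 128)
pred, aleatoric, epistemic = head(features)
print(pred.shape, epistemic.mean().item())
```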
“The idea is important and broadly applicable. It can be used to evaluate products that rely on learned models. By estimating a model's uncertainty, we also learn how much error to expect from it and what data is needed to improve it,” Rus said.
The research team illustrates this with self-driving cars operating at different levels of confidence. For example, when deciding whether to drive through an intersection or wait, the network may lack confidence in its prediction and hold back. Its confidence estimates even contain hints about how the data could be improved to make predictions more accurate.
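To make the intersection example concrete, a decision rule of this kind might gate the action on the estimated uncertainty. The threshold value and function below are purely illustrative assumptions, not taken from the study.

```python
# Illustrative only: a trust threshold gating an autonomous decision.
TRUST_THRESHOLD = 0.05  # assumed value, chosen for demonstration

def choose_action(predicted_gap_is_safe: float, epistemic_uncertainty: float) -> str:
    """Proceed through the intersection only when the model is confident enough."""
    if epistemic_uncertainty > TRUST_THRESHOLD:
        return "wait"  # uncertainty too high: defer rather than risk an error
    return "proceed" if predicted_gap_is_safe > 0.5 else "wait"

print(choose_action(predicted_gap_is_safe=0.92, epistemic_uncertainty=0.01))  # proceed
print(choose_action(predicted_gap_is_safe=0.92, epistemic_uncertainty=0.20))  # wait
```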
“We are beginning to see more and more of these neural network models move out of the research lab and into the real world, into situations where human safety and lives are at stake,” Amini said.