Press "Enter" to skip to content

A deep learning alternative could help artificial intelligence agents master real-world gameplay

A new machine learning approach, inspired by the way the human brain seems to model and learn about the world, has proven able to master a series of simple video games with impressive efficiency.


The new system, called Axiom, offers an alternative to the artificial neural networks that dominate modern artificial intelligence. Axiom, developed by a software company called Verses AI, comes equipped with prior knowledge about how objects physically interact in a game world. It then uses an algorithm to model how it expects the game to respond to its input, updating that model on the basis of what it observes: a process known as active inference.
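To make the idea concrete, here is a minimal sketch of an active-inference-style loop (not Verses' actual AXIOM code; all names, the toy transition model, and the preferences are illustrative assumptions): the agent keeps a probabilistic belief about the game's hidden state, predicts what each action should produce, picks the action whose predicted outcome is least surprising given its preferences, and then updates its belief against what it actually observes.

```python
import numpy as np

# Illustrative active-inference-style loop (a sketch, not Verses' AXIOM implementation).
rng = np.random.default_rng(0)

N_STATES, N_ACTIONS = 4, 2
# Prior transition model P(next_state | state, action): a toy stand-in for built-in
# knowledge of how objects in the game move and interact.
B = rng.dirichlet(np.ones(N_STATES), size=(N_STATES, N_ACTIONS))
# Likelihood P(observation | state): identity means observations map cleanly to states.
A = np.eye(N_STATES)
# Log-preferences over observations: the agent "wants" to end up seeing state 3.
log_C = np.log(np.array([0.1, 0.1, 0.1, 0.7]))

belief = np.full(N_STATES, 1.0 / N_STATES)  # current belief over the hidden state

def expected_free_energy(belief, action):
    """Score an action by how far its predicted observations sit from the preferences."""
    predicted_state = belief @ B[:, action, :]   # predictive state distribution
    predicted_obs = predicted_state @ A          # predictive observation distribution
    return np.sum(predicted_obs * (np.log(predicted_obs + 1e-12) - log_C))

def step(belief, observation):
    """Pick the least-surprising action, then update beliefs from what was actually seen."""
    action = min(range(N_ACTIONS), key=lambda a: expected_free_energy(belief, a))
    predicted_state = belief @ B[:, action, :]
    posterior = predicted_state * A[:, observation]   # Bayesian update on the observation
    return action, posterior / posterior.sum()

# One illustrative interaction: the agent acts, the (simulated) game returns observation 3.
action, belief = step(belief, observation=3)
print("chose action", action, "updated belief", np.round(belief, 3))
```

The key contrast with standard training loops is that the model itself, not just a reward signal, drives both the action choice and the update.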

The approach draws inspiration from the free energy principle, a theory that tries to explain intelligence using ideas drawn from mathematics, physics, and information theory, as well as from biology. The free energy principle was developed by Karl Friston, a renowned neuroscientist who is chief scientist at Verses, which describes itself as a "cognitive computing" company.
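For readers who want the formal core, the quantity at the heart of the theory is the variational free energy, an upper bound on surprise; one standard textbook formulation (not taken from the article itself) is:

```latex
F = \mathbb{E}_{q(s)}\!\left[\ln q(s) - \ln p(o, s)\right]
  = D_{\mathrm{KL}}\!\left[q(s)\,\|\,p(s \mid o)\right] - \ln p(o)
  \;\geq\; -\ln p(o)
```

where q(s) is the agent's approximate belief over hidden states s, and o are its observations. Minimizing F simultaneously sharpens the agent's internal model and reduces how surprised it is by what it senses.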

Friston told me over video from his home in London that the approach could be especially important for building artificial intelligence agents. "They have to support the kind of cognition we see in real brains," he said. "That requires considering not only the ability to learn things, but actually learning how to act in the world."

The conventional approach to learning to play games involves training neural networks through what is known as deep reinforcement learning, which means experimenting and adjusting their parameters in response to positive or negative feedback. The approach can produce superhuman game-playing algorithms, but it requires a great deal of experimentation to work. Axiom masters various simplified versions of popular video games, called Drive, Bounce, Hunt, and Jump, using far fewer examples and much less computing power.
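For contrast, here is a stripped-down illustration of that trial-and-error loop (a generic policy-gradient sketch on a toy problem, not any lab's actual code): the parameters are nudged toward whichever actions happened to earn positive feedback, which is why so many episodes of play are typically needed.

```python
import numpy as np

# Toy REINFORCE-style loop on a two-armed bandit: a stand-in for deep reinforcement
# learning's trial and error, where parameters are adjusted toward rewarded actions.
rng = np.random.default_rng(0)
logits = np.zeros(2)          # "network" parameters: preferences over two actions
true_rewards = [0.2, 0.8]     # hidden payoff probabilities the agent must discover
lr = 0.1

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

for episode in range(2000):                               # many episodes of experimentation
    probs = softmax(logits)
    action = rng.choice(2, p=probs)                        # try an action
    reward = float(rng.random() < true_rewards[action])    # positive or negative feedback
    grad = -probs
    grad[action] += 1.0
    logits += lr * reward * grad                           # reinforce rewarded actions

print("learned action probabilities:", np.round(softmax(logits), 3))
```

Even on this tiny problem the loop needs thousands of tries; scaled up to pixels and deep networks, the sample cost grows enormously, which is the inefficiency Axiom is meant to avoid.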

"The general goals of the approach and some of its key characteristics are on track with what I see as the most important problems to focus on to get to AGI," says François Chollet, an artificial intelligence researcher who developed ARC 3, a benchmark designed to test the abilities of modern AI algorithms. Chollet is also exploring new approaches to machine learning and is using his benchmark to test models' ability to learn to solve unfamiliar problems rather than simply imitate previous examples.

"The work strikes me as very original, which is fantastic," he says. "We need more people trying out new ideas away from the beaten path of large language models and reasoning language models."

Modern artificial intelligence relies on artificial neural networks that are roughly inspired by the brain's wiring but work in a fundamentally different way. Over the past decade and a bit, deep learning, an approach that uses neural networks, has allowed computers to do all sorts of impressive things, from transcribing speech to recognizing faces and generating images. More recently, of course, deep learning has given rise to the large language models that power garrulous and increasingly capable chatbots.

Axiom, in theory, promises a more efficient approach to building artificial intelligence from scratch. It could be especially effective for creating agents that have to learn efficiently from experience, says Gabe René, CEO of Verses. René says a financial company has started experimenting with the company's technology as a way to model the market. "It is a new architecture for AI agents that can learn in real time and is more accurate, more efficient, and much smaller," says René. "It is literally designed like a digital brain."

Somewhat ironically, given that Axiom offers an alternative to modern AI and deep learning, the free energy principle was originally influenced by the work of the British-Canadian computer scientist Geoffrey Hinton, who was awarded both the Turing Award and the Nobel Prize for his pioneering work on deep learning. Hinton was a colleague of Friston's at University College London for years.

For more information on Friston and the free energy principle, I highly recommend this 2018 article. Friston's work has also influenced an exciting new theory of consciousness, described in a book WIRED reviewed in 2021.


