VERSES Publishes Pioneering Research Demonstrating a More Versatile, Efficient, Physics-Based Foundation for Next-Gen AI

VERSES AI Inc.


New research led by Karl Friston showcases a new foundation for AI that achieves 99% accuracy with 90% less data on the popular MNIST benchmark

VANCOUVER, British Columbia, July 30, 2024 (GLOBE NEWSWIRE) -- VERSES AI Inc. (CBOE:VERS) (OTCQB:VRSSF) (“VERSES” or the “Company”), a cognitive computing company specializing in next-generation intelligent systems, announces that a team led by Chief Scientist Dr. Karl Friston has published a paper titled “From pixels to planning: scale-free active inference,” which introduces an efficient alternative to deep learning, reinforcement learning and generative AI called Renormalizing Generative Models (RGMs). Using a physics-based approach, RGMs address foundational problems in artificial intelligence (AI), namely versatility, efficiency, explainability and accuracy.

‘Active inference’ is a framework with origins in neuroscience and physics that describes how biological systems, including the human brain, continuously generate and refine predictions from sensory input, becoming increasingly accurate over time. While the science behind active inference is well established and considered a promising alternative to state-of-the-art AI, until now it had not demonstrated a viable pathway to scalable commercial solutions. RGMs accomplish this using a “scale-free” technique that adjusts to any scale of data.

“RGMs are more than an evolution; they’re a fundamental shift in how we think about building intelligent systems from first principles that can model space and time dimensions like we do,” said Gabriel René, CEO of VERSES. “This could be the ‘one method to rule them all.’ Because RGM-based agents can model physics and learn the causal structure of information, we can design multimodal agents that not only recognize objects, sounds and activities but can also plan and make complex decisions based on that real-world understanding, all from the same underlying model. This promises to dramatically scale AI development, expanding its capabilities while reducing its cost.”

The paper describes how Renormalizing Generative Models using active inference were able to perform many of the fundamental learning tasks that today require separate AI models, such as object recognition, image classification, natural language processing, content generation, file compression and more. RGMs offer a versatile “universal architecture” that can be configured and reconfigured to perform any or all of the same tasks as today’s AI, but with far greater efficiency. The paper reports that an RGM achieved 99.8% accuracy on a subset of the MNIST digit recognition task, a common benchmark in machine learning, using only 10,000 training images (90% less data). Sample and compute efficiency translate directly into cost savings and development speed for businesses building and deploying AI systems. Upcoming papers are expected to further demonstrate the effective and efficient learning of RGMs, along with related research applied to MNIST and other industry-standard benchmarks such as the Atari Challenge.