A New Digital Twin for Brain Activity Aims to Speed Research
Can artificial intelligence help decode the human brain? A new study by Meta researchers finds that an AI model can act as a "digital twin" of the human brain, predicting human brain activity in response to visual, auditory, and language stimuli. The new model promises to vastly accelerate neuroscience research and deepen our understanding of the human brain.
It is an understatement to say that the human brain is complex. Conducting in vivo experiments on living human brains is difficult, so AI offers an in silico alternative that may greatly accelerate neuroscience research into how our brains work.
AI is rapidly being applied in neurological imaging, diagnostics, early disease detection, neurotech devices, wearable brain monitors, surgical support, precision medicine, drug discovery, and neuroscience research. The worldwide market for AI in neurology is expected to reach $2.5 billion USD by 2030, according to forecasts.
Existing AI models are largely narrow point solutions trained on smaller datasets, designed for targeted purposes that are limited in scope, capabilities, and modalities.
However, the human brain processes multiple stimuli, such as sight, sound, and language, to name a few. In pursuit of a more robust AI model, the FAIR (Fundamental AI Research) team at Meta created a foundation model called TRansformer for In-silico Brain Experiments (TRIBE), which processes video, audio, and text in order to predict human brain responses. Its newest iteration, TRIBE v2, is based on the architecture of an earlier version that won first place in the Algonauts 2025 Challenge, a biennial computational neuroscience competition that evaluates how well models predict human brain activity.
In computer science, AI foundation models are artificial neural networks trained on immense amounts of unlabeled data in order to perform a wide variety of general-purpose functions rather than serving as narrow point solutions. As the name indicates, foundation models can serve as the base for various applications to build upon. For example, the GPT (generative pre-trained transformer) series of foundation models was created largely from the vast amount of public information on the internet, along with third-party data and data generated or provided by researchers, users, and trainers, according to OpenAI. The GPT foundation models are the underlying technology that powers OpenAI's popular chatbot ChatGPT.
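For readers who want a concrete picture of how a foundation model serves as a base, the short Python sketch below reuses a pretrained model as a feature extractor for downstream applications. It assumes the Hugging Face transformers library and the public "gpt2" checkpoint; these are illustrative choices, not components of the Meta study.

```python
# Minimal sketch: reusing a pretrained foundation model as a feature extractor.
# Assumes the Hugging Face `transformers` library and the public "gpt2"
# checkpoint; these are illustrative choices, not the models from the study.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModel.from_pretrained("gpt2")
model.eval()

text = "The human brain processes sight, sound, and language together."
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# One embedding vector per token; downstream applications (classifiers,
# brain encoders, chatbots) build on top of representations like these.
features = outputs.last_hidden_state  # shape: (1, n_tokens, 768)
print(features.shape)
```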
For the new study, the FAIR at Meta research team set out to create a foundation model that is not only more flexible and general-purpose but also capable of going beyond language data to process sound and video. In other words, the goal was an AI foundation model robust enough to accurately predict human brain activity from more natural stimuli across language, audio, and video.
“The representational alignment between brains and algorithms delineates a path toward a foundation model of human brain function—derived not from first principles but from the direct mapping of large amounts of brain responses to pretrained AI architectures,” wrote Stéphane d’Ascoli and Jean-Rémi King, two of the researchers. Their digital twin of the human brain provides a more integrated, multisensory view of human brain activity.
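To make the quoted recipe concrete, here is a toy Python (PyTorch) sketch of the general idea: features from pretrained encoders for each modality are projected, fused, and mapped onto fMRI responses. The dimensions, fusion scheme, and names are invented for illustration and do not describe TRIBE's actual architecture.

```python
# Toy sketch of the general recipe: map pretrained multimodal features onto
# fMRI responses. All dimensions and the fusion scheme are illustrative
# assumptions; this is not the TRIBE architecture.
import torch
import torch.nn as nn

class ToyBrainEncoder(nn.Module):
    def __init__(self, d_video=768, d_audio=768, d_text=768,
                 d_model=256, n_voxels=1000):
        super().__init__()
        # Project each modality's pretrained features into a shared space.
        self.proj_video = nn.Linear(d_video, d_model)
        self.proj_audio = nn.Linear(d_audio, d_model)
        self.proj_text = nn.Linear(d_text, d_model)
        # Fuse the modalities and predict one activation per voxel.
        self.fuse = nn.Sequential(
            nn.Linear(3 * d_model, d_model), nn.GELU(),
            nn.Linear(d_model, n_voxels),
        )

    def forward(self, video_feats, audio_feats, text_feats):
        z = torch.cat([
            self.proj_video(video_feats),
            self.proj_audio(audio_feats),
            self.proj_text(text_feats),
        ], dim=-1)
        return self.fuse(z)  # predicted fMRI response, (batch, n_voxels)

# Stand-in features, as if extracted by frozen pretrained encoders.
model = ToyBrainEncoder()
pred = model(torch.randn(4, 768), torch.randn(4, 768), torch.randn(4, 768))
print(pred.shape)  # torch.Size([4, 1000])
```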
“Leveraging a unified dataset of over 1,000 hours of fMRI across 720 subjects," they reported, "we demonstrate that our model accurately predicts high-resolution brain responses for novel stimuli, tasks, and subjects, superseding traditional linear encoding models, delivering several-fold improvements in accuracy."
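For context on the baseline the researchers compare against: a traditional linear encoding model fits a regularized linear map from stimulus features to each voxel's response. The following minimal sketch uses scikit-learn's Ridge on synthetic stand-in data; nothing here comes from the study itself.

```python
# Minimal sketch of a traditional linear encoding model: ridge regression
# from stimulus features to per-voxel fMRI responses. The data here are
# synthetic stand-ins, purely for illustration.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_timepoints, n_features, n_voxels = 500, 768, 1000

X = rng.standard_normal((n_timepoints, n_features))        # stimulus features
W = rng.standard_normal((n_features, n_voxels)) * 0.1      # hidden ground truth
Y = X @ W + rng.standard_normal((n_timepoints, n_voxels))  # fMRI responses

# Fit one regularized linear map per voxel (Ridge handles them jointly).
encoder = Ridge(alpha=10.0).fit(X[:400], Y[:400])

# Evaluate by correlating predicted and held-out responses, voxel by voxel.
pred = encoder.predict(X[400:])
true = Y[400:]
r = [np.corrcoef(pred[:, v], true[:, v])[0, 1] for v in range(n_voxels)]
print(f"mean held-out correlation: {np.mean(r):.3f}")
```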
Copyright © 2026 Cami Rosso All rights reserved.
