Network Dynamics

A neuroinformatics platform called "The Virtual Brain" (TVB) uses biologically realistic connectivity to simulate complete brain networks. This simulation environment makes it possible to infer, through modeling, the neurophysiological processes underlying large-scale neuroimaging signals such as functional MRI (fMRI), EEG, and MEG across different brain scales.

Author: Suleman Shah
Reviewer: Han Ju
Sep 08, 2022
Brain function is thought to emerge from the interaction of large numbers of neurons, constrained in space and time by the structure of the brain and the demands of cognition.
Modern network simulations mostly target the microscopic and mesoscopic levels: neural networks and neural masses representing specific cortical areas. At these levels, simulations include detailed biophysical information, but they often lose sight of how the brain works as a whole.
On the other hand, over the last few decades, the level of evaluation of global cortical dynamics in human patients and research subjects has increased dramatically across all imaging modalities.
Cognitive and clinical neuroscience, in particular, uses imaging techniques of macroscopic brain activity, such as intracerebral measurements, stereotactic electroencephalography (sEEG), electroencephalography (EEG), magnetoencephalography (MEG), and functional magnetic resonance imaging (fMRI), to measure brain dynamics and evaluate diagnostic and therapeutic approaches.
There is therefore strong motivation to build an effective, flexible neuroinformatics platform at this macroscopic level of brain organization, so that a wide range of brain dynamics can be reproduced and studied, and data can be processed and visualized quickly.


The virtual brain ships with realistic, large-scale primate brain connectivity. Long-range connectivity is derived from brain fiber tracts, as identified by tractography or taken from the CoCoMac neuroinformatics database.
In the virtual brain's demonstration connectivity dataset, the tract-length matrix is symmetric because the fiber identification algorithms are insensitive to directionality. The weights matrix, by contrast, is asymmetric because it incorporates directional information from CoCoMac tracer studies.
In general, however, the weights and tract-length matrices are not constrained to be symmetric (or asymmetric): both are implemented as full node-by-node matrices.
In the virtual brain, the connection matrix and the folded cortical surface provide long-range and short-range structural connectivity, respectively. The connectivity matrix describes link strengths and time delays between brain areas. In neural field modeling, cortical surface sampling gives an approximation of brain activity that is continuous in space.
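The relationship between the two matrices above can be sketched in a few lines of NumPy. This is an illustrative toy, not TVB's actual API: the region count, value ranges, and conduction speed are all assumptions.

```python
import numpy as np

# Toy long-range connectivity for four regions (values are hypothetical).
n = 4
rng = np.random.default_rng(0)

# Tract lengths (mm): symmetric, as produced by direction-insensitive
# tractography -- generate a random matrix and symmetrize it.
lengths = rng.uniform(10.0, 120.0, size=(n, n))
lengths = (lengths + lengths.T) / 2.0
np.fill_diagonal(lengths, 0.0)

# Weights: asymmetric, as directional tracer data (e.g., CoCoMac) allow.
weights = rng.uniform(0.0, 1.0, size=(n, n))
np.fill_diagonal(weights, 0.0)

# Transmission delays follow from tract length and a finite conduction
# speed (here 3 mm/ms, i.e., 3 m/s).
conduction_speed = 3.0  # mm/ms
delays = lengths / conduction_speed  # ms
```

Note that only the tract-length matrix is symmetrized; the weights keep their directionality, mirroring the demonstration dataset described above.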

The Virtual Brain


A large-scale simulation project needs a well-defined, flexible workflow that adapts to different user profiles. A typical virtual brain workflow includes keeping track of project information, uploading data, configuring the simulation (model, integration scheme, output mode), launching simulations (in parallel if needed), analyzing and visualizing the results, storing findings, and sharing output data.
The web interface enables users without programming skills to run customized simulations in the virtual brain (e.g., physicians may use DTI data from their patients). It also helps them understand the theories operating behind the scenes.
Theoreticians may develop their own models to investigate biophysical realism, physiological applications, and their consequences. Both types of users can operate inside the same framework, accelerating the interplay between theory and experiment or application. More experienced programmers benefit from Python, which is easy to learn, scriptable, and offers a broad selection of scientific modules.
Python was chosen for its versatility, existing libraries, and accessibility to non-programmers. The simulation core was initially written in MATLAB but was ported to Python because of Python's growing importance in numerical computing and neuroscience and the ease with which modeling tools can be built with it.

The Virtual Brain Framework

A database back-end, workflow management, and a variety of tools are provided by the supporting framework to facilitate collaborative work. The latter capability enables the multi-user configuration of the virtual brain.
In this setup, users access their own sessions through a login system; by default, their projects and data are private but can be shared with other users. The web-based graphical user interface (GUI) uses HTML5, WebGL, CSS3, and JavaScript to create a dynamic interface that is easy to use and can be accessed both locally and remotely.

A Web-Based Graphical User Interface

The virtual brain is a web-based tool for visualizing connectivity and network dynamics. The virtual brain includes time-series analysis tools, structural and functional connectivity analysis tools, and parameter exploration capabilities that may run simulations in parallel on a cluster or on multiple server computing cores.
The virtual brain's GUI contains six key areas: USER, PROJECT, SIMULATOR, ANALYZE, STIMULUS, and CONNECTIVITY. The USER manages accounts and virtual brain settings. PROJECT manages individual projects and provides tools to analyze their structure and data.
A sub-menu in this section provides a dashboard listing all operations with their state (running, error, done), owner, wall-time, and associated data. In SIMULATOR, a large-scale network model is set up and simulations are launched. Structural and functional data viewers, as well as other displays, help visualize simulation results, and a simulation history is kept here. ANALYZE provides time-series and network analysis. STIMULUS lets users interactively construct stimulation patterns. CONNECTIVITY gives users an interactive interface to edit connectivity matrices.

Data Management And Exchange

The virtual brain aims to provide researchers with varying programming abilities easy access to simulated data. Data from the virtual brain may be transferred to other instances (copies on separate machines) or with neuroscientific software, such as MATLAB, Octave, or The Connectome ToolKit.
XML files are developed to store structured metadata, such as the steps necessary to set up a simulation, configuration settings for specific operations, time-stamps, and user account information.
This first export mechanism captures a researcher's project protocol so that projects can be transferred between virtual brain instances. The second export mechanism exports individual data items. The virtual brain uses the HDF5 format because it can store large amounts of data in a compact form, organize data in a tree structure, attach metadata at every level, and is widely supported across programming languages and applications.
Each object in the virtual brain has a globally unique identifier (GUID), which makes it easier to track objects across different machines and prevents name collisions between files containing similar objects.
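The combination of a GUID with structured XML metadata can be sketched with Python's standard library. The tag names below are hypothetical illustrations of the kind of record described above, not the virtual brain's actual schema.

```python
import uuid
import xml.etree.ElementTree as ET

# Build a hypothetical operation-metadata record: a GUID plus the kinds
# of fields mentioned in the text (creation date, status, parameters).
op = ET.Element("operation")
ET.SubElement(op, "guid").text = str(uuid.uuid4())  # globally unique id
ET.SubElement(op, "create_date").text = "2022-09-08T12:00:00"
ET.SubElement(op, "status").text = "Finished"
params = ET.SubElement(op, "input_parameters")
ET.SubElement(params, "param", name="simulation_length").text = "1000.0"

# Serialize to a string, as would be written next to the HDF5 data files.
xml_text = ET.tostring(op, encoding="unicode")
```

Because the GUID travels with the metadata, the same object can be recognized after being imported into another virtual brain instance, even if file names clash.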

File Storage

The storage system is a hierarchical structure of folders and files. The user may choose the exact location on the disk, although the default is a folder named "the virtual brain" in the user's home folder. Each project has a sub-folder in which an XML file providing information about the project is maintained.
Then, for each operation, a folder is created containing the .h5 files produced during that process, along with one XML file describing the operation itself. The XML includes tags such as the creation date, operation status (e.g., Finished, Error), algorithm reference, operation GUID, and, most importantly, a dictionary of input parameters.
The file system stores enough precise information to allow data to be exported from one instance of the virtual brain and then imported into another, accurately replicating projects, including all operations and their outcomes. Although the amount of data generated by each operation varies significantly due to the monitors used and simulation settings, some general estimates are provided below.
  • A 1000 ms region-based simulation with the default settings takes around 1 MB of storage space.
  • A 10-ms surface-based simulation that uses a precalculated sparse matrix to define the local connectivity kernel and all the default settings uses about 280 MB.
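As a rough sanity check on the first figure, the raw sample count can be estimated back-of-the-envelope. All parameter values below (region count, sampling period, float width) are assumptions for illustration, not the virtual brain's actual defaults.

```python
# Rough storage estimate for a region-based simulation.
n_regions = 74          # assumed network size, as in the benchmark network
sample_period_ms = 1.0  # assumed monitor sampling period
sim_length_ms = 1000.0  # 1000 ms, as in the bullet above
bytes_per_value = 8     # float64

n_samples = int(sim_length_ms / sample_period_ms)
raw_bytes = n_samples * n_regions * bytes_per_value
raw_mb = raw_bytes / 2**20
# ~0.56 MB of raw samples -- the same order of magnitude as the ~1 MB
# quoted above once metadata and file-format overhead are included.
```

Surface-based simulations are far larger simply because the state is stored per vertex (tens of thousands) rather than per region (tens), which is consistent with the jump to hundreds of megabytes in the second bullet.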
Users may manually delete unwanted data from the virtual brain's GUI by utilizing the associated controls. In this situation, any files associated with this data are erased as well, freeing up storage space. In the GUI's USER Settings working area, you can change how much physical storage space the virtual brain can use. This is, of course, limited by how much free space is on the user's hard disks.


Database Management System

Internally, the virtual brain architecture employs a relational database (DB) for entity ordering and linking, as well as an indexing capability for efficient data retrieval. Users may choose SQLite (a file-based database and one of the most popular embedded DB systems) or PostgreSQL (a robust, widely used, open-source object-relational DB system that needs user installation) as the DB engine during the installation process.
Only references to entities are saved in the database, with actual operation results always being stored in files owing to capacity constraints. A relational database was chosen because it is easy to filter entities and move through entity connection trees.
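This reference-only pattern is easy to sketch with Python's built-in sqlite3 module. The schema, column names, and file path below are hypothetical; the point is that the database row holds a GUID and a path, while the bulk data stays in an HDF5 file on disk.

```python
import sqlite3
import uuid

# In-memory database standing in for the virtual brain's entity index.
db = sqlite3.connect(":memory:")
db.execute(
    "CREATE TABLE datatypes (guid TEXT PRIMARY KEY, type TEXT, storage_path TEXT)"
)

# Register an entity: only its reference is stored, not its contents.
guid = str(uuid.uuid4())
db.execute(
    "INSERT INTO datatypes VALUES (?, ?, ?)",
    (guid, "TimeSeriesRegion", f"project1/operation42/{guid}.h5"),
)

# Relational filtering: look up where a given entity's data lives.
path = db.execute(
    "SELECT storage_path FROM datatypes WHERE guid = ?", (guid,)
).fetchone()[0]
```

Swapping SQLite for PostgreSQL changes only the connection, not this pattern, which is why the virtual brain can offer both engines at install time.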

Analyzers And Visualizers

Several methods and methodologies are already accessible in the virtual brain for the analysis and display of simulated neuronal dynamics as well as imported data such as anatomical structure and experimentally recorded time-series. Here we present some of the techniques and approaches available for data analysis and display through a graphical user interface.
Analyzers are mostly time-series and network analysis methods. They wrap functions from NumPy (fast Fourier transform (FFT), auto-correlation, variance metrics), SciPy (cross-correlation), scikit-learn (ICA), and matplotlib-mlab (PCA). There are also implementations of the wavelet transform, complex coherence, and multiscale entropy (MSE).
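The flavor of these analyzers can be shown with a self-contained NumPy example: recovering the dominant frequency of a noisy signal with the FFT. The 10 Hz signal and sampling rate are made up for illustration.

```python
import numpy as np

# Two seconds of a 10 Hz oscillation plus a little noise.
fs = 1000.0                       # sampling rate, Hz
t = np.arange(0, 2.0, 1.0 / fs)
rng = np.random.default_rng(0)
signal = np.sin(2 * np.pi * 10.0 * t) + 0.1 * rng.normal(size=t.size)

# Real-input FFT and the frequency axis that goes with it.
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(signal.size, d=1.0 / fs)
dominant = freqs[np.argmax(spectrum[1:]) + 1]  # skip the DC bin
```

Applied to simulated EEG channels instead of a toy sine, the same few lines underlie the spectral analyzers mentioned above.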
Visualizers are tools that are intended to accurately handle and show certain types of data. Histogram plots, interactive time-series plots, EEG, 2D head topographic maps, 3D displays of surfaces and animations, and network graphs are now accessible in the graphical user interface. There is also a suite of charting tools based on matplotlib and mayavi accessible for shell users.

Performance, Reproducibility, And Flexibility

Testing For Speed

There is no alternative platform for complete brain models against which the performance of the virtual brain can be measured to establish a suitable run-time/real-time ratio.
As a first approximation, a basic network of 74 nodes was built in the Brian spiking neural network simulator, with node dynamics governed by the equations of the Generic2dOscillator model. The simulation length was 2048 ms and the integration step size was 0.125 ms (dt = 2⁻³ ms).
This network was tested using a random sparse connection matrix with no time delays. Execution times were around 4.5 s in Brian and 15 s in the virtual brain.
When heterogeneous time delays were incorporated, the running times in Brian grew significantly (about 6.5×), while those in the virtual brain were hardly affected (approximately 1.2×). The simulations were carried out on an Intel® Xeon® W3520 CPU running at 2.67 GHz.
Although these results are informative, they mainly reflect the different designs of the virtual brain and the Brian simulator, which were built to serve different modeling goals.
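The benchmark's structure (74 two-dimensional oscillators, linear coupling, fixed-step integration, no delays) can be sketched as follows. The node equations here are a simplified FitzHugh-Nagumo-style stand-in, not TVB's actual Generic2dOscillator, and the coupling strength and sparsity are assumptions.

```python
import numpy as np

# 74-node network of 2D oscillators with sparse random linear coupling,
# integrated with a fixed-step Euler scheme (dt as in the benchmark).
rng = np.random.default_rng(42)
N = 74
dt = 0.125            # ms
steps = 2000
weights = (rng.random((N, N)) < 0.1) * rng.random((N, N))  # ~10% density
np.fill_diagonal(weights, 0.0)
g = 0.01              # global coupling strength (assumed)

v = 0.1 * rng.standard_normal(N)   # fast variable
w = 0.1 * rng.standard_normal(N)   # slow recovery variable
trace = np.empty((steps, N))
for i in range(steps):
    coupling = g * weights @ v               # instantaneous (no delays)
    dv = v - v**3 / 3.0 - w + coupling       # FitzHugh-Nagumo-like
    dw = 0.05 * (v + 0.7 - 0.8 * w)
    v = v + dt * dv
    w = w + dt * dw
    trace[i] = v
```

Adding heterogeneous delays would turn the single `coupling` line into a lookup over per-connection history buffers, which is exactly the feature the virtual brain's architecture is optimized for and a spiking simulator like Brian is not.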

Higher-Level Simulation Scenarios Using Stimulation Protocols

The McIntosh et al. approach was used, and the purpose was to illustrate how to design stimulation patterns in the virtual brain, utilize them in a simulation, collect EEG recordings of resting state (RS) and evoked responses (ER), and then compute MSE to analyze the complexity of the resultant time-series.
The two-stream theory in visual neuroscience posits ventral and dorsal information processing. In the ventral stream, subcortical activity projects to V1 and propagates to the temporal cortices via V2 and V4. To show how virtual brain stimulation patterns work, the primary visual cortex (V1) was stimulated with a periodic rectangular pulse, and its propagation to neighboring areas, especially V2, was observed.
Benefiting from the virtual brain's flexibility, it is possible to systematically stimulate a specific brain region (e.g., V1) and highlight the anatomical connection to its target region (e.g., V2) by observing the delayed activity; analyzing the model's responses; handling multi-modal simulated data; and extracting metrics from computationally expensive algorithms to characterize both the "resting" and "evoked" states.
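A stimulation pattern like the one above factors into a temporal profile (a periodic rectangular pulse) and a spatial weighting (which regions receive it). The sketch below illustrates that decomposition; all timing values and the region index for "V1" are made up.

```python
import numpy as np

def pulse_train(t_ms, onset=100.0, period=200.0, width=20.0, amplitude=1.0):
    """Amplitude of a periodic rectangular pulse at times t_ms (array)."""
    active = t_ms >= onset
    phase = (t_ms - onset) % period
    return amplitude * (active & (phase < width)).astype(float)

t = np.arange(0.0, 1000.0, 1.0)       # 1 s at 1 ms resolution
stim = pulse_train(t)                  # temporal profile

# Spatial weighting: only the stimulated region (e.g., V1) gets input.
n_regions = 74
region_weights = np.zeros(n_regions)
region_weights[10] = 1.0               # hypothetical index of V1
spatiotemporal = np.outer(stim, region_weights)   # shape (time, regions)
```

The outer product yields a (time × regions) stimulus array that can be added to the model's input term at each integration step, which is conceptually how a region-based stimulus enters a simulation.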
Two hands joined together side by side with a colored brain hovering over them
Two hands joined together side by side with a colored brain hovering over them

Dynamic Modeling

From both the shell and web interfaces it is possible to exploit another feature of the virtual brain: simulation continuation. A simulation can be stopped so that users may modify model parameters and scaling factors, apply or remove stimulation or spatial constraints (e.g., local connectivity), or make any other change that does not alter the spatiotemporal domain of the system or its output (integration step, transmission speed, and spatial support).
This allows simulations to be updated dynamically at runtime and leads toward adaptive modeling, in which stimuli and other elements change according to the ongoing activity (for now, this last feature is only available through the scripting interface).
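The pause-modify-resume idea can be illustrated with a Python generator: the run yields control after every step, and the caller can inject new parameters mid-run. This is a sketch of the concept, not the virtual brain's implementation; the toy leaky-integrator dynamics are an assumption.

```python
def simulate(steps, dt=0.1):
    """Toy simulation that accepts parameter updates between steps."""
    params = {"stimulus": 0.0}
    x = 0.0
    for _ in range(steps):
        # Euler step of a leaky integrator driven by the stimulus.
        x += dt * (-x + params["stimulus"])
        changes = yield x
        if changes:               # caller sent new parameter values
            params.update(changes)

sim = simulate(1000)
baseline = [next(sim) for _ in range(100)]   # "resting" run, no input
_ = sim.send({"stimulus": 1.0})              # pause point: inject input
evoked = [next(sim) for _ in range(100)]     # continue the same run
```

Because the generator retains its state (`x` and `params`) across the pause, the continued run picks up exactly where it stopped, which is the essential property of simulation continuation.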

People Also Ask

What Is The Virtual Brain?

The Virtual Brain (TVB) is a free and open platform for creating and modeling individual brain network models. The TVB-on-EBRAINS ecosystem has a number of modules, simulation tools, pipelines, and data sets that are ready to use on EBRAINS.

Why Do We Need The Virtual Brain?

The virtual brain could store a person's prior experiences, memories, and knowledge, so that information about the individual could be retrieved at any moment. Such a record could even be used, for example, to help establish a person's involvement in a crime.

What Is The Virtual Brain In Blue Brain?

A virtual brain is a computer that simulates mammalian brain activities and generates output. This process requires the use of a powerful supercomputer. Blue Gene is an IBM supercomputer designed to deal with such problems, thus the moniker Blue Brain.

What Is The Benefit Of The Virtual Brain?

The Virtual Brain Initiative is one of the most well-known experiments for understanding and organizing brain data. It is a neuroinformatics platform that attempts to model brain structure on a macroscopic scale. This tool is based on the idea that imaging methods like MRI, functional MRI, and transcranial magnetic stimulation can be used to get functional and structural information about the brain.


Regarding performance, it is important to assess all the factors that affect memory use and execution time in surface-based simulations. Realistic brain network models are built on surface meshes with thousands of vertices per hemisphere (2¹³ = 8,192 for the virtual brain's demonstration cortical surface), and meshes can easily exceed 40,000 vertices.
Developing new tests to verify the simulation engine's consistency and stability is also crucial.
Our modeling method, which is outside the scope of this paper, also merits consideration. In academic articles to come, the idea behind building a broad framework for modeling brain networks will be explained.
To get the most information into the virtual brain, we're building an educational platform on the web that will be a major reference.
As virtual brain simulations are based on the human brain's large-scale anatomy, integrating fresh, trustworthy structural data is important to the platform's advancement. The Human Connectome Project database will be a future resource.