Keep up-to-date with our latest news.
Maturation and plasticity in biological and artificial neural networks
Cargèse, Corsica
October 21-25, 2024
The Barcelona and CNRS teams recently met in Cargèse for the “Maturation and Plasticity” workshop, with the aim of advancing discussions of ongoing work and progress in WP3 & WP4. The workshop, co-organized by Remi Monasson (CNRS), was also a unique opportunity to present the experimental and theoretical results obtained by NEU-ChiP to the large community of neuroscientists attending the workshop who are interested in the plasticity and maturation of neural networks.



Jordi Soriano, Anna Haeb, Mikel Ocio-Moliner (Barcelona) and Francesco Borra (CNRS) gave invited and contributed talks on their research, while Akke Houben, Mireia Olives and Belén Montenegro (Barcelona) presented their results during the poster session (see photos).
More information about the workshop and program can be found at: https://www.phys.ens.psl.eu/~monasson/Cargese2024/workshop_cargese_2024.htm
FENS Blog: On-Device Machine Learning with Memristors in the Neuromorphic Era
Talk given by Prof. Shahar Kvatinsky
Today’s AI applications demand tremendous computing power, yet existing AI hardware is hitting a bottleneck in speed and power. This is largely due to the data movement forced by the separation of computing and memory in the von Neumann architecture, together with the high energy consumption, access time and cost of existing memory technologies. Current AI relies on general-purpose devices such as GPUs and on dedicated hardware such as TPUs and edge-inference ASICs. The brain, by contrast, performs its processing at very low power and is particularly good at perception tasks.

In this talk, Prof. Shahar Kvatinsky explained potential ways to improve AI hardware through brain-inspired neuromorphic computing with emerging memory devices called memristors. Memristors can emulate synaptic functions and can be used to accelerate neural networks, for example by performing Vector-Matrix Multiplication (VMM) and Multiply-and-Accumulate (MAC) operations on memristor crossbars. He explained how memristive neuromorphic systems are trained with backpropagation and stochastic gradient descent, presented methods for low-power neuromorphic computing such as low-precision AI inference and trainable data converters, and discussed some of the security issues and mitigation strategies. He also talked about various emerging memory devices that can be used to build these systems, such as Magnetic Tunnel Junction (MTJ) based devices and a floating-gate flash memory called Yflash, built by Tower Semiconductor. He described quantized deep neural networks with MTJ-based devices and simpler neural network models, such as Deep Belief Networks (DBNs), built with Yflash devices.
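To make the crossbar idea concrete, here is a minimal NumPy sketch of how a vector-matrix multiplication maps onto memristor conductances via Ohm’s and Kirchhoff’s laws. It is an illustration only, not code or parameters from the talk; the conductance range and the two-devices-per-weight scheme are common assumptions rather than a specific design.

```python
import numpy as np

# Trained weights of one neural-network layer (4 inputs -> 3 outputs).
rng = np.random.default_rng(0)
weights = rng.standard_normal((4, 3))

# Map weights onto crossbar conductances. Conductances are non-negative,
# so a common trick is to use two devices per weight (a "positive" and a
# "negative" column) and take the difference of the column currents.
g_max = 1e-4                                # assumed maximum device conductance (S)
scale = g_max / np.abs(weights).max()
g_pos = np.clip(weights, 0, None) * scale
g_neg = np.clip(-weights, 0, None) * scale

# Input activations encoded as read voltages applied to the rows.
v_in = np.array([0.2, -0.1, 0.3, 0.05])

# Ohm's law + Kirchhoff's current law: each column current is the sum of
# v_i * g_ij over the rows, i.e. one multiply-accumulate per device.
i_out = v_in @ g_pos - v_in @ g_neg

# Rescaling the currents recovers the ideal vector-matrix product.
print(i_out / scale)
print(v_in @ weights)  # should match up to floating-point error
```

Each device performs one multiplication in the analogue domain and the column wires perform the accumulation, which is exactly the MAC operation that makes crossbars attractive as neural-network accelerators.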

Prof. Shahar Kvatinsky’s talk highlighted some of the devices and techniques to accelerate and improve the performance of hardware for AI applications.
Written by Rishona Daniels
International dissemination of Neu-ChiP
Jordi Soriano from the University of Barcelona is participating in the school ENREDANDO 2024, a school on complex networks and nonlinear dynamics taking place at the Universidad Nacional de Colombia in Bogotá. The school covers topics ranging from epidemics to neuroscience and artificial intelligence, and is intended to motivate young students to engage in these and related research fields.


PDRA Vacancy in Bio-engineering
We have a Research Associate vacancy in bio-engineering at Loughborough University, UK – further details for this can be found here.
HO FAI (JACKY) PO – FENS Blog
In the field of biological machine learning, promising attempts have been made to use cortical neurons for machine learning tasks. However, these efforts often lack a strong theoretical foundation from a mathematical perspective.


Prof Saad’s talk introduced three mathematical tools to address this gap:
- Neuronal Network Inference: Saad presented an algorithm that uses machine learning and statistical physics to infer the structure and connectivity of neuronal networks from spontaneous activities. This is crucial for understanding neuronal architecture and plasticity.
- Visual Informatics Approach: He showcased methods to study differences in neuronal activities under various conditions using advanced dimensionality reduction techniques, retaining the global structure of high-dimensional data better than traditional methods like PCA and t-SNE.
- Spatial Entropy Measurement: Saad emphasized using a Bayesian approach to measure the spatial entropy of neuronal activities accurately, revealing significant differences between spontaneous and stimulated activities. This provides a reliable metric for studying neuronal behaviour (a toy illustration follows this list).
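As a toy illustration of the entropy idea (not Prof Saad’s actual algorithm), the sketch below compares a naive plug-in entropy estimate with a Bayesian posterior-mean estimate under a symmetric Dirichlet prior, applied to hypothetical per-electrode spike counts; the electrode numbers and firing rates are invented for the example.

```python
import numpy as np
from scipy.special import digamma

def entropy_plugin(counts):
    """Naive (maximum-likelihood) entropy estimate, in nats."""
    p = counts / counts.sum()
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def entropy_bayes(counts, alpha=1.0):
    """Posterior-mean entropy under a symmetric Dirichlet(alpha) prior.

    Closed form: E[H] = psi(A + 1) - sum_i (a_i / A) * psi(a_i + 1),
    with posterior parameters a_i = n_i + alpha and A = sum_i a_i.
    """
    a = counts + alpha
    A = a.sum()
    return digamma(A + 1.0) - np.sum((a / A) * digamma(a + 1.0))

# Toy data: spike counts on 16 electrodes in two hypothetical conditions.
rng = np.random.default_rng(0)
spontaneous = rng.poisson(lam=2.0, size=16).astype(float)   # activity spread out
stimulated = np.zeros(16)
stimulated[:4] = rng.poisson(lam=8.0, size=4)               # activity concentrated

for name, counts in [("spontaneous", spontaneous), ("stimulated", stimulated)]:
    print(name, entropy_plugin(counts), entropy_bayes(counts))
```

The plug-in estimate is known to be biased downward when samples are sparse, which is one reason Bayesian-style estimators are preferred for this kind of neural data.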
Overall, Prof Saad highlighted the potential of mathematical tools to enhance our understanding of neuronal networks and advance biological machine learning.
FENS SATELLITE EVENT
NEU-ChiP visits Sicily
Representing the consortium at the 2023 International Symposium on Nonlinear Theory and Its Applications (NOLTA), a number of the group were able to present over the week-long event, generating plenty of discussion and interest from others, as well as pushing ideas forward amongst NEU-ChiP partners in various places, even on Mount Etna. Papers and abstracts from the event can be found here.

A special symposium for NEU-ChiP ran over two days as a hybrid meeting, enabling discussion with collaborators around the world.
Around the physical sessions the NEU-ChiP team found time to discuss ideas.





Around the conference we had the chance to see a little of the city of Catania, with some great food and an excellent conference dinner in a beautiful museum.

The NEU-ChiP team also found some time to enjoy the surrounding sights, with some adventures up to Mount Etna.




Aston Researcher Jacky Po at SigmaPhi 2023
I had an amazing time at SigmaPhi 2023 in Crete! As part of the NEU-CHiP consortium, I presented our work on “Inferring Effective Structure from Cortical Neural Network Activities” at this prominent conference in statistical physics. Such a fantastic platform for knowledge exchange and networking! #SigmaPhi2023 #NEUCHiP #NeuralNetworks #ConferenceExperience

Rhein Parri discussing NEU-ChiP
On World Brain Day, Prof Parri gave a brief overview of the cutting-edge work of his lab group and the EU-funded NEU-ChiP project.
Blog – Mathematical Modelling of the Brain
The brain is one of the most complex organs in the human body, responsible for everything from our thoughts and emotions to our ability to move and sense the world around us. It is a fascinating and mysterious structure, and scientists have been studying it for centuries in an attempt to understand how it functions.
One of the most recent and exciting developments in this area is the use of mathematical models to understand the brain. Mathematical models are simplified representations of complex systems, and they can be used to predict the behavior of those systems under different conditions.
In the context of the brain, mathematical models can help us understand how neurons communicate with each other, how neural networks form, and how the brain processes information. They can also be used to simulate the effects of drugs or other interventions on the brain, which could lead to the development of new treatments for neurological disorders.
One of the most famous examples of a mathematical model of the brain is the Hodgkin-Huxley model, developed in the 1950s. This model describes the behavior of neurons and their ability to transmit electrical signals. Since then, many other mathematical models have been developed, each one building on the knowledge gained from previous models.
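To give a flavour of what such a model looks like in practice, here is a minimal sketch that integrates the Hodgkin-Huxley equations with a simple forward-Euler scheme, using standard textbook parameter values; it is an illustration for this post rather than code from any NEU-ChiP experiment.

```python
import numpy as np

# Standard Hodgkin-Huxley parameters (voltages in mV, time in ms).
C_m = 1.0                                  # membrane capacitance, uF/cm^2
g_Na, g_K, g_L = 120.0, 36.0, 0.3          # maximal conductances, mS/cm^2
E_Na, E_K, E_L = 50.0, -77.0, -54.4        # reversal potentials, mV

# Voltage-dependent opening/closing rates of the gating variables m, h, n.
def alpha_m(V): return 0.1 * (V + 40.0) / (1.0 - np.exp(-(V + 40.0) / 10.0))
def beta_m(V):  return 4.0 * np.exp(-(V + 65.0) / 18.0)
def alpha_h(V): return 0.07 * np.exp(-(V + 65.0) / 20.0)
def beta_h(V):  return 1.0 / (1.0 + np.exp(-(V + 35.0) / 10.0))
def alpha_n(V): return 0.01 * (V + 55.0) / (1.0 - np.exp(-(V + 55.0) / 10.0))
def beta_n(V):  return 0.125 * np.exp(-(V + 65.0) / 80.0)

dt, T = 0.01, 50.0                         # time step and duration, ms
V, m, h, n = -65.0, 0.05, 0.6, 0.32        # typical resting-state initial values
I_ext = 10.0                               # injected current, uA/cm^2

trace = np.empty(int(T / dt))
for t in range(trace.size):
    # Ionic currents through the sodium, potassium and leak channels.
    I_Na = g_Na * m**3 * h * (V - E_Na)
    I_K = g_K * n**4 * (V - E_K)
    I_L = g_L * (V - E_L)
    # Forward-Euler update of the membrane potential and gating variables.
    V += dt * (I_ext - I_Na - I_K - I_L) / C_m
    m += dt * (alpha_m(V) * (1.0 - m) - beta_m(V) * m)
    h += dt * (alpha_h(V) * (1.0 - h) - beta_h(V) * h)
    n += dt * (alpha_n(V) * (1.0 - n) - beta_n(V) * n)
    trace[t] = V

print("peak membrane potential: %.1f mV" % trace.max())
```

With a constant injected current of this size the model fires repetitive action potentials, reproducing the spiking behaviour that Hodgkin and Huxley originally described for the squid giant axon.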
One of the key advantages of using mathematical models to study the brain is that they allow us to explore the behavior of the brain in a way that would be impossible with traditional experiments. For example, it would be difficult to study the behavior of millions of neurons in real-time, but a mathematical model can simulate this behavior and allow us to explore the consequences of different scenarios.
Mathematical models can also be used to test hypotheses in a more systematic way. Instead of relying on trial-and-error experiments, researchers can use mathematical models to predict the outcome of an experiment before it is conducted. This can save time and resources and lead to more efficient research.
Of course, there are also limitations to using mathematical models to study the brain. For example, mathematical models are only as good as the data that goes into them, and there is still much we don’t know about how the brain functions. Additionally, mathematical models can only provide a simplified representation of the brain, and it is important to remember that they are just one tool in the arsenal of neuroscientists.
In conclusion, the development of mathematical models to understand the brain is an exciting and rapidly evolving field of research. By using these models, scientists are gaining new insights into how the brain functions and how it can be treated when it malfunctions. While there are limitations to using mathematical models, their potential for advancing our understanding of the brain is enormous, and we can expect to see many more exciting developments in the years to come.