Spatially embedded recurrent neural networks reveal widespread links between structural and functional neuroscience findings
There is a growing need to better understand where the intersections between biological and artificial intelligence start and end. One exciting avenue of research in this area is to instantiate artificial models of cognition with constraints we observe in neuroscience. Doing this helps us understand the extent to which constraints formed through our evolution and development shape the fundamental architecture of human intelligence.
To do this, we recently developed a new model called the spatially embedded recurrent neural network (seRNN), published in 2023 in Nature Machine Intelligence. In this work, we give neural networks a three-dimensional geometric structure along with communication constraints, directly capturing how the brain's structural geometry can explain numerous otherwise disparate observations across sub-fields of neuroscience. This work is leading to many cutting-edge downstream projects that we are currently developing, and has been featured in several national and international outlets. You can read about this here:
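To make the idea of spatial embedding concrete, the sketch below shows one simple way a network's wiring could be tied to its geometry: place hidden units at points in 3D space and penalise each recurrent weight in proportion to the Euclidean distance it spans, so that optimisation favours short, local connections. The grid layout, sizes, and penalty form here are illustrative assumptions, not the published seRNN implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Embed 27 hidden units at the vertices of a 3x3x3 grid (hypothetical layout).
coords = np.array(
    [[x, y, z] for x in range(3) for y in range(3) for z in range(3)],
    dtype=float,
)
n = len(coords)

# Pairwise Euclidean distance between every pair of units.
dist = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)

# A random recurrent weight matrix standing in for trained weights.
W = rng.normal(size=(n, n))

# Distance-weighted L1 penalty: long-range connections cost more,
# so training under this term prunes them first.
spatial_penalty = np.sum(np.abs(W) * dist)

# Plain L1 penalty for comparison (no spatial embedding).
l1_penalty = np.sum(np.abs(W))
```

In a full training loop, a term like `spatial_penalty` would simply be added to the task loss with a small coefficient, which is what distinguishes a spatially embedded network from one regularised by sparsity alone.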
It is not easy to know how best to think about how and why individual differences arise between us as we develop. One way is to think at the level of the brain and ask: what similarities and differences do we observe between us? Better still is to build a model that can actually simulate these similarities and differences, so that we can explore the space of all possible brains that could develop.
In this work, published in 2021 in Nature Communications, we utilised a technique called generative network modelling to simulate brain connectivity formation in a large sample of neurodiverse children.
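As a rough illustration of how a generative network model grows connectivity, the sketch below adds edges one at a time with probability shaped by a wiring-cost term (distance raised to an exponent, here called `eta`) and a topological value term (here a normalised matching index raised to an exponent `gamma`). The node placement, parameter values, and simplified matching index are assumptions for illustration, not the fitted parameters from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

n = 20
coords = rng.uniform(size=(n, 3))                       # random 3D node positions
dist = np.linalg.norm(coords[:, None] - coords[None, :], axis=-1)
np.fill_diagonal(dist, np.inf)                          # disallow self-connections

eta, gamma = -2.0, 0.3                                  # cost and value exponents (assumed)
A = np.zeros((n, n))                                    # start from an empty network

def matching(A):
    """Approximate matching index: shared-neighbour overlap per node pair."""
    deg = A.sum(axis=1)
    shared = A @ A                                      # shared neighbours of i and j
    denom = deg[:, None] + deg[None, :] - 2 * A
    with np.errstate(divide="ignore", invalid="ignore"):
        K = np.where(denom > 0, shared / denom, 0.0)
    return K + 1e-5                                     # small offset avoids zero probabilities

for _ in range(40):                                     # grow 40 undirected edges
    P = (dist ** eta) * (matching(A) ** gamma)          # cost x value wiring rule
    P[A > 0] = 0                                        # skip edges that already exist
    P = np.triu(P, 1)                                   # undirected: sample upper triangle
    flat = P.ravel() / P.sum()
    i, j = np.unravel_index(rng.choice(P.size, p=flat), P.shape)
    A[i, j] = A[j, i] = 1
```

Fitting a model of this kind to data then amounts to searching over the exponents so that the simulated networks reproduce statistics of observed brain networks, which is what lets individual differences be expressed as differences in a small number of wiring parameters.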
For a condensed summary, you can see my talk at the University of Edinburgh here.