I had the immense privilege to attend the annual meeting of the Organization for Human Brain Mapping (OHBM) in Geneva last week. OHBM is a fantastic venue to see the latest and greatest developments in the field of human neuroimaging, and this year was no exception. The program was jam-packed with keynote lectures, symposia, poster presentations, educational courses, and many informal discussions. It is almost impossible to describe the full breadth of the meeting, but I will try to summarize the ideas and developments that were most interesting to me.
Three broad themes emerged for me: big data, methodological rigor, and new approaches to science. Big data was pervasive throughout the meeting, with a large number of posters making use of huge databases and special interest talks focussing on the practicalities and promises of data sharing and meta-analysis. It seems like the field is reacting to the widely discussed reproducibility crisis (http://www.apa.org/monitor/2015/10/share-reproducibility.aspx) and to prominent review articles about the shortcomings of the small sample sizes that have so far been common practice in neuroimaging (http://www.nature.com/nrn/journal/v14/n5/full/nrn3475.html). These efforts seem well suited to firmly establish many features of brain structure and function, especially around typical brain development. In the coming years, this is likely to influence publishing standards, education, and funding priorities on a wide scale. I hope that this will not lead to the infant being ejected with the proverbial lavational liquid. There is still a need to study small samples of rare populations that give a better insight into biological mechanisms, e.g. rare genetic disorders. Further, highly specific questions about cognitive and brain mechanisms that require custom assessments will probably continue to be addressed in smaller-scale studies before being rolled out to large samples.
A related issue that probably also arose from the replication discussions is methodological rigor. Symposia at OHBM2016 discussed many issues that had been raised in the literature, like the effect of head motion on structural and functional imaging, the comparison of post-mortem anatomy with diffusion imaging, and procedures to move beyond statistical association. Efforts to move towards greater transparency of analysis strategies were also prominently discussed. This includes sharing of more complete statistical maps (more info here: http://nidm.nidash.org/specs/nidm-overview.html – soon to be available in SPM and FSL), tools for easier reporting and visualisation of analysis pipelines, and access to well-described standard datasets. I can imagine a future in which analyses are published in an interactive format that allows for access to the data and the possibility to tweak parameters to assess the robustness of the results.
These exciting developments also pose some challenges. The trend towards large datasets requires a new kind of analytic and theoretical approach, which leads to a clash between the traditional scientific approach and big-data science. Let me expand: the keynote lectures presented impressive work that was carried out in the traditional hypothesis-test-refine-hypothesis fashion. For instance, keynote speaker Nora Volkow of the National Institute on Drug Abuse presented a comprehensive account of dopamine receptors in human addiction based on a series of elegant, but conceptually simple, PET experiments. In contrast to the traditional approach of collecting a few measurements to test a specific hypothesis, big data covers a lot of different measurements with a very broad aim. This creates the problem of high-dimensional data that need to be reduced to reach meaningful conclusions. Machine learning approaches have emerged as a relatively new addition to the human neuroscience toolkit to tackle the pervasive problems that come with this dimensionality. There is great promise in these methods, but standards for reliability still need to be established, and new theoretical developments are needed to integrate these findings with current knowledge. Hopefully, there will be closer communication between method developers and scientists applying these tools to human neuroscience as these methods mature.
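To make the dimensionality problem concrete, here is a minimal sketch (not any particular study's pipeline, and assuming scikit-learn and NumPy are available) of the typical situation: far more features than subjects, reduced before a cross-validated classifier is fit. The numbers are invented purely for illustration.

```python
# Toy illustration of the high-dimensional problem in big-data neuroimaging:
# many more features (e.g. voxels) than subjects. Synthetic data only.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
n_subjects, n_voxels = 100, 50_000          # far more features than samples
X = rng.standard_normal((n_subjects, n_voxels))
y = rng.integers(0, 2, n_subjects)          # hypothetical group labels

# Reduce dimensionality first, then classify; cross-validation guards against
# the overfitting that is almost guaranteed with 50,000 features and 100 subjects.
model = make_pipeline(PCA(n_components=20), LogisticRegression(max_iter=1000))
scores = cross_val_score(model, X, y, cv=5)
print(f"Mean cross-validated accuracy: {scores.mean():.2f}")
```

With purely random data like this, accuracy should hover around chance; the point is only that some form of dimensionality reduction and out-of-sample validation is unavoidable before any meaningful conclusion can be drawn from datasets of this shape.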