Why you should care
Because new brain maps mean new tools to study Alzheimer’s, Parkinson’s and other neurological diseases.
The human brain is the most complex computational device in the known universe (for now). It’s so powerful that it has managed to design its greatest rival — the supercomputer. And before the machines take over and start doing whatever the hell they want, scientists are harnessing them to understand their creators, crunching vast amounts of data to produce maps of the human brain with more intricacy and accuracy than ever before.
Neuroimaging scientists liken their work to that of early cartographers mapping the globe. It’s an appealing analogy: If our current understanding of the brain is akin to the charts that Columbus used, then the ultimate goal is something like a Google Maps of the brain. Imagine a Street View that allows us to visualize information traveling along neural pathways, or traffic updates that alert us to breakdowns and offer detour options. There’s a whole lot of history between Columbus and Google, and so a whole lot of science needs to be done.
Fortunately, we don’t need a new imaging machine to get started. Even with existing technology, the application of big data techniques to brain mapping can allow researchers “to derive ever more information” from the “fairly primitive” signals coming off scanners, says Dr. Arthur Toga, who runs USC’s Institute of Neuroimaging and Informatics. Magnetic resonance imaging (MRI) scanners alone can employ different “pulse sequences” to learn separately about anatomy, functional activity and even brain wiring. Toga’s work combines and compares all of this information from hundreds or even thousands of individuals, and layers in data from other brain-imaging machines such as PET scans (technically, positron emission tomography), as well as participants’ genetic information and behavioral characteristics. And by scanning some individuals multiple times throughout their lives, his team can even add the dimension of time to its models.
Data sharing between neuroscientists enables these efforts, says Dr. David Van Essen, professor of neuroscience at Washington University in St. Louis. As a principal investigator of the National Institutes of Health’s $40 million Human Connectome Project (HCP), Van Essen oversaw a collaboration between 10 universities, including Toga’s lab, then based at UCLA. The project collated MRI scans and nearly 500 behavioral, cognitive and demographic indicators from more than 1,000 individuals to produce one of the richest arrays of brain data to date. The data collection is wrapping up, so now it’s time to crowd-source the mining: The HCP has uploaded 10 million gigabytes of data to the cloud so that anyone (yes, even you) can log in to the mother lode. One of the first studies based on the HCP data mapped nearly 100 previously unidentified functional areas of the cerebral cortex.
“One of the remarkable things about the human brain is how much it differs from one individual to the next,” says Van Essen. Even the macro-level shape of how the brain matter of the cerebral cortex is folded into the skull differs between identical twins. This poses one of the greatest challenges of brain mapping: If every brain is unique, how can we distinguish between irrelevant and important characteristics? For instance, which of the myriad unique features of an Alzheimer’s patient’s brain hint at the root causes of the disease, and which are just personal quirks?
That’s where big data proves its worth. Researchers can build a model of the “average” brain, enabling them to detect variations from the norm. For example, Toga and his team have created the first time-lapse map of the progression of Alzheimer’s in a generalizable case, enabling them to identify the neurological footprint of the disease up to 10 years before symptoms appear. “The greatest hope for at least slowing down the progression of the disease — if not curing it — would be to get at the disease in its earliest stages,” Toga says, “prior to when cognitive decline really manifests itself.” Researchers are also using supercomputers to hunt for similar signatures, or biomarkers, that might indicate Parkinson’s disease, autism and more. Similar approaches could yield all manner of insights in neurology, and even other branches of medicine, as we learn to better compile and understand data from hundreds, thousands or even millions of individuals at once.
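The core idea — averaging many brains to define a norm, then flagging scans that deviate from it — can be illustrated with a toy sketch. This is not any lab’s actual pipeline; it uses made-up NumPy arrays standing in for voxel intensity maps, an invented “atrophy” signal, and an arbitrary deviation threshold, purely to show the statistical principle:

```python
import numpy as np

# Toy illustration: build an "average brain" from many individuals'
# scans, then flag voxels in a new scan that stray far from the norm.
rng = np.random.default_rng(0)

# Hypothetical data: 200 healthy scans, each flattened to 1,000 voxels.
population = rng.normal(loc=100.0, scale=5.0, size=(200, 1000))

# The "average" brain: voxel-wise mean and standard deviation.
mean_map = population.mean(axis=0)
std_map = population.std(axis=0)

# A new scan with an artificial abnormality injected at voxels 100-109.
new_scan = rng.normal(loc=100.0, scale=5.0, size=1000)
new_scan[100:110] -= 40.0  # simulated tissue-loss signal

# Voxel-wise z-scores: how many standard deviations from the norm.
z = (new_scan - mean_map) / std_map

# Flag voxels beyond 4 standard deviations as candidate abnormalities.
flagged = np.where(np.abs(z) > 4.0)[0]
print(flagged)  # the injected voxels should stand out
```

In practice the challenge Van Essen describes is exactly why this naive version fails on real data: brains must first be warped into a common coordinate space, and the “norm” must account for legitimate individual variation, not just measurement noise.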
However, our ignorance of what’s between our ears remains vast. Amazingly, “we still don’t even know all of the cell types in the brain,” says Dr. Guoying Liu, director of the MRI program at the National Institute of Biomedical Imaging and Bioengineering. Indeed, supercomputers don’t provide “a window into the brain with unlimited resolution,” says Toga. Big-data strategies are starting to hit the limits of current imaging technology. The HCP compiled data to within the precision of a single voxel, or 3D pixel, but more sophisticated machines are needed for greater resolution. In 2013, President Obama announced the BRAIN Initiative to coordinate millions of research dollars behind this very problem, but it’s likely to be a long wait: The majority of BRAIN-funded project teams are only working on small animals’ brains, while the one working on humans, co-led by Liu, “is still at a very early stage.”
The challenge facing neuroimagers is far more complex than mapping the Earth’s continents and oceans. It’s like trying to infer how human society does (and does not) work by studying a satellite image. And because of the diversity of human brains, there are seven billion or so unique worlds to map. Faced with that kind of cartographic challenge, Columbus might never have set sail.