Friday, January 25, 2013

Human Brain Project or Euro Drain Project?

Thursday, January 24th, 2013, saw the announcement of €0.5bn in funding for what has been a media darling for the past five years: the Human Brain Project (HBP).

For a sense of scale of what €500 million means, note that the public component of the Human Genome Project cost $3bn and the Large Hadron Collider cost $7.5bn. Have a look at some of the other mega-projects in science.

Despite the sensationalist title of this post, I'll try to stay balanced. All said and done, this humble postdoc is looking forward to the trickle-down effects of mega-funding!

Let's start with the criticisms. I see the common criticisms and voices of support as falling into four broad camps.

Too many assumptions, too little detail
The first camp is of the opinion that we simply don't have enough knowledge to simulate a brain that is faithful to nature. For example, to digital neuroanatomists such as Nobel Laureate Bert Sakmann, the BBP assumes too much about the statistics of anatomical connections within a cortical column. Without detailed microscopy-driven reconstruction of columnar connectivity (a goal of connectomics), Sakmann believes that the eventual computer simulation of columnar function is merely an unverifiable candidate for how a true column behaves. To generalize a little, I would say that his attitude is similar to that of Tony Movshon (see Movshon vs. Seung), who believes that neuroscience is not yet ready for unified theories, and must try to become comfortable with its identity as a collection of cottage industries for another half-century or more.

Too much detail, not enough abstraction
The second camp consists of astute modelers such as Gustavo Deco, who seem to believe that running experiments on a simulated brain will only tell us what we already know. They question the epistemic value of the simulate-everything approach. Modelers in general tend to believe that artful abstractions alone — abstractions that leave noisy details out — will eventually lead to testable predictions and new knowledge. Personally, I believe this criticism comes from a failure to grasp the difference between modeling and simulation with the purpose of providing an in-silico experimental platform.

Markram is difficult and unconventional
The third camp consists of the majority of scientists, who feel that Henry Markram is hard to work with and doesn't subject enough of his findings to peer review. One recent example of his fiery temper is the debate between him and IBM's Dharmendra Modha about simulating the cat brain, which turned acrimonious.

Technological consequences
Finally, we have a camp of pragmatic technocentrists, who point out that, as is usually the case with ambitious projects, there are valuable technological by-products, such as the patch-clamp robot (the "perfect postdoc") and the IBM Blue Gene. Thus, they believe that although the HBP might not achieve its stated goals, the technology left in its wake might ultimately help neuroscientists. Indeed, this could be an amazing learning experience for neuroscientists making the transition from cottage-industry-scale to industrial-revolution-scale knowledge creation, as certain other fields have done.

All of the above views are important, but there is yet another aspect that I would like to bring up. In one sentence, I believe that the HBP is operating in an epistemological gap. Let me explain by analogy.

LHC? Yes. Standard Model? No
I personally think of the HBP as the Large Hadron Collider (LHC) of neuroscience. The LHC creates states of extremely high energy that are rare on earth but nevertheless known to be abundant in the known universe. Now, there are two crucial points about studying these high-energy states for the purposes of our analogy:

1. The properties of matter and interaction in these states are not completely known, but we have strong hypotheses about them based on theoretical work.

2. These states are impossible to access naturally, and therefore to characterize directly, with known technology.

Thus, we have three recourses to the two problems:
(A) Develop theories that make testable predictions as we wait for technology to improve so that we may test them.

(B) Improve the technology needed to access these hard-to-access states*, or

(C) Simulate, with high fidelity, everything we know about these states, and then do controlled experiments on the simulated system.

Having said this much, I think the similarities between the LHC and the HBP become immediately obvious. We don't know everything about neural circuits, but we have a few testable predictions. We cannot simultaneously access and measure the whole brain at all levels. Further, both the HBP and the LHC adopt strategy (C) in response to problems 1 and 2, except that the HBP's experimental system is a computer simulation rather than a physical one.

*Incidentally, a lesser-known project that aims to record from all neurons in the mouse brain provides an exciting and ambitious solution adopting strategy (B).

However, and finally coming to my point, unlike the LHC, whose flagship experiments tested the Standard Model of particle physics, neuroscientists have (I) no agreed-upon hypothesis that is difficult to test in vivo or in vitro but easy to test in silico. Further, since the HBP is a computer simulation, we have (II) no agreed-upon way to declare the simulation faithful to nature. This is what I mean by operating in an epistemological gap.

So now that the HBP has been funded, what can cottage-industry-scale neuroscientists do, other than wringing hands, pointing fingers, twiddling thumbs, and umm, writing blogs?

Downscaling the HBP on your ordinary cluster
First, the computational neuroscience and high-performance computing communities could independently simulate much smaller-scale brains or brain phenomena, so as to work out the details that would feed into the HBP.

There are two ways to downscale the HBP: reduce the complexity of each neuron while preserving the order of magnitude of the brain's size (e.g., simulate a million integrate-and-fire neurons, as a Canadian group has recently done), or reduce the number of neurons while preserving the complexity of each (as the Blue Brain Project, the precursor to the HBP, has done).
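To make the first option concrete, here is a minimal sketch of what a downscaled, single-machine simulation might look like: a small network of leaky integrate-and-fire neurons with random sparse connectivity. All parameter values and the connectivity scheme are illustrative assumptions of mine, not taken from any of the projects mentioned above.

```python
import numpy as np

def simulate_lif_network(n_neurons=1000, t_steps=1000, dt=1e-3, seed=0):
    """Simulate a sparsely connected network of leaky integrate-and-fire
    neurons with forward-Euler integration. Returns a (t_steps, n_neurons)
    boolean spike raster."""
    rng = np.random.default_rng(seed)

    # Membrane parameters (illustrative values, in volts and seconds)
    tau_m = 20e-3      # membrane time constant
    v_rest = -70e-3    # resting potential
    v_thresh = -50e-3  # spike threshold
    v_reset = -65e-3   # post-spike reset

    # Sparse random connectivity: each weight is the instantaneous voltage
    # jump (V) caused by one presynaptic spike; ~2% connection probability.
    w = rng.normal(0.0, 0.5e-3, size=(n_neurons, n_neurons))
    w *= rng.random((n_neurons, n_neurons)) < 0.02

    v = np.full(n_neurons, v_rest)
    spikes = np.zeros((t_steps, n_neurons), dtype=bool)

    for t in range(t_steps):
        # Noisy external drive, expressed as an equivalent voltage offset;
        # its mean pushes the steady state just above threshold.
        i_ext = rng.normal(21e-3, 2e-3, size=n_neurons)
        # Leaky integration toward v_rest + i_ext
        v += dt / tau_m * (v_rest - v + i_ext)
        # Recurrent input from the previous step's spikes
        if t > 0:
            v += w @ spikes[t - 1]
        # Threshold crossing -> spike, then reset
        fired = v >= v_thresh
        spikes[t] = fired
        v[fired] = v_reset

    return spikes

spikes = simulate_lif_network()
print("mean firing rate (Hz):", spikes.mean() / 1e-3)
```

Scaling this naive dense-matrix version to a million neurons would already require sparse data structures and careful memory layout, which is exactly the kind of detail such downscaled efforts could work out before it matters at HBP scale.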

Agree on killer apps
Second, the HBP could organize regular panels or boards where leading neuroscientists could come up with ways to bridge the epistemological gap that go beyond hand-waving of the "we will test the effect of drugs on diseased brains" variety. Specifically:

(I) Propose and vote on a battery of tests that can benchmark the fidelity of the simulation.
(II) Propose and vote on a transparent list of the top 10 hypotheses that cannot be tested in vivo but can be tested on the HBP infrastructure.

So to summarize, neuroscience now has an LHC in the making but doesn't have a Higgs. In the 10 years that it might take to get the HBP built, those of us who wish to use it could work on formulating, as exactly as possible, our favorite top 10 hypotheses to test in silico.


Mainak said...

Trickle down effects, haha! I always thought grants were a zero-sum game.

It boggles my mind why Prof. Markram doesn't crowdsource his computing. It'd be much cheaper that way. Have an agreement with EA Sports: every time you install or run FIFA, you agree to dedicate part (say 5%) of your computing resources to the Blue Brain Project - or something of that sort.

Anyway, here's my two cents. Mega-projects, as I understand them, are better suited to _engineering_ projects (where you have clear-cut goals, very good economic/business motivations, and the requisite knowledge to achieve them) as opposed to _science_ projects.

The only possible exception, where a science mega-project succeeded in the true sense of the word, is the Manhattan Project, but that too was driven by military motivations!

The Blue Brain Project has neither an economic nor a military driver. So, I really, really wonder ...

Pavan said...

On trickle-down effects: they're giving postdoc grants.

Crowdsourced computing is very interesting! Check out SETI@home, which got started even before the buzzwords of big data, crowdsourcing, etc. were born. The only bottleneck I see is that the HBP might require precise timing of information exchange between cores (with each core simulating a neuron). I'm not knowledgeable enough to tell whether this is possible with current network-computing tools, let alone distributed computing on PCs.
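That timing bottleneck can be made concrete with a back-of-envelope calculation. The latency figures below are assumptions I've chosen purely for illustration (a plausible internet round-trip time versus a fast HPC interconnect), not measurements of any real system:

```python
# Back-of-envelope: why lock-step spike exchange is hard over the internet.
# All latency figures are illustrative assumptions, not measurements.

dt = 1e-4                 # simulation timestep: 0.1 ms of biological time
internet_rtt = 50e-3      # assumed round trip between volunteer PCs (s)
cluster_rtt = 2e-6        # assumed round trip on an HPC interconnect (s)

# If every timestep must end with a global spike exchange, latency alone
# caps the number of simulated steps per wall-clock second at 1 / RTT.
steps_per_sec_internet = 1 / internet_rtt   # 20 steps/s
steps_per_sec_cluster = 1 / cluster_rtt     # 500,000 steps/s

# Steps needed per second of biological time, and the resulting slowdown
# (or speedup) relative to real time.
steps_needed = 1 / dt                       # 10,000 steps per biological second
print("internet slowdown: %.0fx" % (steps_needed / steps_per_sec_internet))
print("cluster speed: %.1fx real time" % (steps_per_sec_cluster / steps_needed))
```

Under these assumed numbers, a lock-step volunteer-computing version would run hundreds of times slower than biological time on latency grounds alone, which is why SETI@home-style projects favor embarrassingly parallel workloads that need no per-step synchronization.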

I agree in general that engineering lends itself more readily to mega-projects. Even the Human Genome Project, the connectome project, and the various brain-mapping initiatives of the Allen Institute may be considered mega-engineering projects in the service of science.

Pavan said...

Interestingly, the Allen Institute's budget is about the same as the HBP's: