Thursday, June 14, 2007

Is the Brain a Spintronic Device?

Spintronics is a new paradigm of electronics based on the spin degree of freedom of the electron. Either adding the spin degree of freedom to conventional charge-based electronic devices or using the spin alone has the potential advantages of nonvolatility, increased data processing speed, decreased electric power consumption, and increased integration densities compared with conventional semiconductor devices.

All spintronic devices act according to a simple scheme: (1) information is stored (written) into spins as a particular spin orientation (up or down), (2) the spins, being attached to mobile electrons, carry the information along a wire, and (3) the information is read at a terminal. The spin orientation of conduction electrons survives for a relatively long time (nanoseconds, compared to the tens of femtoseconds over which electron momentum decays), which makes spintronic devices particularly attractive for memory storage and magnetic sensor applications, and potentially for quantum computing, where electron spin would represent a bit of information (called a qubit).
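A back-of-the-envelope way to see why these timescales matter: treating both spin orientation and momentum as exponentially decaying quantities (an idealization, with illustrative round-number time constants), spin information outlives momentum information by roughly five orders of magnitude:

```python
import math

# Illustrative timescales from the text (order-of-magnitude only):
# spin orientation of conduction electrons relaxes over nanoseconds,
# while electron momentum decays over tens of femtoseconds.
T_SPIN = 1e-9       # spin relaxation time, seconds (~nanoseconds)
T_MOMENTUM = 1e-14  # momentum relaxation time, seconds (~tens of fs)

def surviving_fraction(t, tau):
    """Fraction of an exponentially decaying quantity left after time t."""
    return math.exp(-t / tau)

# After 10 fs, momentum information is substantially degraded,
# while spin information is essentially untouched.
t = 1e-14
print(f"momentum remaining: {surviving_fraction(t, T_MOMENTUM):.3f}")
print(f"spin remaining:     {surviving_fraction(t, T_SPIN):.6f}")
print(f"spin outlives momentum by a factor of ~{T_SPIN / T_MOMENTUM:.0e}")
```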

Given the incredible intricacies of the brain's ultrastructure and the billions of years it has had to evolve, it is certainly conceivable that the brain utilizes spintronics. Of course, any talk of quantum mechanical effects in the brain is often greeted with scepticism, thanks to the shameless shenanigans of Roger Penrose and Stuart Hameroff involving Bose-Einstein condensates and microtubules. However, there may yet be a role for quantum mechanical effects in neural computation, and it may come in the form of spintronics. The idea is speculative but worth further consideration, bearing in mind that one potential problem is whether spin states are stable long enough to be used in neural computation.

Another Neural Prediction Challenge!

In a previous post, I noted that Jack Gallant had issued a Neural Prediction Challenge. Now, we have another challenge, this time from the Gerstner lab:

Here is our Challenge, open to everybody in neural modeling, machine learning, or similar fields:

- Is it possible to predict the timing of every spike that a neuron emits with 2 ms precision?
- Is it possible to predict the subthreshold membrane potential with a precision of 2 mV for arbitrary input?

Annotated training data and test stimuli from several cells under different stimulation conditions are available at http://icwww.epfl.ch/~gerstner/QuantNeuronMod2007/challenge.html

Important dates
* Data set available by March 16.
* Participants must submit their predictions by June 1st.
* Winner announced around June 10.
* Winning results will be presented at the workshop "Quantitative Neuron Modeling: Predicting Every Spike?", June 25/26.


Competition and Prizes
The competition is organized in several categories, called A, B, C, and D. Participants may run in one or several categories.

* 1st prize:

  o Four nights of hotel in Lausanne on Lake Geneva, June 23-27.
  o Free participation in the Quantitative Neuron Modeling workshop, June 25/26.
  o A 35-minute slot for a talk as an invited speaker at the workshop.

* 2nd prize:

  o Free participation in the Quantitative Neuron Modeling workshop, June 25/26.
  o Poster presentation and poster spotlight at the workshop.


Methods and Models:
The only aspect that counts for us is the quality of the prediction on the test set. In terms of methods, anything goes (machine learning, compartmental models, integrate-and-fire models, systems identification, etc.).



We hope that many people will take up the challenge.
Let the best model win!
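To make the 2 ms criterion concrete, here is a minimal sketch of a spike-timing match count. This is a simplified, hypothetical scoring rule of my own, not the official one; the actual evaluation measure is described on the challenge page.

```python
def coincident_spikes(predicted, observed, window=0.002):
    """Count observed spikes matched by a predicted spike within +/- window
    seconds. Each predicted spike may be used at most once (greedy matching
    on sorted spike times)."""
    predicted = sorted(predicted)
    observed = sorted(observed)
    matches, i = 0, 0
    for t in observed:
        # skip predicted spikes that are too early to match this observed spike
        while i < len(predicted) and predicted[i] < t - window:
            i += 1
        if i < len(predicted) and abs(predicted[i] - t) <= window:
            matches += 1
            i += 1  # consume this predicted spike
    return matches

# Toy spike trains (in seconds): three of four observed spikes are hit
observed  = [0.010, 0.035, 0.062, 0.090]
predicted = [0.011, 0.034, 0.070, 0.089]
hits = coincident_spikes(predicted, observed)
print(f"{hits}/{len(observed)} spikes predicted within 2 ms")
```

A real score would also penalize extra predicted spikes and correct for chance coincidences, which the official coincidence measure does.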


Synapse Resolution Whole-Brain Atlases


It is well known that the highest-resolution whole-brain atlases are currently at BrainMaps.org, which has been compared to a Google Maps for the brain. However, these atlases are at 0.46 microns per pixel, which is not sufficient to discern individual synapses; that requires nanometer resolution. So in this post, I will consider the problems associated with constructing a synapse-resolution (nanometer-resolution) whole-brain atlas.

There seem to be two fundamental hurdles to constructing a synapse resolution whole-brain atlas: 1) image acquisition, and 2) digital technologies for working with the images and serving them over a network.

The first hurdle encompasses the time bottleneck and section preparation. If each section is 50 nm thick, then a 10 mm mouse brain yields 20,000 sections, which requires some type of automation for section preparation. The time to scan a single 10 mm × 10 mm section at 1 MHz (one million pixels per second) comes out to 46 days, which is unacceptable: even with 20,000 TEMs (transmission electron microscopes) running in parallel, one per section, the complete scan would still take 46 days. An alternative is suggested by virtual microscopy solutions developed for light microscopy. One approach would be to scan across the section, acquiring one column at a time instead of a patchwork of small images for montaging. Another would be to construct a TEM with parallel scanning capabilities (parallel magnetic lenses and electron beams), so that an entire section could be scanned at once rather than patch by patch. This would require building a special type of TEM implementing features found in present-day virtual microscopy systems for LM (light microscopy), and thus a team of hardware and software specialists, in addition to physicists intimately acquainted with the physics behind TEM.
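The 46-day figure follows from simple arithmetic, assuming a pixel size of 5 nm (the value implied by the per-section pixel counts in this post) and a 1 MHz pixel clock:

```python
# Back-of-the-envelope scan-time estimate for one 10 mm x 10 mm section.
# Assumes 5 nm pixels (implied by the post's figures) and a 1 MHz pixel rate.
SECTION_SIDE_M = 10e-3   # 10 mm
PIXEL_SIZE_M   = 5e-9    # 5 nm
PIXEL_RATE_HZ  = 1e6     # 1 MHz, i.e. one million pixels per second

pixels_per_side = SECTION_SIDE_M / PIXEL_SIZE_M    # 2 million pixels per side
pixels_per_section = pixels_per_side ** 2          # 4e12 pixels per section
seconds = pixels_per_section / PIXEL_RATE_HZ
days = seconds / 86400
print(f"{pixels_per_section:.0e} pixels per section -> {days:.0f} days per section")
```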

The second hurdle involves digital technologies: even if a whole mouse brain could be acquired through TEM, current digital technologies could not cope with that much data. A single section is 4 × 10^12 pixels, which at 3 bytes per pixel comes out to about 12 terabytes uncompressed; 20,000 such sections amount to 8 × 10^16 pixels, or roughly 240 petabytes uncompressed, which is simply not feasible using today's digital technologies.
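Running these storage numbers explicitly (assuming 5 nm pixels and 3 bytes per uncompressed pixel; with 1-byte grayscale the totals shrink threefold but remain enormous):

```python
# Data-volume estimate for a synapse-resolution whole mouse brain atlas.
SECTIONS = 20_000             # 10 mm brain at 50 nm per section
PIXELS_PER_SECTION = 4e12     # 10 mm x 10 mm at 5 nm per pixel
BYTES_PER_PIXEL = 3           # uncompressed RGB; grayscale EM would be 1

section_bytes = PIXELS_PER_SECTION * BYTES_PER_PIXEL
total_pixels = SECTIONS * PIXELS_PER_SECTION
total_bytes = total_pixels * BYTES_PER_PIXEL

TB, PB = 1e12, 1e15
print(f"one section: {section_bytes / TB:.0f} TB uncompressed")
print(f"whole brain: {total_pixels:.0e} pixels, {total_bytes / PB:.0f} PB uncompressed")
```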

Let's consider a less ambitious proposal: TEM montaging of a 1 mm × 2 mm area at 2.5 nm resolution. TEMs typically acquire images in 2k × 2k patches, so each patch covers about 5 microns × 5 microns. A 2 mm × 1 mm area therefore requires 80,000 patches, and the montaged image would be 800k × 400k pixels. This is already a problem: common file formats like TIFF and JPEG have size limits, so acquiring such a large image would necessitate a non-standard file format, which in turn makes serving the images over the web more problematic. The largest images at BrainMaps.org are around 120k × 100k pixels, which works out to 3 GB as a JPEG-compressed TIFF (or 30 GB uncompressed), already near the 4 GB limit of the TIFF format; images much exceeding 120k × 100k are therefore going to present a problem.
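The montage bookkeeping can be verified directly (taking "2k" as 2000 pixels here, so that each patch covers exactly 5 microns on a side):

```python
# Montage bookkeeping for a 2 mm x 1 mm area at 2.5 nm per pixel,
# acquired as 2k x 2k TEM patches (~5 microns on a side).
PIXEL_NM = 2.5
PATCH_PX = 2000               # "2k" patch, so 2000 * 2.5 nm = 5 um per side
AREA_NM = (2e6, 1e6)          # 2 mm x 1 mm, in nanometers

patch_side_nm = PATCH_PX * PIXEL_NM
patches = [round(side / patch_side_nm) for side in AREA_NM]
montage_px = [round(side / PIXEL_NM) for side in AREA_NM]

print(f"patches: {patches[0]} x {patches[1]} = {patches[0] * patches[1]:,}")
print(f"montage: {montage_px[0]:,} x {montage_px[1]:,} pixels")

# Classic TIFF uses 32-bit offsets, capping files at 4 GB:
uncompressed_gb = montage_px[0] * montage_px[1] * 3 / 1e9
print(f"uncompressed RGB size: ~{uncompressed_gb:.0f} GB (vs. the 4 GB TIFF limit)")
```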

In conclusion, for purposes of obtaining information about whole-brain connectivity, a nanometer-resolution whole-brain scan is required, and current-day tracer experiments are suboptimal and will always leave room for ambiguities that can only be resolved by completely mapping every synapse and axon in the brain. However, constructing a synapse resolution (or nanometer resolution) whole-brain atlas for even a mouse brain is so formidable as to be seemingly beyond today's technological capabilities. Maybe in 10-20 years.

Wednesday, February 07, 2007

Pubmed Pet Peeves

Suggestions for the Pubmed developers:

1) Assign a unique author ID, so that one can pull up all publications by a given individual rather than by all individuals who happen to share the same name.

2) Add the ability to export references in BibTeX format. (Google Scholar already does this.)

3) Include the number of times each paper has been cited. (Google Scholar does this too.)

What is a Brain Area?

What is a "brain area"? Recently, I have become aware of the inadequacy of the concept of a "brain area", or at any rate have come to question its basis. This basis is three-fold, as noted by Felleman and Van Essen: cortical areas (and brain areas in general) are defined by 1) connectivity, 2) functional maps, and 3) chemical or architectonic signatures. However, for the most part, parcellations of the primate (and non-primate) brain have been based on studies of Nissl- or myelin-stained material that are over a century old, and investigators have come up with widely different parcellation schemes for the brain, which, in my opinion, is a prominent warning sign that the notion of a "cortical area" is ill-defined. Further anatomical studies of the brain have confirmed this point to me. So while I recognize the utility of conventionally naming different brain areas on the basis of Nissl-stained material or otherwise, I do not believe we currently possess an adequate conceptual understanding of what really constitutes a "brain area". In early sensorimotor areas the concept seems applicable, since sensory receptor sheets map onto well-defined areas of the cortex; but other parts of the brain are not like this, and there is no a priori reason to expect that association and limbic regions should parcellate nicely into discrete, non-overlapping brain areas.

Part of the problem involves considering useful alternatives to this notion of discrete, non-overlapping brain areas, which is prevalent in the neuroscience community and which heavily biases interpretations of experiments. It is largely a conceptual problem, but I am confident that a revolution in our notion of "brain area" will be forthcoming in the near future. Such an overhaul of this precious concept is requisite to a better understanding of the brain.

What I find amusing is that neuroscience textbooks never address this conceptual issue, even though it is widely recognized by many prominent neuroscientists as a central problem. This has the peculiar effect that students of neuroscience often learn their subject thinking that all of the fundamental conceptual issues have been worked out and that the field rests on a firm foundation. This is not the case, and I would not be surprised if this shaky foundation crumbles, and if many of the "mysteries" of the brain's organization and function, when viewed in a new light and on a new foundation, turn out not to be that mysterious after all, but instead obey a very precise and well-defined logic and reason.

The observation that the concept of "brain area" is ill-defined means, in part, that current attempts to analyze whole-brain connectivity using graph theory are based on incorrect data and incorrect assumptions, since we may legitimately question whether the nodes in the graph have any real meaning. Claims that "the brain is a small-world network", purported by some, are therefore empty, and merely follow the recent fad in "network science", where anyone and everyone attempts to show that their favorite system is a so-called small-world network. How unoriginal and blasé! If only these people could think for themselves instead of parroting the latest fad. The worst part is when such nonsense actually gets published, since it misleads others (usually laymen, but also some neuroscientists) who don't know any better.
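For context on what the small-world claim even asserts: it is usually operationalized by computing a graph's clustering coefficient and characteristic path length and comparing them to those of a random graph of the same size and density. A minimal sketch on a toy graph (purely illustrative; no real connectivity data):

```python
from collections import deque

def clustering(adj):
    """Average local clustering coefficient of an undirected graph
    given as {node: set(neighbors)}."""
    total = 0.0
    for v, nbrs in adj.items():
        k = len(nbrs)
        if k < 2:
            continue
        # count edges among v's neighbors (each pair once)
        links = sum(1 for a in nbrs for b in nbrs if a < b and b in adj[a])
        total += 2.0 * links / (k * (k - 1))
    return total / len(adj)

def avg_path_length(adj):
    """Mean shortest-path length over all node pairs (BFS from each node)."""
    total, pairs = 0, 0
    for src in adj:
        dist = {src: 0}
        q = deque([src])
        while q:
            v = q.popleft()
            for w in adj[v]:
                if w not in dist:
                    dist[w] = dist[v] + 1
                    q.append(w)
        total += sum(d for n, d in dist.items() if n != src)
        pairs += len(dist) - 1
    return total / pairs

# Toy ring-with-shortcuts graph (hypothetical, not brain data)
edges = [(0,1),(1,2),(2,3),(3,4),(4,5),(5,0),(0,2),(3,5),(0,3)]
adj = {}
for a, b in edges:
    adj.setdefault(a, set()).add(b)
    adj.setdefault(b, set()).add(a)

print(f"clustering: {clustering(adj):.3f}")
print(f"avg path length: {avg_path_length(adj):.3f}")
```

Whether such statistics mean anything for the brain depends, of course, on whether the nodes are well-defined in the first place.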
