Brains in Action

February 2014
Source: The Scientist Staff

Dissecting how the brain works is tricky. Genetic engineering techniques allow researchers to tag neurons of interest with fluorescent markers that glow upon neural activation, but capturing those brain areas in action, both as distinct neural circuits and at the resolution of single cells, can be hard. Studying the neural activity of Drosophila, for example, involves microsurgery to remove the top of a fly’s head to expose its brain, a task so delicate that only practiced technicians with steady hands can complete it successfully. To monitor brain activity in mice and other animals, neuroscientists often rely on a well-established technique called patch clamping, which can ignite career-questioning frustration as electrical noise spoils seemingly good data and cells begin to die after only a few seconds of recording.

“But today is an age of automation,” says MIT’s Ed Boyden. “In a lot of other fields, like genomics and synthetic biology and pharmacology, automation is at the core of success,” he says; neuroscience is no exception. In recent years, scientists have begun to design and build robotic systems to perform these arduous techniques, enabling high-throughput experiments that record from hundreds of neurons, or even dozens of animals, in real time.

“The new thing we can do now is study how the neurons talk to each other across the entire brain,” says Misha Ahrens, a neuroscientist at the Howard Hughes Medical Institute’s Janelia Farm Research Campus in Ashburn, Virginia. “That’s a departure from previous studies, where people tended to look at many neurons, but a small fraction of the entire number.”
