
Neuro… - Blog Posts

2 years ago

i wanna know what everyone’s majors are mutuals i want to know i love you and i’m interested


8 months ago

Please draw thinkfastshipping >,<


They gunna do shenanigans


3 years ago

Cranial nerves mnemonic

On, on, on, they travelled and found Voldemort guarding very ancient horcruxes.

Olfactory, optic, oculomotor, trochlear, trigeminal, abducens, facial, vestibulocochlear, glossopharyngeal, vagus, accessory, hypoglossal.

On - Olfactory nerve (CN I)

On - Optic nerve (CN II)

On - Oculomotor nerve (CN III)

They - Trochlear nerve (CN IV)

Travelled - Trigeminal nerve (CN V)

And - Abducens nerve (CN VI)

Found - Facial nerve (CN VII)

Voldemort - Vestibulocochlear nerve (CN VIII)

Guarding - Glossopharyngeal nerve (CN IX)

Very - Vagus nerve (CN X)

Ancient - Accessory nerve (CN XI)

Horcruxes - Hypoglossal nerve (CN XII)


3 years ago
I Was Reading About Francis Crick And James Watson’s Discovery Of DNA In 1953…and Admiring Santiago’s

I was reading about Francis Crick and James Watson’s discovery of the structure of DNA in 1953…and admiring Santiago’s beautiful drawings of neurons…and Alan Hodgkin and Andrew Huxley’s mathematical description of how action potentials propagate along a neuron…I couldn’t help but think how romantic it all is. To me it’s so interesting learning about the process of discovery. It’s incredible because all these people were just like us—students. It’s romantic because it’s human—a human experience—an insatiable thirst for knowledge, curiosity that knows no end. A perseverance to succeed. The ultimate quest to generate a novel idea before anyone else does. How can anyone say that science is not poetic? Science is poetry written in a different language, an esoteric one at that. But poetry nonetheless.
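
Hodgkin and Huxley’s result can even be made concrete in a few lines of code. Below is a minimal single-compartment sketch of their 1952 equations with the standard squid-axon parameters, integrated by forward Euler; their full work also couples this to a cable equation to describe propagation along the axon, which is omitted here.

```python
import math

# Hodgkin-Huxley single-compartment model (standard squid giant axon
# parameters). Illustrative sketch only; propagation along the axon
# would require the additional cable-equation term.

C_M = 1.0                            # membrane capacitance, uF/cm^2
G_NA, G_K, G_L = 120.0, 36.0, 0.3    # max conductances, mS/cm^2
E_NA, E_K, E_L = 50.0, -77.0, -54.4  # reversal potentials, mV

def alpha_m(v): return 0.1 * (v + 40.0) / (1.0 - math.exp(-(v + 40.0) / 10.0))
def beta_m(v):  return 4.0 * math.exp(-(v + 65.0) / 18.0)
def alpha_h(v): return 0.07 * math.exp(-(v + 65.0) / 20.0)
def beta_h(v):  return 1.0 / (1.0 + math.exp(-(v + 35.0) / 10.0))
def alpha_n(v): return 0.01 * (v + 55.0) / (1.0 - math.exp(-(v + 55.0) / 10.0))
def beta_n(v):  return 0.125 * math.exp(-(v + 65.0) / 80.0)

def simulate(i_ext=10.0, t_max=50.0, dt=0.01):
    """Euler-integrate V and the m, h, n gates; return the voltage trace (mV)."""
    v = -65.0
    m = alpha_m(v) / (alpha_m(v) + beta_m(v))  # gates at steady state at rest
    h = alpha_h(v) / (alpha_h(v) + beta_h(v))
    n = alpha_n(v) / (alpha_n(v) + beta_n(v))
    trace = []
    for _ in range(int(t_max / dt)):
        i_na = G_NA * m**3 * h * (v - E_NA)   # sodium current
        i_k = G_K * n**4 * (v - E_K)          # potassium current
        i_l = G_L * (v - E_L)                 # leak current
        v += dt * (i_ext - i_na - i_k - i_l) / C_M
        m += dt * (alpha_m(v) * (1 - m) - beta_m(v) * m)
        h += dt * (alpha_h(v) * (1 - h) - beta_h(v) * h)
        n += dt * (alpha_n(v) * (1 - n) - beta_n(v) * n)
        trace.append(v)
    return trace

trace = simulate()
print(f"peak membrane potential: {max(trace):.1f} mV")
```

With a constant injected current of 10 µA/cm² the model fires repetitive action potentials, the membrane potential overshooting 0 mV on each spike.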


7 years ago

How we determine who’s to blame

How do people assign a cause to events they witness? Some philosophers have suggested that people determine responsibility for a particular outcome by imagining what would have happened if a suspected cause had not intervened.

This kind of reasoning, known as counterfactual simulation, is believed to occur in many situations. For example, soccer referees deciding whether a player should be credited with an “own goal” — a goal accidentally scored for the opposing team — must try to determine what would have happened had the player not touched the ball.
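
The logic of counterfactual simulation can be sketched in code: run the scene as observed, then rerun it with the candidate cause removed, and credit the cause when the two outcomes diverge. The toy one-dimensional physics and all the numbers below are illustrative only, not the stimuli used in the study.

```python
# Toy counterfactual causal judgment: did ball A cause ball B's outcome?
# We simulate the observed scene, then replay it with A's influence
# removed, and compare outcomes.

def simulate(b_pos, b_vel, a_kick=0.0, kick_time=10, steps=100, gate=(90, 100)):
    """Advance ball B; a_kick models A's collision changing B's velocity."""
    for t in range(steps):
        if t == kick_time:
            b_vel += a_kick
        b_pos += b_vel
    return gate[0] <= b_pos <= gate[1]  # did B end up in the gate?

actual = simulate(b_pos=0.0, b_vel=0.5, a_kick=0.5)  # with A's collision
counterfactual = simulate(b_pos=0.0, b_vel=0.5)      # same scene, A removed
a_caused_outcome = actual != counterfactual
print(a_caused_outcome)  # → True
```

Here the collision is judged causal because B reaches the gate only in the world where A intervened; in a scene where both runs end the same way, the judgment flips to False.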

This process can be conscious, as in the soccer example, or unconscious, so that we are not even aware we are doing it. Using technology that tracks eye movements, cognitive scientists at MIT have now obtained the first direct evidence that people unconsciously use counterfactual simulation to imagine how a situation could have played out differently.

“This is the first time that we or anybody have been able to see those simulations happening online, to count how many a person is making, and show the correlation between those simulations and their judgments,” says Josh Tenenbaum, a professor in MIT’s Department of Brain and Cognitive Sciences, a member of MIT’s Computer Science and Artificial Intelligence Laboratory, and the senior author of the new study.

Tobias Gerstenberg, a postdoc at MIT who will be joining Stanford’s Psychology Department as an assistant professor next year, is the lead author of the paper, which appears in the Oct. 17 issue of Psychological Science. Other authors of the paper are MIT postdoc Matthew Peterson, Stanford University Associate Professor Noah Goodman, and University College London Professor David Lagnado.

Follow the ball

Until now, studies of counterfactual simulation could only use reports from people describing how they made judgments about responsibility, which offered only indirect evidence of how their minds were working.

Gerstenberg, Tenenbaum, and their colleagues set out to find more direct evidence by tracking people’s eye movements as they watched two billiard balls collide. The researchers created 18 videos showing different possible outcomes of the collisions. In some cases, the collision knocked one of the balls through a gate; in others, it prevented the ball from doing so.

Before watching the videos, some participants were told that they would be asked to rate how strongly they agreed with statements related to ball A’s effect on ball B, such as, “Ball A caused ball B to go through the gate.” Other participants were asked simply what the outcome of the collision was.

As the subjects watched the videos, the researchers were able to track their eye movements using an infrared light that reflects off the pupil and reveals where the eye is looking. This allowed the researchers, for the first time, to gain a window into how the mind imagines possible outcomes that did not occur.

“What’s really cool about eye tracking is it lets you see things that you’re not consciously aware of,” Tenenbaum says. “When psychologists and philosophers have proposed the idea of counterfactual simulation, they haven’t necessarily meant that you do this consciously. It’s something going on behind the surface, and eye tracking is able to reveal that.”

The researchers found that when participants were asked questions about ball A’s effect on the path of ball B, their eyes followed the course that ball B would have taken had ball A not interfered. Furthermore, the more uncertainty there was as to whether ball A had an effect on the outcome, the more often participants looked toward ball B’s imaginary trajectory.

“It’s in the close cases where you see the most counterfactual looks. They’re using those looks to resolve the uncertainty,” Tenenbaum says.

Participants who were asked only what the actual outcome had been did not perform the same eye movements along ball B’s alternative pathway.

“The idea that causality is based on counterfactual thinking is an idea that has been around for a long time, but direct evidence is largely lacking,” says Phillip Wolff, an associate professor of psychology at Emory University, who was not involved in the research. “This study offers more direct evidence for that view.”


(Image caption: In this video, two participants’ eye-movements are tracked while they watch a video clip. The blue dot indicates where each participant is looking on the screen. The participant on the left was asked to judge whether they thought that ball B went through the middle of the gate. Participants asked this question mostly looked at the balls and tried to predict where ball B would go. The participant on the right was asked to judge whether ball A caused ball B to go through the gate. Participants asked this question tried to simulate where ball B would have gone if ball A hadn’t been present in the scene. Credit: Tobias Gerstenberg)

How people think

The researchers are now using this approach to study more complex situations in which people use counterfactual simulation to make judgments of causality.

“We think this process of counterfactual simulation is really pervasive,” Gerstenberg says. “In many cases it may not be supported by eye movements, because there are many kinds of abstract counterfactual thinking that we just do in our mind. But the billiard-ball collisions lead to a particular kind of counterfactual simulation where we can see it.”

One example the researchers are studying is the following: Imagine ball C is headed for the gate, while balls A and B each head toward C. Either one could knock C off course, but A gets there first. Is B off the hook, or should it still bear some responsibility for the outcome?

“Part of what we are trying to do with this work is get a little bit more clarity on how people deal with these complex cases. In an ideal world, the work we’re doing can inform the notions of causality that are used in the law,” Gerstenberg says. “There is quite a bit of interaction between computer science, psychology, and legal science. We’re all in the same game of trying to understand how people think about causation.”


7 years ago

Big Improvements to Brain-Computer Interface

When people suffer spinal cord injuries and lose mobility in their limbs, it’s a neural signal processing problem. The brain can still send clear electrical impulses and the limbs can still receive them, but the signal gets lost in the damaged spinal cord.

The Center for Sensorimotor Neural Engineering (CSNE)—a collaboration of San Diego State University with the University of Washington (UW) and the Massachusetts Institute of Technology (MIT)—is working on an implantable brain chip that can record neural electrical signals and transmit them to receivers in the limb, bypassing the damage and restoring movement. Recently, in a study published in the journal Scientific Reports, these researchers described a critical improvement to the technology that could make it more durable, last longer in the body and transmit clearer, stronger signals.

The technology, known as a brain-computer interface, records and transmits signals through electrodes, which are tiny pieces of material that read signals from brain chemicals known as neurotransmitters. By recording brain signals at the moment a person intends to make some movement, the interface learns the relevant electrical signal pattern and can transmit that pattern to the limb’s nerves, or even to a prosthetic limb, restoring mobility and motor function.
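
The “learn the pattern, then replay it” step can be illustrated with a toy decoder. The sketch below maps recorded firing-rate vectors to intended movements using a nearest-centroid rule; the intents, rates, and recordings are all invented for illustration and are not the CSNE system.

```python
# Toy nearest-centroid decoder: learn a prototype firing-rate pattern per
# intended movement, then classify new recordings by the closest prototype.
# All data here is made up for illustration.

def centroid(vectors):
    return [sum(col) / len(vectors) for col in zip(*vectors)]

def train(labeled):
    """labeled: {intent: [firing-rate vectors recorded during that intent]}"""
    return {intent: centroid(vs) for intent, vs in labeled.items()}

def decode(model, rates):
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    return min(model, key=lambda intent: dist(model[intent], rates))

recordings = {
    "reach": [[40, 5, 12], [38, 7, 10]],   # rates from 3 hypothetical channels
    "grasp": [[6, 35, 20], [8, 33, 22]],
}
model = train(recordings)
print(decode(model, [39, 6, 11]))  # → reach
```

A real interface replaces the hand-made rate vectors with multichannel electrode recordings and the centroid rule with far richer models, but the train/decode structure is the same.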

The current state-of-the-art material for electrodes in these devices is thin-film platinum. The problem is that these electrodes can fracture and fall apart over time, said one of the study’s lead investigators, Sam Kassegne, deputy director for the CSNE at SDSU and a professor in the mechanical engineering department.

Kassegne and colleagues developed electrodes made of glassy carbon. This material is about 10 times smoother than granular thin-film platinum, meaning it corrodes less easily under electrical stimulation and lasts much longer than platinum or other metal electrodes.

“Glassy carbon is much more promising for reading signals directly from neurotransmitters,” Kassegne said. “You get about twice as much signal-to-noise. It’s a much clearer signal and easier to interpret.”

The glassy carbon electrodes are fabricated on the SDSU campus. The process involves patterning a liquid polymer into the correct shape, then heating it to 1,000 degrees Celsius, causing it to become glassy and electrically conductive. Once the electrodes are cooked and cooled, they are incorporated into chips that read and transmit signals from the brain and to the nerves.

Researchers in Kassegne’s lab are using these new and improved brain-computer interfaces to record neural signals both along the brain’s cortical surface and from inside the brain at the same time.

“If you record from deeper in the brain, you can record from single neurons,” said Elisa Castagnola, one of the researchers. “On the surface, you can record from clusters. This combination gives you a better understanding of the complex nature of brain signaling.”

A doctoral student in Kassegne’s lab, Mieko Hirabayashi, is exploring a slightly different application of this technology. She’s working with rats to find out whether precisely calibrated electrical stimulation can cause new neural growth within the spinal cord. The hope is that this stimulation could encourage new neural cells to grow and replace damaged spinal cord tissue in humans. The new glassy carbon electrodes will allow her to stimulate the spinal cord, read its electrical signals, and detect the presence of neurotransmitters better than ever before.


7 years ago

New discovery could be a major advance for understanding neurological diseases

The discovery of a new mechanism that controls the way nerve cells in the brain communicate with each other to regulate learning and long-term memory could have major benefits for understanding how the brain works and what goes wrong in neurological disorders such as epilepsy and dementia. The breakthrough, published in Nature Neuroscience, was made by scientists at the University of Bristol and the University of Central Lancashire. The findings will have far-reaching implications for many aspects of neuroscience.

The human brain contains around 100 billion nerve cells, each of which makes about 10,000 connections, called synapses, to other cells. Synapses are constantly transmitting information to, and receiving information from, other nerve cells. A process called long-term potentiation (LTP) increases the strength of information flow across synapses. Lots of synapses communicating between different nerve cells form networks, and LTP intensifies the connectivity of the cells in the network to make information transfer more efficient. This LTP mechanism is how the brain operates at the cellular level to allow us to learn and remember. However, when these processes go wrong, they can lead to neurological and neurodegenerative disorders.
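
The scale these figures imply is worth spelling out: roughly 10^11 neurons with roughly 10^4 synapses each puts the total on the order of 10^15 synapses.

```python
# Back-of-the-envelope check of the numbers quoted above.
neurons = 100e9              # ~100 billion nerve cells
synapses_per_neuron = 10e3   # ~10,000 connections each
total_synapses = neurons * synapses_per_neuron
print(f"{total_synapses:.0e}")  # → 1e+15
```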

Precisely how LTP is initiated is a major question in neuroscience. Traditional LTP is regulated by the activation of special proteins at synapses called NMDA receptors. This study, by Professor Jeremy Henley and co-workers, reports a new type of LTP that is controlled by kainate receptors.

This is an important advance as it highlights the flexibility in the way synapses are controlled and nerve cells communicate. This, in turn, raises the possibility of targeting this new pathway to develop therapeutic strategies for diseases like dementia, in which there is too little synaptic transmission and LTP, and epilepsy where there is too much inappropriate synaptic transmission and LTP.

Jeremy Henley, Professor of Molecular Neuroscience in the University’s School of Biochemistry in the Faculty of Biomedical Sciences, said: “These discoveries represent a significant advance and will have far-reaching implications for the understanding of memory, cognition, developmental plasticity and neuronal network formation and stabilisation. In summary, we believe that this is a groundbreaking study that opens new lines of inquiry which will increase understanding of the molecular details of synaptic function in health and disease.”

Dr Milos Petrovic, co-author of the study and Reader in Neuroscience at the University of Central Lancashire added: “Untangling the interactions between the signal receptors in the brain not only tells us more about the inner workings of a healthy brain, but also provides a practical insight into what happens when we form new memories. If we can preserve these signals it may help protect against brain diseases.

“This is certainly an extremely exciting discovery and something that could potentially impact the global population. We have discovered potential new drug targets that could help to cure the devastating consequences of dementias, such as Alzheimer’s disease. Collaborating with researchers across the world in order to identify new ways to fight disease like this is what world-class scientific research is all about, and we look forward to continuing our work in this area.”


8 years ago

(Image caption: New model mimics the connectivity of the brain by connecting three distinct brain regions on a chip. Credit: Disease Biophysics Group/Harvard University)

Multiregional brain on a chip

Harvard University researchers have developed a multiregional brain-on-a-chip that models the connectivity between three distinct regions of the brain. The in vitro model was used to extensively characterize the differences between neurons from different regions of the brain and to mimic the system’s connectivity.

The research was published in the Journal of Neurophysiology.

“The brain is so much more than individual neurons,” said Ben Maoz, co-first author of the paper and postdoctoral fellow in the Disease Biophysics Group in the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS). “It’s about the different types of cells and the connectivity between different regions of the brain. When modeling the brain, you need to be able to recapitulate that connectivity because there are many different diseases that attack those connections.”

“Roughly twenty-six percent of the US healthcare budget is spent on neurological and psychiatric disorders,” said Kit Parker, the Tarr Family Professor of Bioengineering and Applied Physics at SEAS and Core Faculty Member of the Wyss Institute for Biologically Inspired Engineering at Harvard University. “Tools to support the development of therapeutics to alleviate the suffering of these patients are not only the humane thing to do, they are the best means of reducing this cost.”

Researchers from the Disease Biophysics Group at SEAS and the Wyss Institute modeled the three regions of the brain most affected by schizophrenia — the amygdala, hippocampus and prefrontal cortex.

They began by characterizing the cell composition, protein expression, metabolism, and electrical activity of neurons from each region in vitro.

“It’s no surprise that neurons in distinct regions of the brain are different but it is surprising just how different they are,” said Stephanie Dauth, co-first author of the paper and former postdoctoral fellow in the Disease Biophysics Group. “We found that the cell-type ratio, the metabolism, the protein expression and the electrical activity all differ between regions in vitro. This shows that it does make a difference which brain region’s neurons you’re working with.”

Next, the team looked at how these neurons change when they’re communicating with one another. To do that, they cultured cells from each region independently and then let the cells establish connections via guided pathways embedded in the chip.

The researchers then measured cell composition and electrical activity again and found that the cells dramatically changed when they were in contact with neurons from different regions.

“When the cells are communicating with other regions, the cellular composition of the culture changes, the electrophysiology changes, all these inherent properties of the neurons change,” said Maoz. “This shows how important it is to implement different brain regions into in vitro models, especially when studying how neurological diseases impact connected regions of the brain.”

To demonstrate the chip’s efficacy in modeling disease, the team doped different regions of the model with the drug phencyclidine hydrochloride — commonly known as PCP — which mimics the effects of schizophrenia. The brain-on-a-chip allowed the researchers for the first time to look at both the drug’s impact on the individual regions and its downstream effect on the interconnected regions in vitro.

The brain-on-a-chip could be useful for studying any number of neurological and psychiatric diseases, including drug addiction, post-traumatic stress disorder, and traumatic brain injury.

“To date, the Connectome project has not recognized all of the networks in the brain,” said Parker. “In our studies, we are showing that the extracellular matrix network is an important part of distinguishing different brain regions and that, subsequently, physiological and pathophysiological processes in these brain regions are unique. This advance will enable not only the development of therapeutics, but fundamental insights as to how we think, feel, and survive.”


8 years ago

How to Make a Motor Neuron

A team of scientists has uncovered details of the cellular mechanisms that control the direct programming of stem cells into motor neurons. The scientists analyzed changes that occur in the cells over the course of the reprogramming process. They discovered a dynamic, multi-step process in which multiple independent changes eventually converge to change the stem cells into motor neurons.

“There is a lot of interest in generating motor neurons to study basic developmental processes as well as human diseases like ALS and spinal muscular atrophy,” said Shaun Mahony, assistant professor of biochemistry and molecular biology at Penn State and one of the lead authors of the paper. “By detailing the mechanisms underlying the direct programing of motor neurons from stem cells, our study not only informs the study of motor neuron development and its associated diseases, but also informs our understanding of the direct programming process and may help with the development of techniques to generate other cell types.”

The direct programming technique could eventually be used to regenerate missing or damaged cells by converting other cell types into the missing one. The research findings, which appear online in the journal Cell Stem Cell on December 8, 2016, show the challenges facing current cell-replacement technology, but they also outline a potential pathway to the creation of more viable methods.

“Despite having a great therapeutic potential, direct programming is generally inefficient and doesn’t fully take into account molecular complexity,” said Esteban Mazzoni, an assistant professor in New York University’s Department of Biology and one of the lead authors of the study. “However, our findings point to possible new avenues for enhanced gene-therapy methods.”

The researchers had shown previously that they can transform mouse embryonic stem cells into motor neurons by expressing three transcription factors – proteins that control the expression of other genes – in the stem cells. The transformation takes about two days. In order to better understand the cellular and genetic mechanisms responsible for the transformation, the researchers analyzed how the transcription factors bound to the genome, changes in gene expression, and modifications to chromatin at 6-hour intervals during the transformation.

“We have a very efficient system in which we can transform stem cells into motor neurons with something like a 90 to 95 percent success rate by adding the cocktail of transcription factors,” said Mahony. “Because of that efficiency, we were able to use our system to tease out the details of what actually happens in the cell during this transformation.”

“A cell in an embryo develops by passing through several intermediate stages,” noted Uwe Ohler, senior researcher at the Max Delbrück Center for Molecular Medicine (MDC) in Berlin and one of the lead authors of the work. “But in direct programming we don’t have that: we replace the gene transcription network of the cell with a completely new one at once, without the progression through intermediate stages. We asked, what are the timing and kinetics of chromatin changes and transcription events that directly lead to the final cell fate?”

The research team found surprising complexity – programming of these stem cells into neurons is the result of two independent transcriptional processes that eventually converge. Early on in the process, two of the transcription factors – Isl1 and Lhx3 – work in tandem, binding to the genome and beginning a cascade of events including changes to chromatin structure and gene expression in the cells. The third transcription factor, Ngn2, acts independently making additional changes to gene expression. Later in the transformation process, Isl1 and Lhx3 rely on changes in the cell initiated by Ngn2 to help complete the transformation. In order for direct programming to successfully achieve cellular conversion, it must coordinate the activity of the two processes.

“Many have found direct programming to be a potentially attractive method as it can be performed either in vitro – outside of a living organism – or in vivo – inside the body and, importantly, at the site of cellular damage,” said Mazzoni. “However, questions remain about its viability to repair cells – especially given the complex nature of the biological process. Looking ahead, we think it’s reasonable to use this newly gained knowledge to, for instance, manipulate cells in the spinal cord to replace the neurons required for voluntary movement that are destroyed by afflictions such as ALS.”


8 years ago

(Image caption: Brain showing hallmarks of Alzheimer’s disease (plaques in blue). Credit: ZEISS Microscopy)

New imaging technique measures toxicity of proteins associated with Alzheimer’s and Parkinson’s diseases

Researchers have developed a new imaging technique that makes it possible to study why proteins associated with Alzheimer’s and Parkinson’s diseases may go from harmless to toxic. The technique uses a technology called multi-dimensional super-resolution imaging that makes it possible to observe changes in the surfaces of individual protein molecules as they clump together. The tool may allow researchers to pinpoint how proteins misfold and eventually become toxic to nerve cells in the brain, which could aid in the development of treatments for these devastating diseases.

The researchers, from the University of Cambridge, have studied how a phenomenon called hydrophobicity (lack of affinity for water) in the proteins amyloid-beta and alpha-synuclein – which are associated with Alzheimer’s and Parkinson’s respectively – changes as they stick together. It had been hypothesised that there was a link between the hydrophobicity and toxicity of these proteins, but this is the first time it has been possible to image hydrophobicity at such high resolution. Details are reported in the journal Nature Communications.

“These proteins start out in a relatively harmless form, but when they clump together, something important changes,” said Dr Steven Lee from Cambridge’s Department of Chemistry, the study’s senior author. “But using conventional imaging techniques, it hasn’t been possible to see what’s going on at the molecular level.”

In neurodegenerative diseases such as Alzheimer’s and Parkinson’s, naturally occurring proteins fold into the wrong shape and clump together into filament-like structures known as amyloid fibrils and into smaller, highly toxic clusters known as oligomers, which are thought to damage or kill neurons; however, the exact mechanism remains unknown.

For the past two decades, researchers have been attempting to develop treatments which stop the proliferation of these clusters in the brain, but before any such treatment can be developed, there first needs to be a precise understanding of how oligomers form and why.

“There’s something special about oligomers, and we want to know what it is,” said Lee. “We’ve developed new tools that will help us answer these questions.”

When using conventional microscopy techniques, physics makes it impossible to zoom in past a certain point. Essentially, there is an innate blurriness to light, so anything below a certain size will appear as a blurry blob when viewed through an optical microscope, simply because light waves spread when they are focused on such a tiny spot. Amyloid fibrils and oligomers are smaller than this limit so it’s very difficult to directly visualise what is going on.
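
That “certain point” is the diffraction limit, often written as the Abbe criterion d = λ/(2·NA). A quick calculation with typical values (assumed here, not taken from the paper) shows why fibrils and oligomers, at a few nanometres across, sit far below what visible light can resolve:

```python
# Abbe diffraction limit, d = wavelength / (2 * numerical aperture).
# Values below are typical assumptions, not from the study.
wavelength = 550e-9  # green light, metres
na = 1.4             # numerical aperture of a high-end oil-immersion objective
d = wavelength / (2 * na)
print(f"resolution limit ≈ {d * 1e9:.0f} nm")  # ≈ 196 nm
```

Anything much smaller than roughly 200 nm, which includes oligomers of a few nanometres, blurs into a single spot under a conventional optical microscope.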

However, new super-resolution techniques, which are 10 to 20 times better than optical microscopes, have allowed researchers to get around these limitations and view biological and chemical processes at the nanoscale.

Lee and his colleagues have taken super-resolution techniques one step further, and are now able to determine not only the location of a molecule but also the environmental properties of single molecules simultaneously.

Using their technique, known as sPAINT (spectrally-resolved points accumulation for imaging in nanoscale topography), the researchers used a dye molecule to map the hydrophobicity of amyloid fibrils and oligomers implicated in neurodegenerative diseases. The sPAINT technique is easy to implement, only requiring the addition of a single transmission diffraction grating onto a super-resolution microscope. According to the researchers, the ability to map hydrophobicity at the nanoscale could be used to understand other biological processes in future.


8 years ago

Neuroscientists call for deep collaboration to ‘crack’ the human brain

The time is ripe, and the communication technology is available, for teams from different labs and different countries to join forces and apply new forms of grassroots collaborative research in brain science. This is the right way to gradually upscale the study of the brain and usher it into the era of Big Science, claim neuroscientists in Portugal, Switzerland and the United Kingdom. And they are already putting ideas into action.

In a Comment in the journal Nature, an international trio of neuroscientists outlines a concrete proposal for jump-starting a new, bottom-up, collaborative “big science” approach to neuroscience research, which they consider crucial to tackle the still unsolved great mysteries of the brain.

How does the brain function, from molecules to cells to circuits to brain systems to behavior? How are all these levels of complexity integrated to ultimately allow consciousness to emerge in the human brain?

The plan now proposed by Zach Mainen, director of research at the Champalimaud Centre for the Unknown, in Lisbon, Portugal; Michael Häusser, professor of Neuroscience at University College London, United Kingdom; and Alexandre Pouget, professor of neuroscience at the University of Geneva, Switzerland, is inspired by the way particle physics teams nowadays mount their huge accelerator experiments to discover new subatomic particles and ultimately to understand the evolution of the Universe.

“Some very large physics collaborations have precise goals and are self-organized,” says Zach Mainen. More specifically, his model is the ATLAS experiment at the European Laboratory for Particle Physics (CERN, near Geneva), which includes nearly 3,000 scientists from dozens of countries and was able (together with its “sister” experiment, CMS) to announce the discovery of the long-sought Higgs boson in July 2012.

Although the size of the teams involved in neuroscience may not be nearly comparable to the CERN teams, the collaborative principles should be very similar, according to Zach Mainen. “What we propose is very much in the physics style, a kind of ‘Grand Unified Theory’ of brain research,” he says. “Can we do it? Clearly, it’s not going to happen within five years, but we do have theories that need to be tested, and the underlying principles of how to do it will be much the same as in physics.”

To help push neuroscience research to take the leap into the future, the three neuroscientists propose some simple principles, at least in theory: “focus on a single brain function”; “combine experimentalists and theorists”; “standardize tools and methods”; “share data”; “assign credit in new ways”. And one of the fundamental premises to make this possible is to “engender a sphere of trust within which it is safe [to share] data, resources and plans”, they write.

Needless to say, the harsh competitiveness of the field is not a fertile ground for this type of “deep” collaborative effort. But the authors themselves are already putting into practice the principles they advocate in their article.

“We have a group of 20 researchers (10 theorists and 10 experimentalists), about half in the US and half in the UK, Switzerland and Portugal,” says Zach Mainen. The group will focus on only one well-defined goal: the foraging behavior for food and water resources in the mouse, recording activity from as much of the brain as possible, spanning at least several dozen brain areas.

“By collaboration, we don’t mean business as usual; we really mean it”, concludes Zach Mainen. “We’ll have 10 labs doing the same experiments, with the same gear, the same computer programs. The data we will obtain will go into the cloud and be shared by the 20 labs. It’ll be almost as a global lab, except it will be distributed geographically.”


Tags
8 years ago

Balancing Time and Space in the Brain: A New Model Holds Promise for Predicting Brain Dynamics

For as long as scientists have been listening in on the activity of the brain, they have been trying to understand the source of its noisy, apparently random, activity. In the past 20 years, “balanced network theory” has emerged to explain this apparent randomness through a balance of excitation and inhibition in recurrently coupled networks of neurons. A team of scientists has extended the balanced model to provide deep and testable predictions linking brain circuits to brain activity.
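The core idea of balanced network theory, that strong excitation and inhibition cancel on average while their fluctuations survive, can be sketched in a few lines. This is a toy illustration with assumed input rates and the standard 1/√K synaptic weight scaling, not the Pittsburgh group's model:

```python
import numpy as np

rng = np.random.default_rng(2)

# Each model neuron receives K excitatory and K inhibitory Poisson inputs
# at equal rates; weights scale as 1/sqrt(K), the hallmark of balanced nets.
K = 1000                       # inputs of each type per neuron
j = 1.0 / np.sqrt(K)           # synaptic weight
rate = 5.0                     # assumed spike count per input per time bin
trials = 10_000                # independent time bins to sample

exc = rng.poisson(rate, size=(trials, K)).sum(axis=1) * j
inh = rng.poisson(rate, size=(trials, K)).sum(axis=1) * j
net = exc - inh                # total synaptic input per bin

# Means cancel, but fluctuations remain order-one: the "noisy" activity
# the theory explains is a structural feature, not measurement error.
print(net.mean(), net.std())
```

With these numbers the mean input hovers near zero while its standard deviation stays of order one, so the neuron is driven by fluctuations rather than by the (cancelled) mean drive.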

Lead investigators at the University of Pittsburgh say the new model accurately explains experimental findings about the highly variable responses of neurons in the brains of living animals. On Oct. 31, their paper, “The spatial structure of correlated neuronal variability,” was published online by the journal Nature Neuroscience.

The new model provides a much richer understanding of how activity is coordinated between neurons in neural circuits. The model could be used in the future to discover neural “signatures” that predict brain activity associated with learning or disease, say the investigators.

“Normally, brain activity appears highly random and variable most of the time, which looks like a weird way to compute,” said Brent Doiron, associate professor of mathematics at Pitt, senior author on the paper, and a member of the University of Pittsburgh Brain Institute (UPBI). “To understand the mechanics of neural computation, you need to know how the dynamics of a neuronal network depends on the network’s architecture, and this latest research brings us significantly closer to achieving this goal.”

Earlier versions of the balanced network theory captured how the timing and frequency of inputs—excitatory and inhibitory—shaped the emergence of variability in neural behavior, but these models used shortcuts that were biologically unrealistic, according to Doiron.

“The original balanced model ignored the spatial dependence of wiring in the brain, but it has long been known that neuron pairs that are near one another have a higher likelihood of connecting than pairs that are separated by larger distances. Earlier models produced unrealistic behavior—either completely random activity that was unlike the brain or completely synchronized neural behavior, such as you would see in a deep seizure. You could produce nothing in between.”
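The spatial ingredient described in the quote above can be made concrete with a toy sketch, assuming nothing about the published model: neurons placed on a ring, with connection probability falling off with distance under a Gaussian footprint (all parameters are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

n = 200                                   # neurons on a unit ring
positions = np.linspace(0.0, 1.0, n, endpoint=False)

def ring_distance(a, b):
    """Shortest distance between two points on a unit ring."""
    d = np.abs(a - b)
    return np.minimum(d, 1.0 - d)

# Connection probability decays with distance (Gaussian footprint).
sigma = 0.05
dist = ring_distance(positions[:, None], positions[None, :])
p_connect = np.exp(-dist**2 / (2 * sigma**2))
np.fill_diagonal(p_connect, 0.0)          # no self-connections
adjacency = rng.random((n, n)) < p_connect

# Nearby pairs connect almost always, distant pairs almost never.
near = adjacency[(dist > 0) & (dist < 0.02)].mean()
far = adjacency[dist > 0.25].mean()
print(near, far)
```

Wiring a recurrent network with such a distance-dependent kernel, rather than uniformly at random, is what lets spatially extended balanced models produce activity between the two unrealistic extremes the passage describes.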

In the context of this balance, neurons are in a constant state of tension. According to co-author Matthew Smith, assistant professor of ophthalmology at Pitt and a member of UPBI, “It’s like balancing on one foot on your toes. If there are small overcorrections, the result is big fluctuations in neural firing, or communication.”

The new model accounts for temporal and spatial characteristics of neural networks and the correlations in the activity between neurons—whether firing in one neuron is correlated with firing in another. The model is such a substantial improvement that the scientists could use it to predict the behavior of living neurons examined in the area of the brain that processes the visual world.

After developing the model, the scientists examined data from the living visual cortex and found that their model accurately predicted the behavior of neurons based on how far apart they were. The activity of nearby neuron pairs was strongly correlated; at an intermediate distance, pairs of neurons were anticorrelated (when one responded more, the other responded less); and at greater distances still they were independent.

“This model will help us to better understand how the brain computes information because it’s a big step forward in describing how network structure determines network variability,” said Doiron. “Any serious theory of brain computation must take into account the noise in the code. A shift in neuronal variability accompanies important cognitive functions, such as attention and learning, as well as being a signature of devastating pathologies like Parkinson’s disease and epilepsy.”

While the scientists examined the visual cortex, they believe their model could be used to predict activity in other parts of the brain, such as areas that process auditory or olfactory cues, for example. And they believe that the model generalizes to the brains of all mammals. In fact, the team found that a neural signature predicted by their model appeared in the visual cortex of living mice studied by another team of investigators.

“A hallmark of the computational approach that Doiron and Smith are taking is that its goal is to infer general principles of brain function that can be broadly applied to many scenarios. Remarkably, we still don’t have things like the laws of gravity for understanding the brain, but this is an important step for providing good theories in neuroscience that will allow us to make sense of the explosion of new experimental data that can now be collected,” said Nathan Urban, associate director of UPBI.


Tags
8 years ago
Rabies Viruses Reveal Wiring In Transparent Brains

Scientists under the leadership of the University of Bonn have harnessed rabies viruses for assessing the connectivity of nerve cell transplants. Coupled with a green fluorescent protein, the viruses show where replacement cells engrafted into mouse brains have connected to the host neural network.

The research is in Nature Communications. (full open access)


Tags
8 years ago

Pop-Outs: How The Brain Extracts Meaning From Noise

UC Berkeley neuroscientists have now observed this re-tuning in action by recording directly from the surface of a person’s brain as the words of a previously unintelligible sentence suddenly pop out after the subject is told the meaning of the garbled speech. The re-tuning takes place within a second or less, they found.

The research is in Nature Communications. (full open access)


Tags
8 years ago
Researchers Identify Method Of Creating Long-lasting Memories

Imagine if playing a new video game or riding a rollercoaster could help you prepare for an exam or remember other critical information.

A new study in mice shows this link may be possible.

Attention-grabbing experiences trigger the release of memory-enhancing chemicals. Those chemicals can etch into the brain memories of events that occur just before or soon after the experience, regardless of whether they were related to it, according to researchers at UT Southwestern Medical Center’s Peter O’Donnell Jr. Brain Institute.

The findings, published in Nature, hold intriguing implications for methods of learning in classrooms as well as an array of potential uses in the workplace and personal life, researchers said.

The trick to creating long-lasting memories is to find something interesting enough to activate the release of dopamine from the brain’s locus coeruleus (LC) region.

“Activation of the locus coeruleus increases our memory of events that happen at the time of activation and may also increase the recall of those memories at a later time,” said Dr. Robert Greene, the study’s co-senior author and a Professor of Psychiatry and Neurosciences with the O’Donnell Brain Institute.

The study explains at the molecular level why people tend to remember certain events in their lives with particular clarity as well as unrelated details surrounding those events: for instance, what they were doing in the hours before the Sept. 11, 2001, terrorist attacks; or where they were when John F. Kennedy was assassinated.

“The degree to which these memories are enhanced probably has to do with the degree of activation of the LC,” said Dr. Greene, holder of the Sherry Gold Knopf Crasilneck Distinguished Chair in Psychiatry, in Honor of Mollie and Murray Gold, and the Sherry Knopf Crasilneck Distinguished Chair in Psychiatry, in Honor of Albert Knopf. “When the New York World Trade Center came down on 9/11, that was high activation.”

But life-changing events aren’t the only way to trigger the release of dopamine in this part of the brain. It could be as simple as a student playing a new video game during a quick break while studying for a crucial exam, or a company executive playing tennis right after trying to memorize a big speech.

“In general, anything that will grab your attention in a persistent kind of way can lead to activation,” Dr. Greene said.

Scientists have known dopamine plays a large role in memory enhancement, though where the chemical originates and how it’s triggered have been points of study over the years.

Dr. Greene led a study published in 2012 that identified the locus coeruleus as a third key source for dopamine in the brain, besides the ventral tegmental area and the substantia nigra. That research demonstrated the drug amphetamine could pharmacologically trigger the brain’s release of dopamine from the LC.

The latest study builds upon those findings, establishing that dopamine in this area of the brain can be naturally activated through behavioral actions and that these actions enhance memory retention.

The new study suggests that drugs targeting neurons in the locus coeruleus may affect learning and memory as well. The LC is located in the brain stem and has a range of functions that affect a person’s emotions, anxiety levels, sleep patterns, memory and other aspects of behavior.

The study tested 120 mice to establish a link between locus coeruleus neurons and neuronal circuits of the hippocampus – the region of the brain responsible for recording memories – that receive dopamine from the LC.

One part of the research involved putting the mice in an arena to search for food hidden in sand that changed location each day. The study found that mice that were given a “novel experience” – exploring an unfamiliar floor surface 30 minutes after being trained to remember the food location – did better in remembering where to find the food the next day.

Researchers correlated this memory enhancement to a molecular process in the brain by injecting the mice with a genetically encoded light-sensitive activator called channelrhodopsin. This tool allowed them to selectively activate dopamine-carrying neurons of the locus coeruleus that project to the hippocampus and to see first-hand which neurons were responsible for the memory enhancement.

They found that selectively activating the channelrhodopsin-labeled neurons with blue light (a technique called optogenetics) could substitute for the novelty experience as a memory enhancer in mice. They also found that this activation could cause a direct, long-lasting synaptic strengthening – an enhancement of memory-relevant communication occurring at the junctions between neurons in the hippocampus. This process can mediate improvement of learning and memory.

Some next steps include investigating how big an impact this finding can have on human learning, whether it can eventually lead to an understanding of how patients can develop failing memories, and how to better target effective therapies for these patients, said Dr. Greene.


Tags
8 years ago

Sleeping brain's complex activity mimicked by simple model

Researchers have built and tested a new mathematical model that successfully reproduces complex brain activity during deep sleep, according to a study published in PLOS Computational Biology.

Recent research has shown that certain patterns of neuronal activity during deep sleep may play an important role in memory consolidation. Michael Schellenberger Costa and Arne Weigenand of the University of Lübeck, Germany, and colleagues set out to build a computational model that could accurately mimic these patterns.

The researchers had previously modeled the activity of the sleeping cortex, the brain’s outer layer. However, sleep patterns thought to aid memory arise from interactions between the cortex and the thalamus, a central brain structure. The new model incorporates this thalamocortical coupling, enabling it to successfully mimic memory-related sleep patterns.

Using data from a human sleep study, the researchers confirmed that their new model accurately reproduces brain activity measured by electroencephalography (EEG) during the second and third stages of non-rapid eye movement (NREM) sleep. It also successfully predicts the EEG effects of stimulation techniques known to enhance memory consolidation during sleep.

The new model is a neural mass model, meaning that it approximates and scales up the behavior of a small group of neurons in order to describe a large number of neurons. Compared with other sleep models, many of which are based on the activity of individual neurons, this new model is relatively simple and could aid in future studies of memory consolidation.
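To see what "neural mass model" means in practice, here is a minimal sketch in the generic Wilson–Cowan style: two coupled population rates standing in for a cortical and a thalamic mass, integrated with Euler steps. All weights, time constants, and drives are illustrative assumptions, not the parameters of the published model:

```python
import numpy as np

def f(x):
    """Sigmoid firing-rate function, keeping population rates in [0, 1]."""
    return 1.0 / (1.0 + np.exp(-x))

# Two coupled masses: r_c for the cortical population, r_t for the thalamic one.
w_cc, w_ct = 12.0, -10.0    # weights onto the cortical mass
w_tc, w_tt = 10.0, -2.0     # weights onto the thalamic mass
tau_c, tau_t = 10.0, 20.0   # time constants (ms)
i_c, i_t = 1.0, 0.5         # constant external drives

dt, steps = 0.1, 20_000     # Euler integration: 2 s of simulated time
r_c, r_t = 0.1, 0.1
trace = np.empty(steps)
for k in range(steps):
    r_c += dt * (-r_c + f(w_cc * r_c + w_ct * r_t + i_c)) / tau_c
    r_t += dt * (-r_t + f(w_tc * r_c + w_tt * r_t + i_t)) / tau_t
    trace[k] = r_c

# The sigmoid bounds each mass, so the simulated rate stays in [0, 1].
print(trace.min(), trace.max())
```

The simplicity is the point: two ordinary differential equations describe whole populations, whereas spiking models of the same circuit would track thousands of individual neurons.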

“It is fascinating to see that a model incorporating only a few key mechanisms is sufficient to reproduce the complex brain rhythms observed during sleep,” say senior authors Thomas Martinetz and Jens Christian Claussen.


Tags
8 years ago
(Image caption: Young neurons (pink), responsible for encoding new memories, must compete with mature neurons (green) to survive and integrate into the hippocampal circuit. Credit: Kathleen McAvoy, Sahay Lab)

Making memories stronger and more precise during aging

When it comes to the billions of neurons in your brain, what you see at birth is what you get — except in the hippocampus. Buried deep underneath the folds of the cerebral cortex, neural stem cells in the hippocampus continue to generate new neurons, inciting a struggle between new and old as the new attempts to gain a foothold in the memory-forming center of the brain.

In a study published online in Neuron, Harvard Stem Cell Institute (HSCI) researchers at Massachusetts General Hospital and the Broad Institute of MIT and Harvard in collaboration with an international team of scientists found they could bias the competition in favor of the newly generated neurons.

“The hippocampus allows us to form new memories of ‘what, when and where’ that help us navigate our lives,” said HSCI Principal Faculty member and the study’s corresponding author, Amar Sahay, PhD, “and neurogenesis—the generation of new neurons from stem cells—is critical for keeping similar memories separate.”

As the human brain matures, the connections between older neurons become stronger, more numerous, and more intertwined, making integration for the newly formed neurons more difficult. Neural stem cells become less productive, leading to a decline in neurogenesis. With fewer new neurons to help sort memories, the aging brain can become less efficient at keeping separate and faithfully retrieving memories.

The research team selectively overexpressed a transcription factor, Klf9, only in older neurons in mice, which eliminated more than one-fifth of their dendritic spines, increased the number of new neurons that integrated into the hippocampus circuitry by two-fold, and activated neural stem cells.

When the researchers returned the expression of Klf9 back to normal, the old dendritic spines reformed, restoring competition. However, the previously integrated neurons remained.

“Because we can do this reversibly, at any point in the animal’s life we can rejuvenate the hippocampus with extra, new, encoding units,” said Sahay, who is also an investigator with the MGH Center for Regenerative Medicine.

The authors employed a complementary strategy in which they deleted a protein important for dendritic spines, Rac1, only in the old neurons and achieved a similar outcome, increasing the survival of the new neurons.

In order to keep two similar memories separate, the hippocampus activates two different populations of neurons to encode each memory in a process called pattern separation. When there is overlap between these two populations, researchers believe it is more difficult for an individual to distinguish between two similar memories formed in two different contexts — to tell a Sunday afternoon stroll through the woods apart from a patrol through enemy territory in a forest, for example. If the memories are encoded in overlapping populations of neurons, the hippocampus may inappropriately retrieve either. If the memories are encoded in non-overlapping populations of neurons, the hippocampus stores them separately and retrieves them only when appropriate.
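The overlap measure implicit in that description can be made concrete with a toy sketch; the pool size, ensemble size, and example memories are all illustrative, not numbers from the study:

```python
import numpy as np

rng = np.random.default_rng(1)

n_neurons = 1000        # assumed pool of hippocampal neurons
ensemble_size = 50      # assumed sparse ensemble per memory

def encode_memory():
    """Pick a random sparse ensemble of neurons to encode one memory."""
    return set(rng.choice(n_neurons, size=ensemble_size, replace=False))

memory_a = encode_memory()   # e.g. the afternoon stroll
memory_b = encode_memory()   # e.g. the patrol through the forest

# Fractional overlap between the two ensembles: the larger it is, the more
# likely a cue for one context retrieves the wrong memory.
overlap = len(memory_a & memory_b) / ensemble_size
print(overlap)
```

On this picture, increased neurogenesis corresponds to drawing the two ensembles from a larger or better-separated pool, which drives the overlap toward zero — the "less overlap, more precise memories" result reported for the mice.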

Mice with increased neurogenesis had less overlap between the two populations of neurons and had more precise and stronger memories, which, according to Sahay, demonstrates improved pattern separation.

Mice with increased neurogenesis in middle age and aging cohorts exhibited better memory precision.

“We believe that increasing the hippocampus’s ability to do what it is supposed to do, and not to retrieve past experiences when it shouldn’t, can help,” Sahay said. This may be particularly useful for individuals suffering from post-traumatic stress disorder, mild cognitive impairment, or age-related memory loss.


Tags