Without neurons reacting to the blood leptin level, the brain does not control the feeling of hunger and fullness. This type of genetic defect results in severe obesity in humans and animals. Scientists from Harvard University (HU), Massachusetts General Hospital (MGH) and the Nencki Institute of Experimental Biology of the Polish Academy of Sciences (Nencki Institute) in Warsaw have demonstrated in experiments on mice that it is possible to restore brain functions by transplanting small numbers of new neurons into the damaged area of the brain.
(Image credit: Nencki Institute, Grzegorz Krzyżewski)
"A spectacular effect in the brain repair that we were able to achieve was significantly reduced weight of genetically defective obese mice and further significant reduction of adverse symptoms accompanying diabetes," says Dr. Artur Czupryn (Nencki Institute, HU, MGH), first author of a paper published in the latest issue of Science.
For some time, medicine has attempted to repair damaged brain regions through transplants of stem cells. These interventions are risky: transplanted cells often develop in an uncontrolled manner, which frequently leads to cancer.
The aim of the research carried out over the past five years at HU, MGH and the Nencki Institute was to show that transplantation of small numbers of cells could restore the missing neuronal circuits and the lost brain functions. Genetically defective mice, deficient in the leptin receptor, were used in these experiments. Leptin is a protein secreted by cells of the fat tissue into the blood during eating. When it reaches the hypothalamus, it reacts with specific neurons; its presence causes the feeling of fullness, and its low level the feeling of hunger. Leptin receptor-deficient mice never experience fullness. They weigh up to twice as much as healthy individuals and suffer from advanced diabetes.
The team from Harvard University and the Nencki Institute focused on transplants of immature neurons (neuroblasts) and progenitors, which are stem cells with an already determined developmental direction. Cells isolated from small regions of developing embryonic brains of healthy mice were used for the transplantations. This increased the probability that cells introduced into recipients' brains would transform into neurons or accompanying glial cells.
Millions of cells are usually transplanted. In this project, however, scientists injected a suspension of barely several thousand progenitors and neuroblasts into the hypothalamus of mice. About 300 nanolitres of cell suspension was injected into the mouse hypothalamus using a minimally invasive method: a thin micropipette with a diameter only several times larger than the individual cells.
"The suspension was introduced into strictly defined region of the hypothalamus of mice, measuring about 200-400 micrometres in length. We were able to locate it thanks to unique high-frequency ultrasound microscopic guidance available at Harvard University. It allowed us to carry out complex non-invasive microtransplants with unprecedented precision, because we were able to carry out high resolution imaging of both the brain structures as well as the introduced micropipette," says Dr. Czupryn.
All transplanted cells were marked with a fluorescent protein, which made it possible to follow them in the recipients' brains. Observations carried out 20 or more weeks after the procedure showed that almost half of the transplanted cells transformed into neurons with typical morphology, producing proteins characteristic of normal neurons. Sophisticated research techniques demonstrated that the entire range of missing neuron types was restored in the brain centre controlling hunger and fullness. Moreover, the new neurons had formed synapses, communicated with other neurons in the brain, and reacted properly to changes in levels of leptin, glucose, and insulin.
The final proof of restored hypothalamic function in the mice came from measurements of body weight and blood metabolic factors. Unlike the control population of genetically defective obese mice, the mice with transplanted neurons approached normal weight. A reversal of the unfavourable changes in blood metabolic parameters was also observed.
"Many attempts at transplanting cells into the brain have been described in the literature to date. We have shown that a really small transplant of neuroblasts and progenitors was able to reconstitute damaged brain areas and influence the whole organism. We have shown that it is possible to introduce new neurons, which function properly, integrate well into the recipient nervous tissue and restore missing brain functions. Moreover, this method turned out to be minimally invasive and safe, since it did not lead to tumour formation," sums up Dr. Czupryn.
Results achieved by the group from Harvard University and the Nencki Institute define a promising research direction, which could lead to the development of new repair therapies. This novel method could help, for example, eliminate the effects of stroke or improve the treatment of Parkinson's disease, which is associated with dysfunction within a defined brain area. Scientists emphasize however that long years of experiments, research, and tests are needed before therapies based on their ideas end up in the clinics and hospitals.
Gray Matter in Brain’s Control Center Linked to Ability to Process Reward
Structure-function impairments observed in people addicted to cocaine
November 29, 2011
UPTON, NY — The more gray matter you have in the decision-making, thought-processing part of your brain, the better your ability to evaluate rewards and consequences. That may seem like an obvious conclusion, but a new study conducted at the U.S. Department of Energy’s Brookhaven National Laboratory is the first to show this link between structure and function in healthy people — and the impairment of both structure and function in people addicted to cocaine. The study appears in the Journal of Cognitive Neuroscience.
Methodology and key findings
Structural analysis of the MRI scans was performed using voxel-based morphometry (VBM). VBM is a whole-brain, fully automated, unbiased and operator-independent MRI analysis technique that is commonly used to detect regionally specific differences in brain tissue composition using voxel-wise comparisons. The gray matter tissue probability from each voxel was then correlated with the P300 amplitude difference between the highest monetary reward (45 cents per correct response) and no money (0 cents) conditions of a sustained attention task. The red/orange/yellow highlights on these brain scans indicate the regions where the correlation between gray matter volume and differential P300 response was quite strong in healthy control subjects but weak or nonexistent in cocaine-addicted individuals — the dorso-lateral and ventro-lateral prefrontal cortex, anterior cingulate cortex, and the orbitofrontal cortex, which are known to be functionally involved in reward processing and decision-making. These results suggest that the structural integrity of the prefrontal cortex modulates electrocortical sensitivity to monetary reward. Impairments in these regions may also be related to decreased ability to assess and respond to other modulated rewards and consequences, such as those associated with using addictive drugs.
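The core of this analysis, correlating a per-subject behavioural measure with gray matter at every voxel, can be sketched in a few lines. This is a simplified illustration with synthetic data, not the study's actual VBM pipeline; the array names, dimensions, and threshold are assumptions for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

n_subjects = 17   # healthy controls in the study
n_voxels = 1000   # tiny stand-in for a whole-brain gray matter map

# Synthetic inputs: per-voxel gray matter probability for each subject,
# and each subject's P300 amplitude difference (45 cents minus 0 cents).
gray_matter = rng.random((n_subjects, n_voxels))
p300_diff = rng.normal(size=n_subjects)

# Pearson correlation of the P300 difference with gray matter, per voxel.
gm = gray_matter - gray_matter.mean(axis=0)
pd = p300_diff - p300_diff.mean()
r = (gm * pd[:, None]).sum(axis=0) / (
    np.sqrt((gm ** 2).sum(axis=0)) * np.sqrt((pd ** 2).sum())
)

# Voxels with strong correlations would be highlighted, like the
# red/orange/yellow prefrontal regions in the published maps.
strong = np.abs(r) > 0.6
print(strong.sum(), "voxels show |r| > 0.6")
```

With random data few voxels pass the threshold; in the study, the surviving voxels clustered in prefrontal regions for controls but not for cocaine-addicted subjects.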
“This study documents for the first time the importance to reward processing of gray matter structural integrity in the parts of the brain’s prefrontal cortex that are involved in higher-order executive function, including self-control and decision-making,” said Muhammad Parvaz, a post-doctoral fellow at Brookhaven Lab and a co-lead author on the paper.
“Previous studies conducted at Brookhaven and elsewhere have explored the structural integrity of the prefrontal cortex in drug addiction and the functional components of reward processing, but these studies were conducted separately,” Parvaz said. “We wanted to know whether the specific function of reward processing could be ‘mapped’ onto the underlying brain structure — whether and how these two are related,” he added.
Differences in gray matter volume — the amount of brain matter made up of nerve cell bodies, as opposed to the “white matter” axons that form the connections between cells — have been observed in a range of neuropsychiatric diseases when compared with healthy states, explained Anna Konova, the other co-lead author on the paper. “We wanted to know more about what these differences mean functionally in healthy individuals and in drug-addicted individuals,” she said.
To explore this structure-function relationship, the scientists performed magnetic resonance imaging (MRI) brain scans to measure brain volume in 17 healthy people and 22 cocaine users. The scans collect structural measurements for the entire brain, and can be analyzed voxel-by-voxel — the equivalent of three-dimensional pixels — to get detailed measurements for individual brain regions.
Within a short period of the MRI scans, the scientists also used electrodes placed on the research subjects’ scalps to measure a particular electrical signal known as the P300 (an event-related potential derived from an ongoing electroencephalogram, or EEG, that is time-locked to a particular event). This specific measure can index brain activity related to reward processing. During these electrical recordings, the subjects performed a timed psychological task (pressing buttons according to a specific set of rules) with the prospect of earning varying levels of monetary reward, from no money up to 45 cents for each correct response with a total potential reward of $50.
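Extracting an event-related potential like the P300 from the ongoing EEG works by time-locking epochs to the event and averaging, so that random background activity cancels while the event-locked response survives. A toy sketch, where the sampling rate, trial count, and signal shapes are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
fs = 250                        # sampling rate in Hz (assumed)
n_trials = 400
t = np.arange(0, 0.8, 1 / fs)   # 800 ms epoch after each event

# A P300-like positive deflection peaking ~300 ms after the event,
# buried in much larger background EEG noise on every single trial.
p300 = 5.0 * np.exp(-((t - 0.3) ** 2) / (2 * 0.05 ** 2))
trials = p300 + rng.normal(scale=20.0, size=(n_trials, t.size))

# Averaging the time-locked epochs cancels the noise and reveals the ERP.
erp = trials.mean(axis=0)
peak_ms = 1000 * t[np.argmax(erp)]
print(f"averaged waveform peaks near {peak_ms:.0f} ms")
```

A single trial here is dominated by noise; only the average makes the component visible, which is why ERP studies need many repetitions of the task.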
Previous studies by the research team have shown that, in healthy subjects, the P300 signal increases in magnitude with the amount of monetary reward offered. Cocaine-addicted individuals, however, do not exhibit this differential response in the P300 measure of brain activity, even though they, like the healthy subjects, rate the task as more interesting and exciting when the potential reward is greater.
The current study extended these results by linking them for the first time with the structural measurements.
The scientists used statistical methods to look for correlations between the difference in brain activity observed in the high-reward and no-reward conditions — how much the brain’s P300 response changed with increasing reward — and the gray matter volume in various parts of the brain as measured voxel-by-voxel in the MRI scans.
In the healthy subjects, the magnitude of change in the P300 signal with increasing reward was most strongly correlated with the volume of gray matter in three regions of the prefrontal cortex.
“The higher the gray matter volume in those particular regions, the more brain activity increased for the highest monetary reward as compared to the non-reward condition,” Konova said.
The cocaine-addicted individuals had reduced gray matter volume in these regions compared with the healthy subjects, and no detectable differences between the reward conditions in the P300 measure of brain activity. There were also no significant correlations between the former and latter — structure and function measures — in the cocaine-addicted subjects.
“These findings suggest that impaired reward processing may be attributed to deficits in the structural integrity of the brain, particularly in prefrontal cortical regions implicated in higher order cognitive and emotional function,” Parvaz said. “This study therefore validates the use of the structural measures obtained by MRI as indicative of functional deficits.”
The implications are important for understanding the potential loss of control and disadvantageous decision-making that can occur in people suffering from drug addiction, Konova explained: “These structure-function deficits may translate into dysfunctional behaviors in the real world. Specifically, impaired ability to compare rewards, and reduced gray matter in the prefrontal cortex, may culminate in the compromised ability to experience pleasure and to control behavior, especially in high-risk situations — for example, when craving or under stress — leading individuals to use drugs despite catastrophic consequences.”
The authors acknowledge that there are still questions about whether these changes in brain structure and function are a cause or a consequence of addiction. But the use of multimodal imaging techniques, as illustrated by this study, may open new ways to address these and other questions relevant to understanding human motivation in both health and disease states, with particular relevance to treating drug addiction.
This research was performed at Brookhaven Lab under the guidance of Rita Goldstein, Director of Brookhaven Lab’s Neuropsychoimaging Group and the corresponding author on the paper. Dardo Tomasi of the National Institute on Alcohol Abuse and Alcoholism, who runs Brookhaven’s MRI facility, and Nora Volkow, Director of the National Institute on Drug Abuse (NIDA), were co-authors. The research was funded by a grant to Goldstein from the National Institutes of Health and by the General Clinical Research Center of Stony Brook University.
ScienceDaily (Nov. 29, 2011) — By placing real and virtual objects in the flight paths of bats, scientists at the Universities of Bristol and Munich have shed new light on how echolocation works. Their research is published today in Behavioural Processes.
Overlay of 20 video images showing the flight paths of bats passing the loudspeaker used for virtual object presentation. (Credit: Image by University of Bristol School of Biological Sciences)
The researchers found that it is not the intensity of the echoes that tells the bats the size of an object but the 'sonar aperture', that is the spread of angles from which echoes impinge on their ears.
Echolocating bats emit calls for orientation. These calls bounce off objects in a bat's environment, carrying information about the object back to the bat -- for example, the echoes of large objects are louder than those of small objects. Analysing echoes when surrounded by a cacophony of calls and echoes from other bats, however, is a difficult task for the auditory system.
The Bristol and Munich researchers first wanted to know whether bats are able to use echolocation in such a crowded situation at all. The team filmed the flight paths of hundreds of bats of 13 different species while the bats were emerging from a cave, and then placed a small novel object in the flight paths.
Dr Holger Goerlitz, now a Research Fellow at Bristol's School of Biological Sciences, was amazed by the experience: "The videos clearly showed curves in the bats' flight paths after we introduced the small novel object. This means that the bats were able to use echolocation in this familiar and crowded situation to detect the object, which measured only 5x8 cm, and to guide their evasive flight."
But how do bats perceive the size of an object from the echoes bouncing off it? To test whether bats use echo intensity, the team used echoes of virtual objects, which could be manipulated in size, from a loudspeaker. This method records the calls of passing bats and simulates in real time the echoes of objects that are not present physically -- just like a projector can show visual images of absent objects. Using this method for the first time with wild bats, the researchers could manipulate a single echo parameter -- intensity -- and study its effect on the perception of object size.
Although the virtual object's echo intensity corresponded to an object more than ten times larger than the small, real object used before, the bats did not show any evasive flight.
Dr Goerlitz said: "This result suggested that the virtual object was lacking a crucial feature for object size perception. We think that bats use another echo parameter beside intensity: the sonar aperture, which is the spread of angles of incidence from which echoes impinge on a bat's ears. The sonar aperture directly correlates with the size of real objects. And in contrast to real objects, virtual objects presented from a single loudspeaker lack a wide sonar aperture."
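The geometry behind the sonar aperture is simple: a larger object, or a closer one, subtends a wider spread of echo angles at the bat's ears. A minimal sketch (the function name and the face-on flat-object simplification are my own):

```python
import math

def sonar_aperture_deg(object_size_m: float, distance_m: float) -> float:
    """Angular spread (degrees) subtended at the bat's ears by an
    object of the given size, viewed face-on at the given distance."""
    return math.degrees(2 * math.atan(object_size_m / (2 * distance_m)))

# The 8 cm dimension of the real object subtends several degrees at 1 m,
wide = sonar_aperture_deg(0.08, 1.0)
# while a point-like loudspeaker source has a near-zero aperture,
# no matter how intense (i.e. how "large") its simulated echo is made.
narrow = sonar_aperture_deg(0.001, 1.0)
print(f"real object: {wide:.2f} deg, point source: {narrow:.3f} deg")
```

This is why boosting only the intensity of the virtual echo could not mimic a larger object: the aperture cue from a single loudspeaker stays essentially zero.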
A second study, just published in the Journal of Neuroscience by Dr Goerlitz's colleagues in Munich, confirms this finding. Using loudspeaker arrays, Melina Heinrich and colleagues trained bats in the lab to choose the larger of two objects. The results show that the bats were able to choose the larger object using the sonar aperture alone, independently of echo intensity. This behaviour was reflected in the activity of nerve cells that reacted specifically to echoes of a given sonar aperture.
Together, these studies have uncovered a novel mechanism for object size perception in bats, which exploits the small differences between the two ears generated by echoes arriving from different directions. In contrast, our eyes can measure object size directly from the two-dimensional retinal image. By perceiving the intensity and sonar aperture of object echoes, however, the auditory system has evolved its own solution for the perception of object features -- giving bats access to information about objects comparable to what we obtain with our eyes.
The six experiments at the LHC are all run by international collaborations, bringing together scientists from institutes all over the world. Each experiment is distinct, characterised by its unique particle detector.
The two large experiments, ATLAS and CMS, are based on general-purpose detectors to analyse the myriad of particles produced by the collisions in the accelerator. They are designed to investigate the largest range of physics possible. Having two independently designed detectors is vital for cross-confirmation of any new discoveries made.
Two medium-size experiments, ALICE and LHCb, have specialised detectors for analysing the LHC collisions in relation to specific phenomena.
Two further experiments, TOTEM and LHCf, are much smaller in size. They are designed to focus on "forward particles" (protons or heavy ions). These are particles that just brush past each other as the beams collide, rather than meeting head-on.
The ATLAS, CMS, ALICE and LHCb detectors are installed in four huge underground caverns located around the ring of the LHC. The detectors used by the TOTEM experiment are positioned near the CMS detector, whereas those used by LHCf are near the ATLAS detector.
The LHC, the world’s largest and most powerful particle accelerator, is the latest addition to CERN’s accelerator complex. It mainly consists of a 27-kilometre ring of superconducting magnets with a number of accelerating structures to boost the energy of the particles along the way.
Inside the accelerator, two beams of particles travel at close to the speed of light with very high energies before colliding with one another. The beams travel in opposite directions in separate beam pipes – two tubes kept at ultrahigh vacuum. They are guided around the accelerator ring by a strong magnetic field, achieved using superconducting electromagnets. These are built from coils of special electric cable that operates in a superconducting state, efficiently conducting electricity without resistance or loss of energy. This requires chilling the magnets to about ‑271°C – a temperature colder than outer space. For this reason, much of the accelerator is connected to a distribution system of liquid helium, which cools the magnets, as well as to other supply services.
Thousands of magnets of different varieties and sizes are used to direct the beams around the accelerator. These include 1232 dipole magnets of 15m length which are used to bend the beams, and 392 quadrupole magnets, each 5–7m long, to focus the beams. Just prior to collision, another type of magnet is used to "squeeze" the particles closer together to increase the chances of collisions. The particles are so tiny that the task of making them collide is akin to firing needles from two positions 10km apart with such precision that they meet halfway!
The CERN Control Centre
All the controls for the accelerator, its services and technical infrastructure are housed under one roof at the CERN Control Centre. From here, the beams inside the LHC are made to collide at four locations around the accelerator ring, corresponding to the positions of the particle detectors.
Our understanding of the Universe is about to change...
The Large Hadron Collider (LHC) is a gigantic scientific instrument near Geneva, where it spans the border between Switzerland and France about 100m underground. It is a particle accelerator used by physicists to study the smallest known particles – the fundamental building blocks of all things. It will revolutionise our understanding, from the minuscule world deep within atoms to the vastness of the Universe.
Two beams of subatomic particles called "hadrons" – either protons or lead ions – travel in opposite directions inside the circular accelerator, gaining energy with every lap. Physicists use the LHC to recreate the conditions just after the Big Bang, by colliding the two beams head-on at very high energy. Teams of physicists from around the world then analyse the particles created in the collisions using special detectors in a number of experiments dedicated to the LHC.
There are many theories as to what will result from these collisions. For decades, the Standard Model of particle physics has served physicists well as a means of understanding the fundamental laws of Nature, but it does not tell the whole story. Only experimental data using the high energies reached by the LHC can push knowledge forward, challenging those who seek confirmation of established knowledge, and those who dare to dream beyond the paradigm.
In the early 1990s, one thing was fairly certain about the expansion of the Universe. It might have enough energy density to stop its expansion and recollapse, or it might have so little energy density that it would never stop expanding, but gravity was certain to slow the expansion as time went on. Granted, the slowing had not been observed, but, theoretically, the Universe had to slow: it is full of matter, and the attractive force of gravity pulls all matter together. Then came 1998 and the Hubble Space Telescope (HST) observations of very distant supernovae, which showed that, a long time ago, the Universe was actually expanding more slowly than it is today. So the expansion of the Universe has not been slowing due to gravity, as everyone thought; it has been accelerating. No one expected this, and no one knew how to explain it. But something was causing it.
Eventually theorists came up with three sorts of explanations. Maybe it was a result of a long-discarded version of Einstein's theory of gravity, one that contained what was called a "cosmological constant." Maybe there was some strange kind of energy-fluid that filled space. Maybe there is something wrong with Einstein's theory of gravity and a new theory could include some kind of field that creates this cosmic acceleration. Theorists still don't know what the correct explanation is, but they have given the solution a name. It is called dark energy.
What Is Dark Energy?
This diagram reveals changes in the rate of expansion since the universe's birth 15 billion years ago. The shallower the curve, the faster the rate of expansion. The curve changes noticeably about 7.5 billion years ago, when objects in the universe began flying apart at a faster rate. Astronomers theorize that the faster expansion rate is due to a mysterious, dark force that is pulling galaxies apart.
More is unknown than is known. We know how much dark energy there is because we know how it affects the Universe's expansion. Other than that, it is a complete mystery. But it is an important mystery. It turns out that roughly 70% of the Universe is dark energy. Dark matter makes up about 25%. The rest - everything on Earth, everything ever observed with all of our instruments, all normal matter - adds up to less than 5% of the Universe. Come to think of it, maybe it shouldn't be called "normal" matter at all, since it is such a small fraction of the Universe.
One explanation for dark energy is that it is a property of space. Albert Einstein was the first person to realize that empty space is not nothing. Space has amazing properties, many of which are just beginning to be understood. The first property that Einstein discovered is that it is possible for more space to come into existence. Then one version of Einstein's gravity theory, the version that contains a cosmological constant, makes a second prediction: "empty space" can possess its own energy. Because this energy is a property of space itself, it would not be diluted as space expands. As more space comes into existence, more of this energy-of-space would appear. As a result, this form of energy would cause the Universe to expand faster and faster. Unfortunately, no one understands why the cosmological constant should even be there, much less why it would have exactly the right value to cause the observed acceleration of the Universe.
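The acceleration described here can be read off the standard cosmological equations. In general relativity with a cosmological constant Λ, the acceleration of the scale factor a of the Universe obeys:

```latex
\frac{\ddot{a}}{a} = -\frac{4\pi G}{3}\left(\rho + \frac{3p}{c^{2}}\right) + \frac{\Lambda c^{2}}{3}
```

Ordinary matter and radiation (positive density ρ and pressure p) make the first term negative and decelerate the expansion, while a positive Λ contributes a constant positive term. Because that term does not dilute as space grows, it eventually dominates and drives the acceleration that the supernova observations revealed.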
Perseus Cluster Dwarf Galaxies
These four dwarf galaxies are part of a census of small galaxies in the tumultuous heart of the nearby Perseus galaxy cluster. The galaxies appear smooth and symmetrical, suggesting that they have not been tidally disrupted by the pull of gravity in the dense cluster environment. Larger galaxies around them, however, are being ripped apart by the gravitational tug of other galaxies.
Another explanation for how space acquires energy comes from the quantum theory of matter. In this theory, "empty space" is actually full of temporary ("virtual") particles that continually form and then disappear. But when physicists tried to calculate how much energy this would give empty space, the answer came out wrong - wrong by a lot. The number came out 10^120 times too big. That's a 1 with 120 zeros after it. It's hard to get an answer that bad. So the mystery continues.
Another explanation for dark energy is that it is a new kind of dynamical energy fluid or field, something that fills all of space but something whose effect on the expansion of the Universe is the opposite of that of matter and normal energy. Some theorists have named this "quintessence," after the fifth element of the Greek philosophers. But, if quintessence is the answer, we still don't know what it is like, what it interacts with, or why it exists. So the mystery continues.
A last possibility is that Einstein's theory of gravity is not correct. That would not only affect the expansion of the Universe, but it would also affect the way that normal matter in galaxies and clusters of galaxies behaved. This fact would provide a way to decide if the solution to the dark energy problem is a new gravity theory or not: we could observe how galaxies come together in clusters. But if it does turn out that a new theory of gravity is needed, what kind of theory would it be? How could it correctly describe the motion of the bodies in the Solar System, as Einstein's theory is known to do, and still give us the different prediction for the Universe that we need? There are candidate theories, but none are compelling. So the mystery continues.
The thing that is needed to decide between dark energy possibilities - a property of space, a new dynamic fluid, or a new theory of gravity - is more data, better data.
What Is Dark Matter?
Abell 2744: Pandora's Cluster Revealed
One of the most complicated and dramatic collisions between galaxy clusters ever seen is captured in this new composite image of Abell 2744. The blue shows a map of the total mass concentration (mostly dark matter).
By fitting a theoretical model of the composition of the Universe to the combined set of cosmological observations, scientists have come up with the composition that we described above, ~70% dark energy, ~25% dark matter, ~5% normal matter. What is dark matter?
We are much more certain what dark matter is not than what it is. First, it is dark, meaning that it is not in the form of stars and planets that we see. Observations show that there is far too little visible matter in the Universe to make up the 25% required by the observations. Second, it is not in the form of dark clouds of normal matter, matter made up of particles called baryons. We know this because we would be able to detect baryonic clouds by their absorption of radiation passing through them. Third, dark matter is not antimatter, because we do not see the unique gamma rays that are produced when antimatter annihilates with matter. Finally, we can rule out large galaxy-sized black holes on the basis of how many gravitational lenses we see. High concentrations of matter bend light passing near them from objects further away, but we do not see enough lensing events to suggest that such objects make up the required 25% dark matter contribution.
However, at this point, there are still a few dark matter possibilities that are viable. Baryonic matter could still make up the dark matter if it were all tied up in brown dwarfs or in small, dense chunks of heavy elements. These possibilities are known as massive compact halo objects, or "MACHOs". But the most common view is that dark matter is not baryonic at all, but is made up of other, more exotic particles such as axions or WIMPs (Weakly Interacting Massive Particles).