Tuesday, November 29, 2011

Do Lipid Rafts Exist?

The contention that molecular platforms known as lipid rafts sail on the cell's outer, or plasma, membrane has kept researchers debating for more than a decade. Although many scientists argue that rafts either don't exist or have no biological relevance, their supporters insist the idea remains afloat. Cell biologist Kai Simons, now at the Max Planck Institute of Molecular Cell Biology and Genetics in Dresden, Germany, and his colleague Elina Ikonen christened the term "lipid raft" in a 1997 Nature paper that detailed the concept. At the time, the main model of the plasma membrane portrayed it as a sea of lipids through which proteins drifted with little or no organization.
But the duo proposed that two kinds of lipids, cholesterol and sphingolipids, huddle together in the membrane, producing stable formations they called rafts. One line of evidence for that concept, the team noted, was the goop left behind in test tube studies when certain detergents dissolve the plasma membrane. This so-called detergent-resistant membrane oozes with cholesterol, sphingolipids, and select membrane proteins.
Rafts serve the cell, the hypothesis suggested, because they gather in one place the proteins necessary for a particular task, such as importing material or relaying a message across the plasma membrane. Proposed passengers on the rafts included glycosylphosphatidylinositol (GPI)-anchored proteins, which adhere to the outer layer of the plasma membrane and perform functions such as receiving signals and helping cells stick together. The idea roiled the cell biology community. "Right away, there were two camps," Simons says.
"One camp didn't believe a word.- But plenty of scientists hopped aboard. More than 3000 papers later, the activities attributed to lipid rafts include promoting drug resistance in cancer cells and serving as escape hatches for viruses such as the ones that cause flu. Possibly the most debated hypothesis invoked rafts to explain the activation of the T cell receptor, the cell surface protein that spurs these immune cells to action when a pathogen is on the loose in the body. Incorporating the receptor into a raft helps switch it on. studies have suggested, possibly by allowing the receptor to hobnob with other proteins necessary for stimulating the T cell or because those proteins need the raft environment to work. Members of both camps concur that the raft concept was compelling and galvanized investigation into membrane organiza-tion.
"The raft hypothesis is brilliant in some ways," says biophysical chemist Jay Groves of the University of Calitbrnia, Berkeley. "My personal opinion is that the very idea of rafts enriches scientific research' says biophysicist Sarah Keller of the University of Washington, Seattle, "whether or not rafts exist in either specific cases or more generally!' But how solid is the proof there are rafts? Skeptics abound. and they've scored some hits on the original raft evidence. Membrane biologist Michael Edidin of Johns Hopkins Uni-versity in Baltimore, Maryland, says the field has fallen victim to what he calls the "sins of detergent extraction!' Too many researchers have assumed that detergent-resistant mem-branes are genuine rafts, even though studies reveal that extraction can disrupt their compo-sition.
 "The idea of these isolatable islands of raft lipids is probably not viable," says membrane biologist Ken Jacobson of the Univer-sity of North Carolina, Chapel I fill. According to the raft hypothesis, certain lipids naturally sort themselves to create the organized pockets of proteins that make up rafts. But many researchers don't buy that mechanism for inducing order in the mem-brane. It is too passive, especially when the plasma membrane is constantly churning, says Satyaj it Mayor, a membrane biologist at the National Centre for Biological Science in Bangalore, India. Instead, he says. his group's research points to a more active pro-cess in which "the cell is using energy to con-struct regions in the membrane." Groves says the original hypothesis gave lipids too much credit—and proteins too little.
"Proteins define their own environment. Lipids almost completely follow their behavior," he says. Critics have also griped because the vital statistics of lipid rafts, such as their size and life span on a cell membrane, have proven so difficult to pin down. In an early study, Simons and colleagues estimated the d iam-eter of rafts at about 50 nanometers, or more than 3000 sphingolipid molecules across. In a 2006 attempt to sharpen the raft definition, a group of membrane researchers suggested a size range of 10 nanometers to 200 nano-meters, and other estimates have come in higher or lower. Rafts still have their supporters, how-ever. Akihiro Kusumi, a membrane biophysicist at Kyoto University in Japan. says that if researchers specify raft criteria, such as size. and spell out which isolation techniques they use, they can demonstrate structures that qualify as rafts.
For his part, Simons acknowledges the failings of detergent extraction but counters that new cell imaging techniques are adding to the evidence for rafts. Researchers using one form of super-resolution microscopy, known as stimulated emission depletion microscopy, found in 2009 that sphingolipids and GPI-anchored proteins tarried in certain molecular clusters in the membrane, as if they briefly joined rafts. Cell biologists say it's important to resolve the lipid raft debate eventually because the plasma membrane controls what enters and exits cells and how they send and receive signals. Although researchers have proposed several alternatives for how the plasma membrane organizes itself, none of them has caught on. But if a better explanation rises to the surface, cell biologists will have to give some of the credit to rafts.
SOURCE : SCIENCE MAGAZINE VOL 334

Monday, November 28, 2011

To Self-Diagnose, Spit On iPhone

Handheld gadgets could one day diagnose infections at the push of a button by using the supersensitive touchscreens in today's smartphones. Many believe that in the future samples of saliva, urine or blood could be collected using a cheap, USB-stick-sized throwaway device called a lab-on-a-chip. The user would inject a droplet of the fluid into the chip, and micropumps inside it would send the fluid to internal vessels containing reagents that extract target disease biomarker molecules. The whole device would then be sent to a lab for analysis.
But Hyun Gyu Park and Byoung Yeon Won at the Korea Advanced Institute of Science and Technology in Daejeon think touchscreens could improve the process by letting your phone replace the lab work. Park suggests the lab-on-a-chip could present a tiny droplet of the sample to be pressed against a phone's touchscreen for analysis, where an app would work out whether you have food poisoning, strep throat or flu, for example. The idea depends on a method the pair have devised to harness the way a touchscreen senses a fingertip's ability to store electric charge — known as its capacitance.
The capacitive sensitivity of touchscreens is far higher than what is needed to sense our fingers as we play games or tap out tweets. "Since these touchscreens can detect very small capacitance changes we thought they could serve as highly sensitive detection platforms for disease biomarkers," says Park. So the pair began proof-of-concept tests to see if the touchscreens in our pockets could play a role in diagnosing our ailments.
First they took three solutions containing differing concentrations of DNA from the bacterium that causes chlamydia and applied droplets from each to an iPhone-sized multitouch display. They found that the output from the screen's array of crisscrossed touch-sensing electrodes could distinguish between the capacitances caused by each concentration using droplets of only 10 microlitres (Angewandte Chemie International Edition, DOI: 10.1002/anie.201105986).
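To give a feel for what such an app would have to do with those readings, here is a minimal, purely illustrative sketch in Python: it assigns a capacitance reading to the nearest of three calibrated reference values. The reference numbers and labels are invented for illustration; the paper reports only that the three concentrations produced distinguishable capacitances.

```python
# Illustrative sketch only: classify a droplet's DNA concentration by
# comparing a capacitance reading against pre-measured references.
# All numbers below are hypothetical, not taken from the paper.

# Hypothetical calibration: mean capacitance change (picofarads) per concentration.
REFERENCES = {
    "low": 1.2,
    "medium": 2.7,
    "high": 4.1,
}

def classify(reading_pf: float) -> str:
    """Return the reference concentration whose capacitance is nearest."""
    return min(REFERENCES, key=lambda label: abs(REFERENCES[label] - reading_pf))

print(classify(2.5))  # -> "medium"
```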
The technology is not yet able to identify individual pathogens, but Park sees the display's ability to differentiate between concentrations as a first step towards this. However, before the idea can be rolled out, the built-in software on touchscreens that eliminates false-touch signals caused by moisture or sweat would need modifying.
Park also plans to develop a film that can be stuck on a touchscreen to which the biomarkers will attach. "Nobody wants direct application of bio-samples onto their phone," he says. "This is potentially possible," says Harpal Minhas, editor of the journal Lab On A Chip. "But any changes to current production-line touchscreens would need to demonstrate huge financial benefits before they are implemented." And DNA sequencing, rather than concentration measurement, is more likely to be necessary for disease diagnosis, he adds.
SOURCE : NEW SCIENTIST MAGAZINE NOVEMBER 2011

Sunday, November 27, 2011

Extreme Weather, Time To Prepare

An international scientific assessment finds for the first time that human activity has indeed driven not just global warming but also increases in some extreme weather and climate events around the world in recent decades. And those and likely other weather extremes will worsen in coming decades as greenhouse gases mount, the report finds.
But uncertainties are rife in the still-emerging field of extreme events. Scientists cannot attribute a particular drought or flood to global warming, and they can say little about past or future trends in the risk of high-profile hazards such as tropical cyclones. Damage from weather disasters has been climbing, but the report can attribute that trend only to the increasing exposure of life and property to weather risks. Climate change may be involved, but a case cannot yet be made.
Despite the uncertainties, the special report from the Intergovernmental Panel on Climate Change (IPCC) released 18 November stresses that there is still reason for taking action now. The panel recommends "low-regrets measures," such as improvements in everything from drainage systems to early warning systems. Such measures would benefit society in dealing with the current climate as well as with almost any range of possible future climates.
The report takes a cautious, consensus-based approach that draws on the published literature. Headlines and even some scientists may point to the current Texas drought or the 2003 European heat wave as the result of the strengthening greenhouse. But the report finds that extreme weather and climate events are far too rare to blame any one of them on global warming. A 29-page summary released for policymakers has one sentence on the subject: "Attribution of single extreme events to anthropogenic climate change is challenging."
The report does find "evidence ... of change in some extremes." These are generally lower-profile changes. For example, the report finds that it is likely that the number of cold days and nights has decreased since 1950. In many regions "there is medium con-fidence that the length or number of warm spells, or heat waves, has increased." And the frequency of heavy precipitation events has changed in some regions, with increases being more likely than decreases.
There is no sign that any of these climate changes has been driving the obvious rise in economic losses from weather- and climate-related disasters, the report finds. Instead, it says, "the major cause of the long-term increases in economic losses" has been an increase in the number of dangerously placed people and their increasing wealth. More and more people have been living in the path of disastrous weather, whether poor people with nowhere else to live but low-lying deltas or the rich flocking to the coastlines.
Advocates and some scientists have pushed mounting disasters as reasons for action to rein in global warming. But "as compelling as disasters are," says climate policy analyst Roger Pielke Jr. of the University of Colorado, Boulder, "I've never thought disasters were an appropriate use" for advocating reduction of greenhouse emissions. "I give some credit to the IPCC," he says.
The report does find reasons to take certain kinds of action. It points to evidence that at least some of the recent changes can be attributed to humans. "It is likely that" human influences have raised the lowest and highest temperatures in a day on a global scale. And the intensification of extreme precipitation can likely be attributed to human influence. Based on climate model results and basic physics, these and perhaps other trends are likely to continue and accelerate as the greenhouse strengthens. Tropical cyclone maximum wind speeds are likely to increase, the report says, droughts will intensify in some regions, and sea level will continue to rise, flooding low-lying coastal areas.
Even with trends in extreme events continuing, "in many regions, the main drivers for future increases in economic losses due to some climate extremes will be socioeconomic in nature," according to the report. That is, the main driver will be increasing exposure of rich and poor to climatic hazards, with the poor being more vulnerable than the rich. But whatever the drivers of future losses and whatever the uncertainties, low-regrets actions can be taken now, according to the report. "Even with substantial uncertainties about extremes and extreme events that may lie ahead," says Thomas Wilbanks of Oak Ridge National Laboratory in Tennessee, a report lead author, "there are things that we can—and should—be doing now to increase our resilience."
The report lists actions that it says would improve human well-being in the short term while laying a foundation for tackling the changes that appear to be in the offing. Planning land use, managing ecosystems, and improving water supplies and irrigation systems all provide "chances to make the world more livable while decreasing risk" from future climate changes, said Christopher Field of Stanford University in Palo Alto, California, a co-chair for the report. Rajendra Pachauri, chair of the IPCC, added his hope that that message and the rest of the report would be well received at the 2011 United Nations Climate Change Conference that starts 28 November in Durban, South Africa.

SOURCE : SCIENCE MAGAZINE VOLUME 334

Sauna's Boost For Heart and Humour

That warm, fuzzy feeling you get from sitting in a sauna isn't in your imagination — and it may also help your heart. People with chronic heart failure who took saunas five times a week for three weeks improved their heart function and the amount of exercise they could do. Meanwhile, neurons that release the "happiness molecule" serotonin respond to increases in body temperature, perhaps explaining the sauna's pleasurable effects.
Heart failure occurs when the heart is unable to supply enough blood to the body, resulting in shortness of breath and difficulty exercising. Previous studies have hinted that saunas might boost health. To investigate, Takashi Ohori at the University of Toyama in Japan and colleagues asked 41 volunteers with heart failure to take 15-minute saunas five times per week, using a blanket for 30 minutes afterwards to keep their body temperature about 1°C higher than normal.
Sauna treatment increased the heart's ability to pump blood, and boosted the distance participants could walk in 6 minutes from 337 metres to 379 metres. The team also noticed improved function of the endothelium - the membrane lining the inside of the heart that releases factors controlling the diameter of blood vessels and clotting.
The researchers also found more circulating endothelial progenitor cells - adult stem cells that can turn into endothelial cells (The American Journal of Cardiology, DOI: 10.1016/j.amjcard.2011.08.014). In a separate study, the same group temporarily cut off blood supply to rats' hearts to mimic a heart attack, then gave them a sauna every day for four weeks. Later examination saw fewer of the changes to the heart's chambers that usually occur after heart attacks in rats not exposed to a sauna. In addition, the sauna rats showed increases in endothelial nitric oxide synthase, an enzyme that regulates blood pressure and the growth of new blood vessels (AJP: Heart and Circulatory Physiology, DOI: 10.1152/ajpheart.00103.2011).
"We think that repeated saunas trigger pathways that produce nitric oxide and other signalling molecules that eventually reduce resistance to the pumping capacity of the heart," says Tofy Mussivand at the University of Ottawa Heart Institute in Ontario, Canada, who was not involved in the research. Heating might have other benefits, says Christopher Lowry of the University of Colorado at Boulder. He has identified a group of serotonin-releasing neurons in a region of the brain called the dorsal raphe nucleus, which fire in response to increases in body temperature.
They seem to initiate cooling, but these neurons also project into a region of the brain that regulates mood, which may account for the pleasure of a sauna. Intriguingly, these same neurons feed into the sympathetic nervous system. Activation of the SNS boosts blood pressure and heart rate, but "by heating up the skin you inhibit the sympathetic nervous system, which is probably a good thing if you've had a heart attack", says Lowry. Mussivand cautions against people with heart failure rushing to the nearest spa, though. "Cardiologists currently don't recommend that heart failure patients should be exposed to heat, so this has to be done under medical supervision," he says.

SOURCE : NEW SCIENTIST MAGAZINE NOVEMBER 2011

Friday, November 25, 2011

Alzheimer’s Damage Reversed With A Jolt

Brain shrinkage in people with Alzheimer's disease can be reversed in some cases - by jolting the degenerating tissue with electrical impulses. Moreover, doing so reduces the cognitive decline associated with the disease. "In Alzheimer's disease it is known that the brain shrinks, particularly the hippocampus," says Andres Lozano at Toronto Western Hospital in Ontario, Canada.
What's more, brain scans show that the temporal lobe, which contains the hippocampus, and another region called the posterior cingulate use less glucose than normal, suggesting they have shut down. Both regions play an important role in memory. To try to reverse these degenerative effects, Lozano and his team turned to deep brain stimulation - sending electrical impulses to the brain via implanted electrodes.
The group inserted electrodes into the brains of six people who had been diagnosed with Alzheimer's at least a year earlier. They placed the electrodes next to the fornix - a bundle of neurons that carries signals to and from the hippocampus - and left them there, delivering tiny pulses of electricity 130 times per second.
Follow-up tests a year later showed that the reduced use of glucose by the temporal lobe and posterior cingulate had been reversed in all six people (Annals of Neurology, DOI: 10.1002/ana.22089). The researchers have now begun to investigate the effects on the hippocampus. At the Society for Neuroscience annual meeting in Washington DC last week they announced that while they saw hippocampal shrinking in four of the volunteers, the region grew in the remaining two participants.
"Not only did the hippocampus not shrink, it got bigger - by 5 per cent in one person and 8 per cent in the other," says Lozano. It's an amazing" result, he adds. Tests showed that these two individuals appeared to have better than expected cognitive function, although the other four volunteers did not. Though Lozano is not sure exactly how the treatment works, his team's recent work in mice suggests that the electrical stimulation might drive the birth of new neurons in the brain.
Deep brain stimulation in mice also triggers the production of proteins that encourage neurons to form new connections. The researchers are now embarking on a trial involving around 50 people, but John Wesson Ashford at Stanford University, California, wonders how practical the approach will be when there are millions of people with Alzheimer's. Lozano points out that around 90,000 people worldwide with Parkinson's disease have already received deep brain stimulation. The incidence of Alzheimer's is only five times that of Parkinson's, he says. "If it can be used in Parkinson's, it can be used in Alzheimer's."
SOURCE : NEW SCIENTIST MAGAZINE NOVEMBER 2011

Humanity’s First Word? Duh!

You may think humanity's first words are lost in the noise of ancient history, but an unlikely experiment using plastic tubes and puffs of air is helping to recreate the first sounds uttered by our distant ancestors.
Many animals communicate with sounds, but it is the variety of our language that sets us apart. Over millions of years, changes to our vocal organs have allowed us to produce a rich mix of sounds. One such change was the loss of the air sac — a balloon-like organ that helps primates to produce booming noises.
All primates have an air sac except humans, in whom it has shrunk to a vestigial organ. Palaeontologists can date when our ancestors lost the organ, as the tissue attaches to a skeletal feature called the hyoid bulla, which is absent in humans. "Lucy's baby", an Australopithecus afarensis girl who lived 3.3 million years ago, had a hyoid bulla; but by the time Homo heidelbergensis arrived on the scene 600,000 years ago, air sacs were a thing of the past.
To find out how this changed the sounds produced, Bart de Boer of the University of Amsterdam in the Netherlands created artificial vocal tracts from shaped plastic tubes. Air forced down them produced different vowel sounds, and half of the models had an extra chamber to mimic an air sac. De Boer played the sounds to 22 people and asked them to identify the vowel. If they got it right, they were asked to try again, only this time noise was added to make it harder to identify the sound. If they got it wrong, noise was reduced.
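The right-answer-add-noise, wrong-answer-reduce-noise procedure described above is a standard adaptive staircase from psychophysics. Here is a minimal sketch of that logic in Python; the step size, trial count and simulated listener are invented for illustration, not taken from de Boer's study.

```python
def staircase(identifies_correctly, step=0.05, trials=60):
    """1-up/1-down staircase: add noise after a correct answer,
    remove noise after a mistake. The level the procedure settles
    around estimates how much noise the listener can tolerate."""
    noise = 0.0
    levels = []
    for _ in range(trials):
        if identifies_correctly(noise):
            noise += step                    # harder: more noise
        else:
            noise = max(0.0, noise - step)   # easier: less noise
        levels.append(noise)
    return sum(levels[-10:]) / 10            # average of the final trials

# Toy listener who is reliable below a true tolerance threshold of 0.4.
print(round(staircase(lambda n: n < 0.4), 2))  # settles near 0.4
```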
He found that those listening to tubes without air sacs could tolerate much more noise before the vowels became unintelligible. The air sacs acted like bass drums, resonating at low frequencies, and causing vowel sounds to merge; Lucy's baby would have had a greatly reduced vocabulary. Even simple words — such as "tin" and "ten" —would have sounded the same to her.
Observations of soldiers from the first world war corroborate de Boer's findings. Poison gas enlarged the vestigial air sacs of some soldiers, who are said to have had speech problems that made them hard to comprehend.
De Boer's study provides clear evidence supporting the idea that the need to produce complex sounds to communicate better made air sacs shrink, says Ann MacLarnon of the University of Roehampton in London. More sounds meant more information could be shared, giving those who lacked air sacs a better chance of survival in a dangerous world.
De Boer found that air sacs also interfered with the workings of the vocal cords, making consonants trickier. Only once they had gone could words like "perpetual", requiring rapid changes in sound, be produced.
What, then, might our ancestors' first words have been? With air sacs, vowels tend to sound like the "u" in "ugg". But studies suggest it is easier to produce a consonant plus a vowel, and "d" is easier to form with "u". "Drawing it all together, I think it is likely cavemen and cavewomen said 'duh' before they said 'ugg'," says de Boer.
SOURCE : NEW SCIENTIST MAGAZINE NOVEMBER 2011

Our Ancestor, The Mega-Organism

ONCE upon a time, 3 billion years ago, there lived a single organism called LUCA. It was enormous: a mega-organism like none seen since, it filled the planet's oceans before splitting into three and giving birth to the ancestors of all living things on Earth today. 
This strange picture is emerging from efforts to pin down the last universal common ancestor — not the first life that emerged on Earth but the life form that gave rise to all others. The latest results suggest LUCA was the result of early life's fight to survive, attempts at which turned the ocean into a global genetic swap shop for hundreds of millions of years. Cells struggling to survive on their own exchanged useful parts with each other without competition — effectively creating a global mega-organism. 
It was around 2.9 billion years ago that LUCA split into the three domains of life: the single-celled bacteria and archaea, and the more complex eukaryotes that gave rise to animals and plants. It's hard to know what happened before the split. Hardly any fossil evidence remains from this time, and any genes that date that far back are likely to have mutated beyond recognition.
That isn't an insuperable obstacle to painting LUCA's portrait, says Gustavo Caetano-Anolles of the University of Illinois at Urbana-Champaign. While the sequence of genes changes quickly, the three-dimensional structure of the proteins they code for is more resistant to the test of time. So if all organisms today make a protein with the same overall structure, he says, it's a good bet that the structure was present in LUCA. He calls such structures living fossils, and points out that since the function of a protein is highly dependent on its structure, they could tell us what LUCA could do. 
"Structure is known to be conserved when sequences aren't," agrees Anthony Poole of the University of Canterbury in Christchurch, New Zealand, though he cautions that two very similar structures could conceivably have evolved independently after LUCA. 
To reconstruct the set of proteins LUCA could make, Caetano-Anolles searched a database of proteins from 420 modern organisms, looking for structures that were common to all. Of the structures he found, just 5 to 11 per cent were universal, meaning they were conserved enough to have originated in LUCA (BMC Evolutionary Biology, DOI: 10.1186/1471-2148-11-140).
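The search Caetano-Anolles describes amounts to intersecting the sets of protein structures found in each organism. Here is a toy sketch of that idea in Python, with invented fold names standing in for the real 420-organism database.

```python
# Minimal sketch of the "universal structure" search described above:
# a fold present in every organism is a candidate for having been in LUCA.
# The data is a toy stand-in, not the actual database used in the study.

organisms = {
    "E. coli":       {"P-loop NTPase", "TIM barrel", "ferredoxin-like"},
    "S. cerevisiae": {"P-loop NTPase", "TIM barrel", "beta-propeller"},
    "H. sapiens":    {"P-loop NTPase", "TIM barrel", "immunoglobulin-like"},
}

universal = set.intersection(*organisms.values())
fraction = len(universal) / len(set.union(*organisms.values()))

print(universal)                     # {'P-loop NTPase', 'TIM barrel'}
print(f"{fraction:.0%} universal")   # 40% universal in this toy data
```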
By looking at their function, he concludes that LUCA had enzymes to break down and extract energy from nutrients, and some protein-making equipment, but it lacked the enzymes for making and reading DNA molecules.
 This is in line with unpublished work by Wolfgang Nitschke of the Mediterranean Institute of Microbiology in Marseille, France. He reconstructed the history of enzymes crucial to metabolism and found that LUCA could use both nitrate and carbon as energy sources. Nitschke presented his work at the UCL Symposium on the Origin of Life in London on 11 November.
If LUCA was made of cells it must have had membranes, and Armen Mulkidjanian of the University of Osnabruck in Germany thinks he knows what kind. He traced the history of membrane proteins and concluded that LUCA could only make simple isoprenoid membranes, which were leaky compared with more modern designs (Proceedings of the International Moscow Conference on Computational Molecular Biology, 2011, p 92).
LUCA probably also had an organelle, a cell compartment with a specific function. Organelles were thought to be the preserve of eukaryotes, but in 2003 researchers found an organelle called the acidocalcisome in bacteria. Caetano-Anolles has now found that tiny granules in some archaea are also acidocalcisomes, or at least their precursors. That means acidocalcisomes are found in all three domains of life, and date back to LUCA (Biology Direct, DOI: 10.1186/1745-6150-6-50).
So LUCA had a rich metabolism that used different food sources, and it had internal organelles. So far, so familiar. But its genetics are a different story altogether. For starters, LUCA may not have used DNA. Poole has studied the history of enzymes called ribonucleotide reductases, which create the building blocks of DNA, and found no evidence that LUCA had them (BMC Evolutionary Biology, DOI: 10.1186/1471-2148-10-383). Instead, it may have used RNA: many biologists think RNA came first because it can store information and control chemical reactions (New Scientist, 13 August, p 32).
The crucial point is that LUCA was a "progenote", with poor control over the proteins that it made, says Massimo Di Giulio of the Institute of Genetics and Biophysics in Naples, Italy. Progenotes can make proteins using genes as a template, but the process is so error-prone that the proteins can be quite unlike what the gene specified. Both Di Giulio and Caetano-Anolles have found evidence that systems that make protein synthesis accurate appear long after LUCA. "LUCA was a clumsy guy trying to solve the complexities of living on primitive Earth," says Caetano-Anolles.
He thinks that in order to cope, the early cells must have shared their genes and proteins with each other. New and useful molecules would have been passed from cell to cell without competition, and eventually gone global. Any cells that dropped out of the swap shop were doomed. "It was more important to keep the living system in place than to compete with other systems," says Caetano-Anolles. He says the free exchange and lack of competition mean this living primordial ocean essentially functioned as a single mega-organism.
"There is a solid argument in favour of sharing genes, enzymes and metabolites," says Mulkidjanian. Remnants of this gene-swapping system are seen in communities of microorganisms that can only survive in mixed communities. And LUCA's leaky membranes would have made it easier for cells to share.
"It's a plausible idea," agrees Eric Alm of the Massachusetts Institute of Technology. But he says he "honestly can't tell" if it is true.
Only when some of the cells evolved ways of producing everything they needed could the mega-organism have broken apart. We don't know why this happened, but it appears to have coincided with the appearance of oxygen in the atmosphere, around 2.9 billion years ago. Regardless of the cause, life on Earth was never the same again.

SOURCE : NEW SCIENTIST MAGAZINE NOVEMBER 2011

Thursday, November 24, 2011

A Collection Of Nothings Means Everything To Mathematics

THE mathematicians' version of nothing is the empty set. This is a collection that doesn't actually contain anything, such as my own collection of vintage Rolls-Royces. The empty set may seem a bit feeble, but appearances deceive; it provides a vital building block for the whole of mathematics. It all started in the late 1800s.
While most mathematicians were busy adding a nice piece of furniture, a new room, even an entire storey to the growing mathematical edifice, a group of worrywarts started to fret about the cellar. Innovations like non-Euclidean geometry and Fourier analysis were all very well - but were the underpinnings sound? To prove they were, a basic idea needed sorting out that no one really understood. Numbers. Sure, everyone knew how to do sums.
Using numbers wasn't the problem. The big question was what they were. You can show someone two sheep, two coins, two albatrosses, two galaxies. But can you show them two? The symbol "2"? That's a notation, not the number itself. Many cultures use a different symbol. The word "two"? No, for the same reason: in other languages it might be deux or zwei or futatsu. For thousands of years humans had been using numbers to great effect; suddenly a few deep thinkers realised no one had a clue what they were. An answer emerged from two different lines of thought: mathematical logic, and Fourier analysis, in which a complex waveform describing a function is represented as a combination of simple sine waves.
These two areas converged on one idea. Sets. A set is a collection of mathematical objects - numbers, shapes, functions, networks, whatever. It is defined by listing or characterising its members. "The set with members 2, 4, 6, 8" and "the set of even integers between 1 and 9" both define the same set, which can be written as {2, 4, 6, 8}.
Around 1880 the mathematician Georg Cantor developed an extensive theory of sets. He had been trying to sort out some technical issues in Fourier analysis related to discontinuities — places where the waveform makes sudden jumps. His answer involved the structure of the set of discontinuities. It wasn't the individual discontinuities that mattered, it was the whole class of discontinuities.
How many dwarfs?
One thing led to another. Cantor devised a way to count how many members a set has, by matching it in a one-to-one fashion with a standard set. Suppose, for example, the set is {Doc, Grumpy, Happy, Sleepy, Bashful, Sneezy, Dopey}.
To count them we chant "1, 2, 3..." while working along the list: Doc (1), Grumpy (2), Happy (3), Sleepy (4), Bashful (5), Sneezy (6), Dopey (7). Right: seven dwarfs. We can do the same with the days of the week: Monday (1), Tuesday (2), Wednesday (3), Thursday (4), Friday (5), Saturday (6), Sunday (7). Another mathematician of the time, Gottlob Frege, picked up on Cantor's ideas and thought they could solve the big philosophical problem of numbers.
The way to define them, he believed, was through the deceptively simple process of counting. What do we count? A collection of things — a set. How do we count it? By matching the things in the set with a standard set of known size. The next step was simple but devastating: throw away the numbers.
You could use the dwarfs to count the days of the week. Just set up the correspondence: Monday (Doc), Tuesday (Grumpy)... Sunday (Dopey). There are Dopey days in the week. It's a perfectly reasonable alternative number system. It doesn't (yet) tell us what a number is, but it gives a way to define "same number". The number of days equals the number of dwarfs, not because both are seven, but because you can match days to dwarfs. What, then, is a number? Mathematical logicians realised that to define the number 2, you need to construct a standard set which intuitively has two members. To define 3, use a standard set with three numbers, and so on.
But which standard sets to use? They have to be unique, and their structure should correspond to the process of counting. This was where the empty set came in and solved the whole thing by itself. Zero is a number, the basis of our entire number system. So it ought to count the members of a set. Which set? Well, it has to be a set with no members. These aren't hard to think of: "the set of all honest bankers", perhaps, or "the set of all mice weighing 20 tonnes". There is also a mathematical set with no members: the empty set.
It is unique, because all empty sets have exactly the same members: none. Its symbol, introduced in 1939 by a group of mathematicians that went by the pseudonym Nicolas Bourbaki, is ∅. Set theory needs ∅ for the same reason that arithmetic needs 0: things are a lot simpler if you include it. In fact, we can define the number 0 as the empty set. What about the number 1? Intuitively, we need a set with exactly one member. Something unique. Well, the empty set is unique. So we define 1 to be the set whose only member is the empty set: in symbols, {∅}. This is not the same as the empty set, because it has one member, whereas the empty set has none.
Agreed, that member happens to be the empty set, but there is one of it. Think of a set as a paper bag containing its members. The empty set is an empty paper bag. The set whose only member is the empty set is a paper bag containing an empty paper bag. Which is different: it's got a bag in it. The key step is to define the number 2. We need a uniquely defined set with two members. So why not use the only two sets we've mentioned so far: ∅ and {∅}? We therefore define 2 to be the set {∅, {∅}}. Which, thanks to our definitions, is the same as {0, 1}. Now a pattern emerges. Define 3 as {0, 1, 2}, a set with three members, all of them already defined. Then 4 is {0, 1, 2, 3}, 5 is {0, 1, 2, 3, 4}, and so on. Everything traces back to the empty set: for instance, 3 is {∅, {∅}, {∅, {∅}}} and 4 is {∅, {∅}, {∅, {∅}}, {∅, {∅}, {∅, {∅}}}}.
You don't want to see what the number of dwarfs looks like. The building materials here are abstractions: the empty set and the act of forming a set by listing its members. But the way these sets relate to each other leads to a well-defined construction for the number system, in which each number is a specific set that intuitively has that number of members. The story doesn't stop there. Once you've defined the positive whole numbers, similar set-theoretic trickery defines negative numbers, fractions, real numbers (infinite decimals), complex numbers... all the way to the latest fancy mathematical concept in quantum theory or whatever. So now you know the dreadful secret of mathematics: it's all based on nothing.
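The construction is concrete enough to run. Here is a minimal sketch in Python, using frozensets as the "paper bags"; it follows the successor rule implied above (each number is the set of all smaller numbers), not any code from the article.

```python
# The von Neumann construction sketched above: each number is the set
# of all the numbers before it, built from nothing but the empty set.

def number(n):
    """Return n encoded as a set: 0 is {}, and n+1 is n together with {n}."""
    current = frozenset()              # 0 = the empty set
    for _ in range(n):
        current = current | {current}  # successor: add the set itself as a member
    return current

zero, one, two = number(0), number(1), number(2)
assert zero == frozenset()             # the empty paper bag
assert one == frozenset({zero})        # a bag containing an empty bag
assert two == frozenset({zero, one})   # 2 = {0, 1}
assert len(number(7)) == 7             # seven dwarfs, counted from nothing
```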
SOURCE : NEW SCIENTIST MAGAZINE NOVEMBER 2011

Wednesday, November 23, 2011

A Burger Every Few Days To Keep Climate Change At Bay

Meat is bad: bad for you, bad for the environment. At least that's the usual argument. Each year, the doors to the UN climate negotiations, which kick off again in Durban, South Africa, on 28 November, are assailed by demonstrators brandishing pro-vegetarian placards. The fact is that livestock farming accounts for a whopping 15 per cent of all greenhouse gas emissions. We can't all go veggie, so just how much meat is it OK for an eco-citizen to eat?
It's not just the demonstrators who are concerned about food's impact on the climate. This week, a major report concludes that food production is too close to the limits of a "safe operating space" defined by how much we need, how much we can produce, and its impact on the climate.
Meat is a major contributor to that: 80 per cent of agricultural emissions come from meat production, and the problem is getting worse. As people get richer, the demand for protein gets stronger, says Molly Jahn, a former undersecretary at the US Department of Agriculture, and one of the authors of Achieving Food Security in the Face of Climate Change, commissioned by the Consultative Group on International Agricultural Research (CGIAR). It's unrealistic to expect everyone to give up meat entirely, and many of the world's poor need to increase their meat consumption to overcome malnutrition and food insecurity.
The solution is to eat less meat rather than no meat. In 2007, Colin Butler of the Australian National University in Canberra estimated that the average person consumed 100 grams of meat a day, or about one burger (a quarter-pounder is 113 g). The rich eat 10 times more than the poor - in other words, some people get 10 burgers a day while others get none. Butler showed that if every person in the world ate 50 g of red meat and 40 g of white meat per day by 2050, greenhouse gas emissions from meat production would stabilise at 2005 levels - a target cited in national plans for agricultural emissions. That's about one burger and one small chicken breast per person every two days.
Butler's 2007 figures didn't take into account the fact that we throw out a lot of the animal mass produced because we consider it inedible. Western countries are the biggest offenders: while many cultures are not fazed by a meal of brains or testicles, Butler estimates that Americans and Australians throw out up to half the cow mass they produce.
At New Scientist's request, he updated his calculations. He estimates that globally we discard between 5 and 10 per cent of the animal. This means we can only allow ourselves 80 to 85 g of red and white meat, or one burger and one chicken fillet every three days. That's an upper limit: emissions may need to be cut further. Our allowance would drop further if more people were as wasteful as the Americans and Australians. And, according to CGIAR, in addition to the waste between the abattoir and the plate, one-third of all produced food is spoiled because of poor refrigeration, pests and bulk packaging that encourages consumers to buy more than they can eat. All of which eats into our meat allowance.
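The arithmetic behind those figures is easy to check. A back-of-envelope sketch in Python, using only the numbers quoted above (a 90 g per day production target, with 5 to 10 per cent of the animal discarded):

```python
# Back-of-envelope check of the figures above. The 90 g/day target is
# Butler's 2050 allowance (50 g red + 40 g white meat per person);
# the discard range is his 5-10 per cent estimate for thrown-away animal mass.

target_g_per_day = 50 + 40  # red + white meat produced, grams per person

for discard in (0.05, 0.10):
    edible = target_g_per_day * (1 - discard)
    print(f"{discard:.0%} discarded -> {edible:.0f} g actually eaten per day")

# Prints roughly 86 g and 81 g, close to the article's "80 to 85 g" range.
```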
SOURCE : NEW SCIENTIST MAGAZINE NOVEMBER 2011

Tuesday, November 22, 2011

Hothouse Earth Is On The Horizon


An era of ice that has gripped Earth's poles for 35 million years could come to an end as extreme global warming really begins to bite. Previously unknown sources of positive feedback — including "hyperwarming" that was last seen on Earth half a billion years ago— may push global temperatures high enough to send Earth into a hothouse state with tropical forests growing close to the poles.
Climate scientists typically limit themselves to the 21st century when predicting how human activity will affect global temperatures. The latest predictions are bolder, though: the first systematic forecasts through to 2300 are beginning to arrive. They follow four possible futures, including one in which we rapidly cut emissions and another in which we burn fossil fuels into the 22nd century (Climatic Change, DOI: 10.1007/s10584-011-0157-y).
Chris Jones of the UK Met Office in Exeter says that unpublished results suggest the "burn everything" scenario could see atmospheric carbon dioxide levels reach 2000 parts per million—the figure today is 388 ppm. That pulse of CO2 could lead to a global temperature rise of 10°C. Temperatures this high were last seen in the Eocene, 34 million years ago, says Paul Pearson of Cardiff University in the UK. Conditions were so different back then that the Canadian High Arctic was populated by plants that are now found in the south-eastern US (Proceedings of the Royal Society B, DOI: 10.1098/rspb.2011.1704).
The Eocene marked the end of a hothouse that had begun in the Cretaceous. Throughout this time there was no ice at the poles; Antarctica was once populated by dinosaurs. Might the predicted rise in temperature be enough to see a return to an ice-free world?
The poles will warm much more than the tropics, says Tim Lenton of the University of Exeter, UK, so the Arctic could well lose all its ice. But Antarctic ice would probably survive, thinks Andrew Watson of the University of East Anglia, in Norwich, UK, because Antarctica is isolated from the rest of the continents.
In fact, Antarctica may have gained its ice when it became cut off from Australia during the Eocene and lost the warming influence of equatorial currents. Plate tectonic models predict that Antarctica will remain isolated from the other continents for at least the next 250 million years (New Scientist, 17 September, p 16). Even so, Antarctica's icy future may not be secure. The long-term climate models that go up to the year 2300 are missing key positive feedbacks that could send global temperatures towards levels high enough to melt even an isolated Antarctica.
In particular, the release of methane from melting Arctic permafrost has not yet been factored in. Methane is a potent greenhouse gas, but remains in the atmosphere for only 10 years on average before it reacts with hydroxyl radicals in the air to form CO2. However, a large release of methane from melting permafrost could swamp the hydroxyl supply, allowing the methane to linger in the atmosphere for 15 years or more, further amplifying the warming (Global Biogeochemical Cycles, DOI: 10.1029/2010GB003845). Some feedbacks never before considered might also come into play. Pearson says that in the future oceans may store less carbon. Normally some atmospheric carbon is lost at sea, buried in the carcasses of tiny marine animals. But sediment from the Eocene contains little carbon, suggesting that this process failed during the last hothouse (Paleoceanography, DOI: 10.1029/2005PA001230).
To work out why, Pearson looked at fossils of foraminifera, microscopic shelled marine animals. The tiny shells contain a chemical record of the position the animals occupied in the water column when they were alive. He found that Eocene foraminifera lived closer to the ocean surface than they do today, suggesting there was little food to sustain deeper-dwelling species.
Pearson thinks the warmer temperatures allowed bacteria at the ocean surface to metabolise faster, recycling carbon before it could sink and feed foraminifera living at depth. "If we warm the planet now, we switch on our bacteria," he said last month at a Royal Society discussion meeting in London.
A warming climate will also see trees and other large plants spreading north into the Arctic, says Bette Otto-Bliesner of the US National Center for Atmospheric Research in Boulder, Colorado, who also attended last month's Royal Society event. Plants are darker than snow, so they absorb more of the sun's radiation. When Otto-Bliesner plugged the effect into a climate model of the Arctic, it got 3°C warmer.
Then there's hyperwarming. Ed Landing of the New York State Museum in Albany coined the term to describe the spiralling temperatures seen during the Cambrian period as a result of rising sea levels. Vast areas of the continents were covered with shallow seas during the Cambrian, which began 542 million years ago, because sea levels were sometimes tens of metres higher than today. Sea water absorbs more of the sun's heat than land, so swamping the continents caused the planet to warm up even more. Sea temperatures reached 40°C and oxygen levels in the water crashed (Palaeogeography, Palaeoclimatology, Palaeoecology, DOI: 10.1016/j.palaeo.2011.09.005).
Something similar could happen again today. "These effects will operate as sea level rises to an appreciable degree and floods continental areas," agrees Thomas Algeo of the University of Cincinnati in Ohio. However, the effect today may not be as strong as it was in the Cambrian, says Lee Kump of Pennsylvania State University in University Park. There were no land plants back then, so the continents were more reflective and flooding them had a bigger effect.
Pearson and Landing's processes have not yet been plugged into any climate models so we do not know how significant they will be to our future. Pearson emphasises that hothouse Earth is far from inevitable. "We can prevent this happening," he says. But as researchers dig deeper into the factors that influence global climate, it is becoming increasingly clear that global warming might be about to get much more extreme.
SOURCE : NEW SCIENTIST MAGAZINE NOVEMBER 2011

See Beyond The Light To Find Future Disease

DEEP in the heart of the cell, your DNA may be undergoing subtle changes that could lead to a devastating disease several years down the line. New microscopy techniques are now lifting the lid on this inner world, potentially offering an early-warning system for cancer or Alzheimer's long before the diseases begin to bite.
Full-blown disease may be preceded by a long build-up. For example, a change in chromatin — the complex of DNA and proteins that packages DNA into the cell nucleus— is one of the earliest events to occur after exposure to carcinogens or ultraviolet rays. Changes sometimes happen years before symptoms of a tumour manifest themselves.
 However, tracking those changes has been frustratingly beyond the reach of medicine. They involve tweaks to structures that are less than 400 nanometres across, which is smaller than the wavelength of the visible light used in ordinary optical microscopy.
"When you have two structures that are smaller than the wavelength of light, you can't really tell them apart and everything is merged into one big blur," says Vadim Backman of Northwestern University in Evanston, Illinois. "We're missing all that complexity!' To make sense of the blur, Backman has ditched standard microscopes in favour of a method called partial wave spectroscopic (PWS) microscopy.
PWS looks at how a light beam interacts with a cell. As the beam travels through the cell it reflects off different structures within it according to their density. The pattern from the reflected light is used to reconstruct the nanoscale detail inside the cell.
"It's almost like you have a cat in a black box. Instead of trying to X-ray it, you hear it miaow and so you know it is a cat," says Backman, who presented his work at the Frontiers in Cancer Prevention Research meeting in Boston last month. PWS is one of many new techniques for studying cells at the nanoscale.
It is particularly good at detecting changes in density in complexes like chromatin. So far, Backman has used PWS to show that apparently healthy cells taken from people with lung, colon, pancreatic, ovarian and oesophageal cancer have unusual chromatin densities not seen in cells from people who are cancer-free. What's more, such changes are relatively easy to detect because they often occur in normal cells as well as those that are or will become cancerous.
For example, Backman used PWS to identify which of 135 smokers had lung cancer and which were cancer-free by analysing cells swabbed from the inside of the cheek (Cancer Research, DOI: 10.1158/0008-5472.CAN-10-1686). Similarly, he found that a swab of rectal cells could identify people with colon cancer, and a cervical swab could detect women with ovarian cancer. "It is a very creative and promising method," says Igor Sokolov of Clarkson University in Potsdam, New York, who is using another nanoscale technique called atomic force microscopy to look for differences between healthy and cancerous cervical cells.
"Anything that provides new information about cellular structure at the nanoscale will potentially be advantageous for both diagnostics and further understanding of diseases." The hope is that PWS could be used to screen the general population for early signs of cancer. Backman also has preliminary evidence that PWS could be used to diagnose autoimmune diseases such as inflammatory bowel syndrome and to investigate the changes in cells that cause Alzheimer's disease to develop.

SOURCE : NEW SCIENTIST MAGAZINE NOVEMBER 2011

Brain Doping


MOST of us want to reach our full potential. We might drink a cup of coffee to stay alert, or go for a run to feel on top of the job. So where's the harm in taking a pill that can do the same thing?
So-called cognitive-enhancing drugs are usually prescribed to treat medical conditions, but they are also known for their ability to improve memory or focus. Many people buy them over the internet, which is risky because they don't know what they are getting. We also know next to nothing about their long-term effects on the brains of healthy people, particularly the young. But some scientists believe they could have a beneficial role to play in society, if properly regulated.
So who's taking what? The BBC's flagship current affairs show Newsnight and New Scientist ran an anonymous online questionnaire to find out. I also decided to try a cognitive enhancer for myself.
 The questionnaire was completed by 761 people, with 38 per cent saying they had taken a cognitive-enhancing drug at least once. Of these, nearly 40 per cent said they had bought the drug online and 92 per cent said they would try it again. Though not representative of society, the survey is an interesting, anecdotal snapshot of a world for which there is little data.
 The drugs people said they had taken included modafinil, normally prescribed for sleep disorders, and Ritalin and Adderall, taken for ADHD. The range of experiences is striking. One respondent wrote: "It helps me extend my concentration. I can study a topic for six hours, for example, that would have me bored to tears in two." Another wrote: "Did not help me do anything but feel anxious and excited, could not sit still even 15 hours later."
 When asked about the drugs' potential impact on society, people reported concerns beyond safety, for example warning that the drugs might create a two-tier education system in which some can afford the drugs and others can't. They voiced wider concerns too, such as: "If society has come to the point that we have to take cognitive enhancers to function or perform to certain expected levels, then it is a society that has placed performance over happiness and health."
Laurie Pycroft, a student at the University of Oxford, talked to Newsnight about his experiences with modafinil. "I've taken it a few times, primarily for its ability to increase wakefulness and allow me to concentrate and stay awake for very extended periods of time. I don't take it very often but if I want to stay awake for 20 or 30 hours working on an essay it's very useful," he said.
Keen to learn more, I contacted Barbara Sahakian, a neuroscientist at the University of Cambridge. She and her team work with people who have conditions such as Alzheimer's and Parkinson's disease. One area of their research is testing whether cognitive-enhancing drugs such as modafinil help. Sahakian thinks these drugs could play a wider role in society.
Her most recent research showed that sleep-deprived surgeons performed better on modafinil. "I do think we've undervalued [the drugs]. As a society we could perhaps move forward if we all had a form of cognitive enhancement that was safe," she told me. Before I could self-experiment with the drug I had to satisfy Sahakian's colleague James Rowe that there were no risks. We also had trained medical staff nearby. I took a tablet on two separate days without knowing which one was modafinil and which was a placebo. I then did an hour or so of tests involving memory, strategy, planning and tests of impulsiveness.
On the second day I felt more focused and in control and thought I performed better in the tests. That was the day I had been given modafinil. Rowe summed up my performance: "What we've seen today is some very striking improvements ... in memory and, for example, your planning abilities and on impulsivity." It's human nature to want to push against our limitations, but what about the risks? Before sanctioning a drug as a cognitive enhancer for healthy people, regulators would require long-term safety studies so they could weigh up the risks and benefits.
Pharmaceutical companies are not rushing to carry out such studies, but Sahakian is calling for such work to be done before someone comes to harm. Some cognitive enhancers, such as Ritalin, are controlled drugs. Modafinil is not, so it is legal to buy it online, though it is illegal to supply it without a prescription. The UK government, through the Medicines and Healthcare products Regulatory Agency, told Newsnight that tackling the illegal sale and supply of medicines over the Internet is a priority. It's not just students who claim to find the drug beneficial.
Anders Sandberg of the Future of Humanity Institute at the University of Oxford talks openly about using cognitive-enhancing drugs. He is about to start a study in Germany to compare the effects of a range of cognitive enhancers, including two hormones —ghrelin, which promotes hunger, and oxytocin, which is associated with empathy—to test their powers at what he calls "moral enhancement".
"Once we have figured out how morality works as an emotional and mental system there might be ways of improving it," he told me. The bottom line is that cognitive-enhancing pills are a reality and people are using them. But how comfortable are we with the knowledge that some of our children's classmates might be taking such drugs to perform better at school, or that one candidate for a job interview might use modafin i I to outshine the others? And who was the real me, the one on modafinil, or the one not? Perhaps we should start thinking these questions through, before a drug offering far more than a few percentage points of enhancement comes our way.
SOURCE : NEW SCIENTIST MAGAZINE NOVEMBER 2011

Monday, November 21, 2011

Liquid Power For Chips

GETTING microchips wet is normally best avoided. But a new type of chip that is both powered and cooled by fluid pumping through it could power the computers, smartphones and tablets of the future. If the design is successful, its inventors at IBM argue an entire supercomputer—like Watson, the firm's natural-language-processing trivia savant—could one day be squeezed onto mobile devices small enough to fit in your pocket.
The idea is inspired by the way the human brain is powered, says Bruno Michel, who is leading the work at IBM's Zurich Research Laboratory in Switzerland. "The human brain is 10,000 times more dense and efficient than any computer today. That's possible because it uses only one, extremely efficient, network of capillaries and blood vessels to transport heat and energy, all at the same time," he says.
Michel and his team's idea is to stack hundreds of silicon wafers on top of each other to create three-dimensional processors. Between each layer is a pair of fluidic networks. One of these carries in charged fluid to power the chip, while the second carries away the same fluid after it has picked up heat from the active transistors —effectively creating a microscopic flow battery.
Chips in 3D have already been developed. Intel's Ivy Bridge processors, which are expected to appear in consumer products next year, will use vertical transistors to allow more components to be crammed into a given area, yielding many times the processing power of conventional 2D chips. Intel has also dabbled with stacking microchips on top of each other, as have other chip manufacturers including Belgium-based IMEC and Tezzaron in Illinois.
"The use of liquid to cool 3D chips is not new," says Bob Patti, chief technology officer of Tezzaron. "However, using the liquid as a power source as well as for cooling is a concept I haven't seen before." Mark Zwolinski at the University of Southampton, UK, agrees it's an interesting approach. "To get increases in high-performance computing it's going to be necessary to move chips closer together," he says, and that means stacking them. But powering them with liquid is uncharted territory, he says. "It's not completely outrageous. I can't think why it shouldn't work, but it has never been done before."
It had better work, says Michel, because the computing industry has for decades depended on computer chips to double in processing power approximately every two years—the phenomenon known as Moore's law. But as transistors on conventional chips have shrunk, so too have the wires connecting them, increasing their resistance and making them less energy-efficient.
It takes about 85 kilowatts to run Watson, for example — enough to heat a dozen homes. And the machine's servers take up the same amount of space as two large refrigerators.
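The "dozen homes" comparison is easy to check. A minimal sketch, assuming an average heating demand of about 7 kilowatts per home (an assumed figure, not one from the article):

    WATSON_KW = 85        # power draw quoted in the article
    HOME_HEAT_KW = 7.0    # assumed average heating demand per home
    print(f"{WATSON_KW / HOME_HEAT_KW:.0f} homes")  # ~12 homes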
Using this biologically inspired approach to combine the electrical and cooling systems into one should make it possible to reduce that power consumption considerably. Michel says he and his colleagues have demonstrated that it is possible to use a liquid to transfer power via a network of fluidic channels, and they plan to build a working prototype chip by 2014. If successful, we could end up with Watsons in our pockets, powered by a battery akin to that found in a cellphone.
SOURCE : NEW SCIENTIST MAGAZINE NOVEMBER 2011

Sunday, November 20, 2011

Ethical and Plentiful Stem Cells From Milk

Embryonic-like stem cells have been discovered in breast milk in large numbers. This is the first time such cells have been found in an adult. If the cells live up to their potential we may soon have stem cells for medical therapy, without destroying any embryos.
Back in 2008, Peter Hartmann at the University of Western Australia in Crawley and his colleagues announced they had discovered stem cells in breast milk. Crucially, these cells have now been turned into the kinds of cells that represent all three embryonic germ layers — the endoderm, mesoderm and ectoderm — a defining property of embryonic stem cells (ESCs). "They can become bone cells, joint cells, pancreatic cells that produce their own insulin, liver cells that produce albumin, and neuronal cells," says Foteini Hassiotou of Hartmann's team, who led the recent work.
The cells also express the majority of protein markers that you would expect to find in ESCs. "What is really amazing is that these cells can be obtained in quite large amounts in breast milk," Hassiotou adds.
She says the stem cells make up around 2 per cent of cells in breast milk, although the number varies according to how long the woman has been producing milk and how full her breasts are. Hassiotou will present the work early next year at the 7th International Breastfeeding and Lactation Symposium in Vienna, Austria.
Many researchers remain sceptical. "Perhaps there are some mammary gland stem cells that can be coaxed to have a broader potential, but I very much doubt that embryonic-like cells normally exist in the breast," says Robin Lovell-Badge of the UK's National Institute for Medical Research in London.
The real test will be to inject these cells into mice and see if they form teratomas — tumours containing tissue or structures derived from all three germ layers. "That's the gold standard for whether you have a true pluripotent cell," says Chris Mason of University College London. Hassiotou says they will start these tests in coming weeks.
Embryonic-like stem cells have been found in amniotic fluid and the umbilical cord, but never before in adults. Other adult stem cells exist, such as those that can generate blood or turn into bone, fat and cartilage cells. But these stem cells cannot generate as many cell types as the breast milk cells appear to. "If they are truly embryonic, this would be another way of getting stem cells that would not raise ethical concerns," says Mason.
Even if they do not turn out to be ESCs, these breast milk cells could still have great potential for regenerative medicine. "It might be possible to grow these cells then bank them so that if or when the mother develops some disease later in life, such as diabetes, her cells may be defrosted and differentiated into pancreatic beta cells," says Lyle Armstrong of Newcastle University, UK, although he cautions that more tests are needed to determine exactly what these cells are.
SOURCE : NEW SCIENTIST MAGAZINE NOVEMBER 2011

Pull Out Photons From Empty Space

You can get something from nothing — as long as you are moving close to the speed of light. The discovery confirms a 41-year-old prediction on how to pull energy from empty space and produce light.
The phenomenon relies on the long-established fact that empty space is not at all empty, but fizzing with particles that pop in and out of existence. This is down to the laws of quantum mechanics, which say that even a vacuum cannot have exactly zero energy but must exhibit small fluctuations of energy. These fluctuations show themselves as pairs of short-lived particles.
The presence of these "virtual" particles, usually photons, has long been proved in experiments demonstrating the standard Casimir effect, in which two parallel mirrors set close together will feel a pull towards each other. This happens because the small space between the mirrors limits the number of virtual photons that can appear in this region. Since there are more photons outside this space, the radiation pressure on the mirrors from the outside is larger than the pressure between them, which pushes the mirrors together.
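For reference, the standard textbook result (not given in the article) for the attractive pressure between two ideal parallel mirrors a distance a apart is, in LaTeX notation:

    P = -\frac{\pi^2 \hbar c}{240\, a^4}

The 1/a^4 dependence is why the mirrors must be set so close together: the pull dies off extremely quickly as the separation grows.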
Now Chris Wilson at Chalmers University of Technology in Gothenburg, Sweden, and his colleagues have gone a step further, pulling photons out of the void in a process called the dynamical Casimir effect. "It was a difficult technical experiment," says Wilson. "We were very happy when it worked."
The effect needs only a single metal mirror, but it must move at close to the speed of light through the sea of virtual photons in empty space. Because the mirror is a conductor, the photons — which are electromagnetic particles — will absorb some of its kinetic energy. They then radiate this extra energy by producing pairs of real photons.
Clearly, moving a mirror at close to light speed is impractical. So the researchers used a superconducting electrical circuit with an oscillator that rapidly alters the distance an electron must travel through the circuit.
The electron's movement is determined by the location at which the circuit's electric field falls to zero. To control the circuit's characteristics, the team used a superconducting quantum interference device. With this SQUID they were able to change the distance from the electron to the zero-field location so quickly that the electron appeared to move at a quarter of the speed of light. This was fast enough for the circuit to emit real photons (Nature, DOI: 10.1038/nature10561). "Particles were produced in pairs, coming right out of the vacuum," Wilson says.
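To see why modulating a circuit can stand in for a relativistically moving mirror: if the effective electrical length swings sinusoidally with amplitude ΔL at frequency f, the "mirror" reaches a peak speed of 2πfΔL. A minimal Python sketch; the drive frequency and amplitude below are assumed values chosen to illustrate the arithmetic, not figures from the paper:

    import math

    c = 3.0e8          # speed of light, m/s
    f_drive = 10e9     # assumed modulation frequency, ~10 GHz
    delta_L = 1.2e-3   # assumed effective-length swing, m
    # L(t) = L0 + delta_L * sin(2*pi*f_drive*t), so the peak of dL/dt is:
    v_peak = 2 * math.pi * f_drive * delta_L
    print(f"peak effective speed: {v_peak:.2e} m/s = {v_peak / c:.2f} c")
    # ~7.5e7 m/s, i.e. about a quarter of the speed of light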
"This is a significant breakthrough," says Diego Dalvit, a physicist at the Los Alamos National Laboratory in New Mexico. The energy of virtual photons is cosmologists' best guess of what lies behind the dark energy that is causing the universe's expansion to accelerate. The experiment will open possibilities for doing table-top experiments of cosmology", Dalvit says.
SOURCE : NEW SCIENTIST MAGAZINE NOVEMBER 2011

Thursday, November 17, 2011

Malaria's Nemesis

FORTY years ago a secret military project in communist China yielded one of the greatest drug discoveries in modern medicine. Artemisinin remains the most effective treatment for malaria today and has saved millions of lives. Until recently, though, the drug's origins were a mystery.
"I was at a meeting in Shanghai in 2005 with all of the Chinese inalariologists and tasked who discovered artemisinin," says Louis Miller, a malaria researcher at the US National Institutes of Health in Rockville, Maryland. "I was shocked that no one knew." Miller and his NIH colleage Xinzhuan Su began digging into the drug's history. After reviewing letters, researchers' original notebooks and transcripts from once-secret meetings, they concluded the major credit should go to pharmacologist Tu Youyou. Two months ago In received America's top medical accolade, the Lasker award. Now 80, Tu still runs a lab in Beijing where she continues to study artemisinin. Shortly before receiving the award she met me at a hotel near New York's Central Park.
Joining us was her son-in-law, Lei Mao, a physician living in North Carolina, who served as interpreter. Tu is a diminutive figure with short, jet black hair that curls in wisps around her ears. Reading glasses dangle from a chain around her neck. Responding to any kind of praise, she is softly spoken and painfully modest. Talking about her research, however, she speaks with an urgency and passion undimmed by the passing years. Tu carried out her work in the 1960s and 70s at the height of China's Cultural Revolution, a government-imposed attempt to forge a new kind of society according to its notion of socialism. It was a chaotic and frightening time, when scientists and other intellectuals were seen as class enemies and arbitrarily sent to work in the countryside for "re-education".
Scientific publication was forbidden. Yet China had a pressing need that trumped any political cause. One of its few allies, North Vietnam, was at war with South Vietnam and its US ally, and malaria was rampant in the region. At the time the primary treatment was a drug called chloroquine, but the malaria parasite was rapidly evolving resistance. The country was losing more soldiers to malaria than to American bullets. China's leader Mao Zedong set up a secret drug discovery project, known only as 523, for the date it was launched: 23 May 1967. Within a couple of years hundreds of scientists had tested thousands of synthetic compounds without success, and it was common knowledge that a similar programme in the US had drawn a blank too. With no synthetic drugs forthcoming, attention turned to China's traditional medicines. The government asked the Academy of Traditional Chinese Medicine in Beijing to appoint one of its researchers to scour China's herb garden for a cure.
The academy chose Tu, a mid-career scientist who had studied both Chinese and western medicine and knew enough about both to realise it would not be an easy job. "By the time I started my search over 240,000 compounds had been screened in the US and China without any positive results," she says. Soon after joining project 523, Tu was sent to Hainan province, a region in the far south long plagued by malaria, to observe the effects of the disease firsthand. As Tu's husband had been banished to the countryside at the time, she had to entrust her 4-year-old daughter to the care of a local nursery. On Tu's return to Beijing six months later, her daughter didn't recognise her and hid from the "strange woman" who came to take her home. But Tu seems to bear no bitterness. "The work was the top priority, so I was certainly willing to sacrifice my personal life," she says. And her time in Hainan had made a big impression. "I saw a lot of children who were in the last stages of malaria," Tu says. "Those kids died very quickly." She and three assistants reviewed more than 2000 recipes for traditional Chinese remedies in the academy's library. They made 380 herbal extracts and tested them on mice. One of the compounds did indeed reduce the number of malaria parasites in the blood. It was derived from sweet wormwood (Artemisia annua), a plant common throughout China, which was used in a treatment for "intermittent fevers" — a hallmark of malaria.
The team carried out further tests, only to be baffled when the compound's powers seemed to melt away. Tu reread the recipe, written more than 1600 years ago in a text appositely titled "Emergency Prescriptions Kept Up One's Sleeve". The directions were to soak one bunch of wormwood in water and then drink the juice. Tu realised that their method of preparation, boiling up the wormwood, might have damaged the active ingredient. So she made another preparation using an ether solvent, which boils at 35°C. When tested on mice and monkeys, it proved 100 per cent effective. "We had just cured drug-resistant malaria," Tu says. "We were very excited." But would it work in humans — and was it safe? Tu volunteered to be the first test subject. "As the head of this research group, I had the responsibility," she says. After suffering no ill effects, Tu began clinical trials with labourers who had contracted malaria in the forest. Within 30 hours their fevers had subsided and parasites were gone from their blood. Tu's work wasn't published until 1977, after the turmoil of the Cultural Revolution had died down.
As was customary, the authors remained anonymous; in such an egalitarian society the group was considered more important than the individual. The discovery of artemisinin remains a point of pride for China, and some argue it shows the worth of scouring herbal lore for other buried botanical gems. The drug now helps tens of millions of people a year, and it is still obtained from sweet wormwood, grown in China, Vietnam and east Africa. Research is ongoing to breed strains with higher yields of the active compound. In the past decade the first resistance to artemisinin has emerged, in Cambodia. The drug still works but it takes longer, typically four days instead of two. To stop resistance from spreading further, doctors now only use artemisinin in combination with another antimalarial; it is harder for the parasite to evolve resistance to two drugs simultaneously. As Tu says, malaria researchers have to remain vigilant.
"It is scientists responsibility to continue fighting for the healthcare of all humans?' And despite the importance of her work, she is modest. "What I have done was what I should have done as a return for the education provided by my country," she says. She expressed gratitude at the Lasker award ceremony, with her husband, daughter and granddaughter at her side. But that was just the icing on the cake: "I feel more reward when I see so many patients cured?"
SOURCE : NEW SCIENTIST MAGAZINE NOVEMBER 2011

Why Isn't NASA Hunting For Life?

Even the most ardent fans of the Red Planet must occasionally wish for more than just hints of water popping up in ever-new places. So why not send a robot to hunt directly for little green men? One word: Viking. NASA's Viking landers did just that in 1976, laying out a tasty solution of nutrients to attract any microbes that might be living in a soil sample, like cookies left on a plate for Santa. The nutrients were laced with radioactive carbon, so if the solution was digested, a radiation monitor above the sample would detect the resulting gas. Intriguingly, radioactive carbon was detected, but then another experiment found no evidence of organic compounds in the soil - there were no alien bodies.
"They were hoping to find signs of life but the results came back basically negative - there is no life as we know it," says Ralph Milliken at the University of Notre Dame in Indiana. The US did not send another mission to Mars for 20 years. The $2.5 billion Curiosity rover will hunt for organic molecules and isotopic hints of life, but NASA is still shying away from the L word. "NASA cannot say to taxpayers that they put $2.5 to $3 billion to search for life, and then say, 'We have found no life - thank you, bye bye',' says Michel Cabane, leader of one of Curiosity's organics-sniffing instruments, who is based at the Pierre and Marie Curie University in Paris, France.
"If you project the message that you are hunting for life, even though it is very important to many of us, and you return with a null or ambiguous answer, people would be disappointed," says jack Mustard of Brown University in Providence, Rhode Island, who is a former chair of NASA's advisory panel on Mars. In any case, he and others say the problem may simply be too hard to solve. "If I posed the  question 'prove that life existed in Earth's past' to you, it would be tough," Mustard says. "Geologists would say, we'll go find a fossil. But bodies are not always preserved on Earth."
He points out that Curiosity and other missions that touch down on the planet are only exploring a limited region for a limited time. Bethany Ehlmann at the California Institute of Technology in Pasadena agrees. "Think locally, not globally - that's a slight perversion of what the environmental movement thinks we should do here on Earth," she says. Curiosity's landing site may once have been a lake, but other intriguing sites suggest life might have found a refuge in hydrothermal springs below the surface. "Environments during the first billion years of Mars history varied substantially," she says. "Now that we know there's this diversity out there, it becomes harder to say that the evidence says 'Mars did not have life'."

SOURCE : NEW SCIENTIST MAGAZINE NOVEMBER 2011

Wednesday, November 16, 2011

Science Now Makes It Possible To Attribute Some Types Of Weather Event To Climate Change

In the aftermath of hurricane Katrina in 2005, a vigorous debate raged as to whether it was a "normal" natural disaster or a consequence of global warming. Al Gore depicted the devastation of New Orleans in his movie An Inconvenient Truth and linked it to climate change. I became involved during a case before the High Court in London challenging a UK government decision to distribute the movie to schools. I was asked to provide expert written evidence on the extent to which the film correctly represented scientific understanding at the time.
I liked the film and thought that Gore's presentation of the causes and likely effects of climate change was broadly accurate. As the Intergovernmental Panel on Climate Change (IPCC) concluded in its most recent assessment report: "warming of the climate system is unequivocal, as is now evident from observations of increases in global average air and ocean temperatures, widespread melting of snow and ice, and rising global average sea level". And as data continues to pile up, the evidence gets ever stronger that human-induced emissions of greenhouse gases are the main cause of the observed warming over the past century.
But hurricanes are difficult. Climate models predict that they will become more intense. At the same time, considerable uncertainty remains. We only have about 40 years of reliable observational records, which precludes a clear determination of their variability. Given that different aspects of climate change could act to increase or decrease hurricane activity, whether or not Katrina can be ascribed to global warming is a challenge beset by difficulty.
It is not surprising, then, that in the aftermath of Katrina many scientists were reluctant to make definitive statements about its links with climate change. The same has happened after many other extreme weather events such as floods and droughts. When pressed, scientists often say that instances of extreme weather are consistent with the expected effects of climate change. But such statements are problematic. They can be misinterpreted to imply that every extreme flood or drought is due to climate change, when this is manifestly not the case. And when events occur that climate change might make less likely, such as the record-breaking cold snap in the UK last December, it doesn't follow that climate predictions are inconsistent or wrong.
A clearer way of thinking about weather and climate is to consider the odds. After the European heatwave of 2003, I worked with Myles Allen and Daithi Stone of the University of Oxford to show that human influence had very likely more than doubled the probability of such extreme temperatures. Since then, the concept that human influence could have "loaded the dice" in favour, or against, the occurrence of a particular heatwave, flood or drought has become widely accepted by scientists and seems a relatively straightforward message to communicate to the public. But this doesn't mean that we are yet able to reliably quantify the changed odds of all extreme weather events.
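The usual way to express that "loaded dice" idea is the fraction of attributable risk. Here is a minimal Python sketch; the probabilities are illustrative placeholders, not results from any study:

    # Fraction of attributable risk: FAR = 1 - P_nat / P_anth, where P_nat is
    # the probability of the event in a climate without human influence and
    # P_anth is the probability in the climate we actually have.
    def fraction_attributable_risk(p_nat: float, p_anth: float) -> float:
        return 1.0 - p_nat / p_anth

    p_nat, p_anth = 0.05, 0.10   # illustrative: human influence doubles the odds
    print(f"risk ratio = {p_anth / p_nat:.1f}")                      # 2.0
    print(f"FAR = {fraction_attributable_risk(p_nat, p_anth):.2f}")  # 0.50

A FAR of 0.5 corresponds to a doubling of the odds: half of the present-day risk of such an event would be attributable to human influence.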
What we need is an attribution system, operated regularly like the weather forecast and made available to the public. Its purpose would be to deliver rapid and authoritative assessments of the links, if any, between recent extreme weather events and human-induced climate change. In the event of, say, a severe flood, the system would provide estimates of the extent to which the event was made more or less likely by human-induced climate change. It would also take into account alternative natural explanations such as the El Niño Southern Oscillation, a large-scale climate pattern in the tropical Pacific Ocean that affects weather worldwide.
We expect such a service would be of great interest to anyone who wants to know whether a given event could be attributed to climate change, from politicians and journalists to homeowners and insurance companies. Are we capable of delivering? Attribution is difficult and it will be important not to undermine the credibility of a system by prematurely attributing events. However, climate science has advanced to the point where it is possible to assess some types of weather event. For example, the European heatwave of 2003 was consistent with an increased risk of extreme weather caused by climate change, whereas the cold US temperatures of 2008 were not — instead being linked to the La Niña phase of the El Niño Southern Oscillation.
For other events, such as hurricane Katrina and last year's devastating Pakistan floods and Moscow heatwave, the cause remains uncertain. But the development of an attribution system should help drive further improvements in the forecasting models by continually confronting real world examples of extreme weather.
We at the Met Office — the UK's national weather service — are keen to take this idea forward, and have begun to put together an international collaboration of scientists called the Attribution of Climate-related Events Initiative, or ACE for short. Our aim is to understand when we can reliably estimate the odds of particular types of extreme weather event and for which types of events further improvements are required. We hope to have a prototype attribution system up and running in two years.
Should another category five hurricane make landfall on the US mainland, its attribution will be tough. But scientific understanding is developing all the time. Were an attribution system established and its strengths and limitations well understood, a future judge, journalist or local resident, interested in who — or indeed what — to blame, would know where to go.
SOURCE : Peter A. Stott - NEW SCIENTIST MAGAZINE NOVEMBER 2011