Wednesday, December 14, 2011

Piracy Bill Walks The Plank

So that's what a digital revolt looks like. A million-and-a-half emails and almost 90,000 phone calls to US Congress. Public complaints from Google and Facebook. Even a few thousand old-fashioned letters to the US House of Representatives.
This internet ire, marshalled under the banner of American Censorship Day on 16 November, came in opposition to the proposed Stop Online Piracy Act (SOPA), legislation aimed at tackling the online trade in copyrighted movies and music.
Claims that the act, if passed, will "break the internet" helped persuade several big companies, including a trade group which represents Apple and Microsoft, to withdraw their support. Then, last week, SOPA's backers in the House said they were open to changing the bill. Internet Activists 1, Big Media 0. But elsewhere the media barons appear to be winning. Over the past few years, several countries have debated or enacted laws that, in the name of tackling piracy, have handed more power to large companies.
In the process, say activists, the movie and music industries have gained the ability to censor websites. The recent revolt was louder because SOPA is one of the more radical new proposals. It would give copyright holders the legal right to have sites which they deem to be peddling stolen content shut down - a controversial power the European Court of Justice has just ruled against. Concern here is less about blatant piracy, which gets limited sympathy from activists, and more about sites on which copyrighted content is used in creative ways.
YouTube, for example, is packed with satirical remixes of songs and films. "If SOPA were enacted, just one such mash-up could bring down an entire site," notes Eric Goldman, a technology lawyer at Santa Clara University in California. "Talk about collateral damage," he says. The bill also gives copyright holders the right to force search engines to expunge infringing sites from search results. Google and others know that it is often impossible to determine whether a site is engaging in piracy or creative reuse or some combination of the two. That's one reason why the search engine teamed up with Facebook and other sites to run a full-page advert opposing the bill in The New York Times.
Other moves by copyright advocates have been less crude and more successful. This July, five big US internet service providers committed to repeatedly caution - and then potentially disconnect - subscribers who share copyrighted material. The measure had limited opposition, but Goldman and others warn that it is not sufficiently overseen.
That's a fear shared across the Atlantic, where British activists have warned that any proposals to speed up processing of industry requests will erode courts' ability to assess claims of copyright breaches. In Ireland, judges have already been sidelined. After a legal battle in 2009 with a recording industry group, eircom, the country's largest ISP, said it would no longer contest blocking requests from the group. None have yet been submitted. There is a lot of copyright theft online, and content creators have a right to demand protection. Yet the reusers of content, from music remixers to bloggers, are also creators. Striking a balance between the two will prove important if politicians want to stop the angry emails.

Sunday, December 11, 2011

Leaks, Hacks and Science

The words "science" and "censorship" do not sit easily together. And yet over the past decade, science has come to occupy an increasingly important role in debates over free speech. This is partly due to public clashes between science and politics, from the censoring of climate science in the US under the Bush administration to David Nutt's dismissal as the UK government's adviser on drugs after voicing his views on the safety of ecstasy.
But it also reflects a revolution in access to information which has exposed every sector of society to an unprecedented level of scrutiny. From WikiLeaks to phone hacking, the tension between openness, privacy and confidentiality has become one of the defining issues of our time. Scientists have unexpectedly found themselves at the heart of this debate, as the latest round of leaked climate emails makes abundantly clear.
In recognition of this trend, the award-winning magazine Index on Censorship, which explores challenges to freedom of speech, has dedicated its latest issue, "Dark Matter", to science. One well-documented clash between science and censorship is in the use of libel actions to try to silence scientists and science writers; the journal Nature and Richard Dawkins are among the most recent to face suits. Scientists and science writers have emerged from some of these battles as free speech champions, among them cardiologist Peter Wilmshurst and NASA climate scientist James Hansen. There have also been striking incidents within science itself, perhaps most notoriously during the original "climategate" scandal at the Climatic Research Unit of the University of East Anglia in Norwich, UK.
The hacked emails revealed a reluctance to comply with freedom of information requests and possible attempts to conceal data. The information commissioner recently ruled that UEA should release its data, and partly in response to climategate, the UK's Royal Society has set up an investigation into openness. Not surprisingly there are debates about the proper course of action. Our special issue explores two opposing views. Fred Pearce, the leading chronicler of climategate, makes the argument for open access for the benefit of science and public discourse. Michael Halpern of the US Union of Concerned Scientists warns about freedom of information being deployed as a form of harassment.
He is calling on legislators to consider whether there is sufficient protection of academic free speech. This view has been echoed in the UK by Royal Society president Paul Nurse, as well as in the House of Lords during a debate on the proposed Protection of Freedoms legislation. The bill includes an amendment to the Freedom of Information Act which will oblige public authorities to release data sets in reusable electronic form and extend the range of FOI to the wider public sector. Two of the academics in the Lords, historian Paul Bew and philosopher Onora O'Neill, raised concerns about the consequences for research. Bew has suggested including an exemption for unpublished research (which already exists in Scottish FOI legislation), warning of the possible harm that may be caused if data is released before it has been peer-reviewed.
However, even if an exemption is included in the bill, the combination of hackers, leakers and the sheer momentum of the open-access movement is likely to limit its scope, particularly for politically sensitive research. The leak of a further 5000 climategate emails last week is a case in point. So there may be no other choice but to embrace full transparency. Any discussion about access to information cannot ignore the suppression of data within the drugs and medical devices industry. Lack of transparency in drug trials has left doctors dangerously ignorant of potential side effects. This is nothing new, but the demand for openness here too may become irresistible. As Deborah Cohen reports in our issue, Thomas Jefferson of the Cochrane Collaboration believes that open access should be the default setting for drug trials once a drug is registered. Yet despite the backing of all the most eminent scientific institutions for openness, there has been limited success. For now the focus remains on libel.
The pressing need for reform has resulted in an unprecedented campaigning alliance between free speech groups and science. For the past two years, the organisation Index on Censorship has been working on this with Sense about Science and the writers' association English PEN. There is no doubt that libel's chilling effect on scientific research and discourse has been a pivotal factor in the success of the campaign. While politicians are suspicious of giving any further freedom to the media, when presented with evidence of the extent to which scientists and science writers have been silenced and bullied by individuals, interest groups and industry, they have found it impossible to ignore. Reform that makes it less easy to use the law as a tool of intimidation and that introduces a robust public interest defence will be of critical importance for the future open discussion of issues of scientific concern. As Wilmshurst and Singh have demonstrated in their own costly and exhausting libel battles, all too often the fight for free speech depends on the courage of individuals. Both the law and the culture within the science establishment have to change in order to safeguard open debate. Freedom of expression depends on it.

Airbursts Trigger Martian Landslide

THE surface of Mars may be cold and desolate, but it is not unchanging.  New images show that avalanches of dust scour dozens of Martian sites each year. Without the abundant water and plate tectonics that keep Earth's surface in motion, the surface of Mars is much slower to change. But in one way it is more active.
While Earth's atmosphere shields us from asteroids smaller than 30 metres across, which burn up or shatter too high above the ground to have much effect on us, Mars's atmosphere is just 1 per cent the density of Earth's. Even rocks less than a metre across make it to the ground and gouge out craters. NASA's Mars Reconnaissance Orbiter spots about 20 new craters between 1 and 50 metres across on Mars each year - scars that were not present in earlier images.

Now closer scrutiny of these images has found thousands of small avalanches near 16 of the craters. The avalanches appear as dark streaks on the hilly terrain that surrounds the craters (a similar but more dramatic avalanche is shown in the image). They show up only in areas where there is a lot of light-coloured dust on the ground. To form, it seems, the surface's dust coating was shaken loose and slid downhill, revealing the darker rocks beneath, says a team led by Kaylan Burleigh of the University of Arizona in Tucson (Icarus, DOI: 10.1016/j.icarus.2011.10.026).
The team carried out computer simulations that showed that, surprisingly, the avalanches do not seem to be caused by meteorites hitting the ground, but by the shock wave generated by a rock's passage through the atmosphere. This spreads across an area about a million times larger than the craters. "It was astonishing that a relatively small impact could affect a large area," says team member Jay Melosh of Purdue University in West Lafayette, Indiana.
In one case, a cluster of 20-metre-wide craters is surrounded by thousands of dust avalanches in an area 4 kilometres square. The many small avalanches give the whole area a darker hue, like a giant black eye around the craters - except for a narrow light zone shaped like a curved dagger. That light zone is telling. As a rock tears through the atmosphere at supersonic speed it generates a shock wave, before triggering a second blast when it hits the ground.
The team's simulations show that the second shock interferes with the first, reinforcing it in some places and cancelling it out in others. Where it is cancelled out, a narrow curved strip of relatively undisturbed ground is left behind - just like the light zone seen around the crater cluster, says Melosh.
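The cancellation described here is ordinary wave interference. A minimal sketch - not the team's simulation, with a wavelength and source positions invented purely for illustration - shows how two coherent sources (stand-ins for the atmospheric and ground shocks) produce a curved strip where the waves cancel:

```python
import numpy as np

# Two coherent point sources separated by half a wavelength. Where their
# path difference puts them out of phase, the superposed amplitude drops
# to near zero, tracing a curved band of relatively undisturbed ground
# analogous to the light zone around the Martian crater cluster.
wavelength = 1.0                     # arbitrary units, for illustration only
k = 2 * np.pi / wavelength
src_a = (-0.25, 0.0)                 # hypothetical source positions
src_b = (0.25, 0.0)

x, y = np.meshgrid(np.linspace(-3, 3, 400), np.linspace(-3, 3, 400))
r_a = np.hypot(x - src_a[0], y - src_a[1])   # distance from each source
r_b = np.hypot(x - src_b[0], y - src_b[1])

# Superpose the two waves and look at the combined amplitude.
amplitude = np.abs(np.cos(k * r_a) + np.cos(k * r_b))

print(f"max combined amplitude: {amplitude.max():.2f}")   # reinforcement
print(f"min combined amplitude: {amplitude.min():.2f}")   # cancellation
```

In the real case the two shocks are transient pulses in three dimensions rather than steady sinusoids, but the geometry of reinforcement and cancellation works the same way.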
The Martian surface may be the best place in the solar system for recording the effects of these shock waves, since fewer impactors are blocked by its atmosphere than on Earth, says Mark Boslough of Sandia National Laboratory in Albuquerque, New Mexico. But they do occasionally cause devastation closer to home. The atmospheric shock wave from a 30 to 50-metre asteroid or comet levelled 2000 square kilometres of forest in Siberia in 1908. Studying shock waves on Mars might help us predict their effects on Earth, says Boslough.

Thursday, December 8, 2011

Ravens Use Sticks To Attract Attention

How do you capture a raven's heart? Arrest its attention by showing it a twig or stone. Ravens use referential gestures — one of the foundations of human language —to initiate relationships. From an early age we learn to use referential gestures such as pointing to direct another's attention. "People think that this pointing forms the basis of language," says Simone Pika at the Max Planck Institute for Ornithology in Seewiesen, Germany. "It has also been linked with mental-state attribution — the idea that you understand what I am pointing out." Apes raised in captivity can learn to use referential gestures to communicate with their human caregivers.
Now Pika and Thomas Bugnyar at the University of Vienna, Austria, have recorded common ravens (Corvus corax) using them for the first time. The researchers observed seven pairs of wild ravens showing and offering stones, twigs and moss to each other - by holding the object in their beaks - in an apparent attempt to grab the attention of another bird and initiate a relationship. Importantly, the ravens made these gestures only when another bird was watching, and the items they showed and offered were not food. They usually gestured only to members of the opposite sex (Nature Communications, DOI: 10.1038/ncomms1567). Like humans, ravens form monogamous pairs that will defend a territory and raise their young together. They even develop a repertoire of vocalisations that are exclusive to the couple. This high degree of cooperation may be what prompted the evolution of referential gestures in both humans and ravens, Pika says. "If communication is governed by cooperation, then this could be what prompted the evolution of language."
Rachel Shaw of the University of Cambridge says that the conclusions, although fascinating, should be viewed with caution. Although it might look like the birds are attempting to redirect the attention of another bird, the behaviour might simply be a mating or nesting ritual triggered by a peak in hormones, she says. Alex Kacelnik at the University of Oxford would like to know whether the ravens have as much flexibility as humans in their range of gestures and responses: "If both sender and receiver use a small, rigid set of targets, and fixed actions for responding, then the interactions could have more in common with classic avian communication systems than with human attention-sharing."


Tilt The Head To Pick Up Brainwaves

Getting a more accurate picture of someone's brainwaves could simply be a case of having them lie down. The boost this gives to the electrical signals that can be read from the brain could improve diagnosis of brain disorders and enhance control of brain-machine interfaces.
 Electroencephalography, or EEG, is a relatively cheap, non-invasive way to measure brain activity using a cap of electrodes. But the signal it picks up can be weak, as it must pass through a layer of cerebrospinal fluid (CSF) and the skull before it reaches the scalp and electrodes. It was assumed that the skull was the biggest obstacle in the signal's path. But Justin Rice at The City College of New York wondered whether the depth of the CSF might also be a problem. To investigate, one member of his team took 16 MRI images of his own brain. For half of the scans, he lay on his back, while in the other half he lay on his stomach. As suspected, the brain shifted slightly with gravity.
"The brain is heavy — it's going to move up and down," says Rice. What's more, the depth of the CSF layer changed depending on the researcher's position. "There was an average 1.55 millimetre difference in the thickness" between facing up and down, says Rice. The group then used EEG to monitor the brain activity of 14 volunteers as they carried out five visual tasks. Each repeated the task three times — once sitting down, once lying on their back and once lying on their front. Because the visual cortex is at the back of the brain, Rice's group expected to see a stronger signal when each person was lying on their back — allowing the brain to drop towards the back of the skull, thinning the CSF layer here. Sure enough, in four of the five tasks, this position boosted the EEG signal by around 30 per cent. In the fifth task, the signal was up to 200 per cent stronger. Rice presented the work at the Society for Neuroscience annual meeting in Washington DC in November.
CSF appears to have an impact on the signal because of its conductivity. "The current takes the path of least resistance, [moving laterally] through the CSF rather than the skull," says Rice. A thinner CSF layer means that more current reaches the skull, creating a stronger signal. The simplicity of head-tilting is likely to make it an attractive option. "People buy huge copper rooms to limit interference with the signal, and they cost hundreds of thousands of dollars," Rice says. "This is much more cost effective." The discovery should be taken into account by clinicians, too, says Rice.
"With neurodegeneration or just normal ageing, the brain shrinks, resulting in a thicker layer of CSF," he says. "This could result in a weaker EEG signal." Enhanced signals could also be useful in brain-machine interfaces that allow people to move robotic limbs or wheelchairs by thought alone. The same is true for thought-controlled computer games. Jonas Obleser, a neuroscientist at the Max Planck Institute for Human and Cognitive Brain Sciences in Leipzig, Germany, says the findings are a "worthwhile and creative demonstration".


Fight HIV with Muscle Antibodies

HIV doesn't play by the rules: instead of dodging the immune system it attacks it head on. Now it seems our best hope for a vaccine against the killer virus might also involve tearing up the rule book - by fighting an infection without help from the immune system. Using this approach, mice can keep HIV at bay even when given 100 times the virus that would be needed to cause a lethal infection. Conventional vaccines work by exposing the body to safe versions of a pathogen or parts of it, which primes the immune system to fight off future infection.
But like other failed attempts to tackle HIV (see page 4) this approach has yet to deliver significant success - perhaps in part because HIV targets and ultimately weakens cells of the immune system that we rely upon to mount a strong defence. David Baltimore of the California Institute of Technology in Pasadena, California, and colleagues are among a group of researchers who have decided on a dramatic change of tack. Instead of trying to hone the immune system, Baltimore's team has ignored it altogether.
Their approach — part vaccine, part gene therapy — is to turn muscles into factories that churn out potent antibodies against HIV. Because muscle isn't on HIV's hitlist, it will continue to generate antibodies even after an HIV infection, making the strategy potentially better than one which tweaks the immune system to produce the antibodies. "We produce a similar effect to a vaccine, but without ever calling on the immune system to do any of the work," says Alejandro Balazs, a member of Baltimore's team also at Caltech.
The team loaded a harmless, cold-related virus called adeno-associated virus (AAV) with genes that make potent antibodies to HIV. Then they used them to "infect" the leg muscles of mice with genes that pump out the antibodies. "The idea here is to basically supply the body with its own factory for making anti-HIV antibodies," says Baltimore. The mice continued to make the antibodies throughout their lives, and stayed healthy despite the researchers' best efforts to overwhelm them with HIV. "We expected that at some dose, the antibodies would fail to protect the mice, but there was no infection even when we gave mice 100 times more HIV than would be needed to infect seven out of eight mice," says Balazs (Nature, DOI: 10.1038/nature10660).
Because the mice in the experiment were equipped with human immune systems, Baltimore's team could check that the therapy fought off HIV before the virus was able to weaken the conventional immune system. They suspect that people would react in the same way to the vaccine/gene therapy approach — but they won't know for sure until they begin clinical trials. Baltimore says such a trial could start in one to two years. "As soon as we manufacture clinical grade materials, get regulatory approval and organise a trial, we hope to get going," he says. Another team led by Philip Johnson at the Children's Hospital of Philadelphia, Pennsylvania, could beat them to it.
Johnson and his colleagues used almost exactly the same strategy two years ago to protect macaques against SIV, the monkey equivalent of HIV (New Scientist, 22 May 2009, p 12). "We're gratified to hear that our work in the macaques has been confirmed in a humanised mouse model using HIV," says Johnson. "We're moving ahead with our plans to test the concept in human trials." For the trial, Johnson's team will also be using AAV injected into muscle, loaded with the gene for making a potent antibody.
Baltimore's study has confirmed something else: that the potent antibodies produced by the mouse muscles are exceptionally formidable against HIV. Called "broadly neutralising antibodies", they were first isolated from people with HIV in 2009. Lab tests show they are typically active against at least 90 per cent of all known strains of HIV. "The results of this study are further evidence that broadly neutralising antibodies could confer high-level protection against HIV infection," says Wayne Koff, chief scientific officer at the International AIDS Vaccine Initiative. Koff says that we now know of 20 broadly neutralising antibodies, with 17 new ones reported only this August.
 Although the AAVs injected into the mice each carried genes to make only one antibody, people could be given broader protection by injecting their muscles with several AAVs that each make a different antibody, Baltimore says. "There's no reason why we couldn't make two or more antibodies by using multiple AAVs simultaneously." Lucy Dorrell of the Weatherall Institute of Molecular Medicine in Oxford, UK, says that one of the major obstacles conventional HIV vaccines face is priming the body to make broadly neutralising antibodies, so a method that delivers them "off the peg" has great potential. "However, the key issues are whether the vaccine will work as well in people, and whether it will be safe to use," she says.
 Koff stresses that, encouraging though the new results are, they should not be used as an excuse to abandon the quest for a conventional vaccine that primes the immune system. "This latest approach should certainly be studied further, but doesn't negate the need to continue research for an HIV vaccine," he says. "All approaches should be supported in efforts to prevent  and control HIV, which still infected an estimated 2.7 million people last year alone." Cate Hankins, chief scientific adviser at UNAIDS, agrees, pointing out that in September at the AIDS Vaccine 2011 conference in Bangkok, Thailand, models based on sexual behaviour showed that RV 144, the best performing conventional vaccine so far, could still prevent thousands of infections, even though it reduces the overall risk by just 31 per cent.


Climate’s Dark Dawn

AS THE latest round of United Nations climate negotiations began in Durban, South Africa, on Monday, expectations could scarcely have been lower. A globally binding deal is further away than ever. That makes considerable warming from climate change inevitable. In the last few weeks major reports by the International Energy Agency and the UN Environment Programme (UNEP) have concluded that we can still meet the UN's target of limiting warming to 2°C above preindustrial levels. But climate scientists are far less optimistic.
Many say the chance to avoid a 2°C rise has been and gone, and we must now prepare for the damage to come. To have a fair chance of keeping below 2°C, global emissions would have to peak by 2020 or so before falling. There's no sign of that: they made their biggest-ever leap in 2010. Many countries promised to cut their emissions at the 2009 UN climate summit in Copenhagen, Denmark, but modelling carried out by climate consultancy Ecofys, based in the Netherlands, shows that even if those cuts were implemented in full we would still see 3.5°C of warming by 2100. To meet the 2°C target, even bigger cuts are needed.
According to UNEP, nations must emit the equivalent of no more than 44 gigatonnes of carbon dioxide each year by 2020, but current pledges are 6 to 12 gigatonnes short. A UNEP report published last week says we can bridge this "emissions gap" by combining faster uptake of renewable energy, improved energy efficiency, and cuts to other greenhouse gases. A second UNEP report points out that it is much easier to cut short-lived greenhouse gases like methane, and fine atmospheric particles like soot from inefficient stoves.
Cutting these emissions could keep the thermostat from rising by 2°C until the middle of the century, buying us time to deal with CO2. It is the inertia in our society that is the problem, says the International Energy Agency in its 2011 World Energy Outlook report. The lifespan of existing power plants and factories commits us to 80 per cent of the total emissions that will take us to 2°C. Construction over the next five years commits us to the rest, so unless we switch our investments from fossil fuels to low-carbon technologies within five years, 2°C of warming is inevitable. The reality is that the 2°C target is technically and economically feasible, but politically impossible. Saleemul Huq of the International Institute for Environment and Development says that countries would have to go to a war footing to do it.
He compares the situation to the second world war, when nations like the UK transformed their economies to deal with an overwhelming threat. This single-minded commitment can work miracles, but no country has any such plans. The UK's secretary of state for energy and climate change, Chris Huhne, says the deadline for an international deal is 2015. Other countries, like the US and India, want to delay even discussing a deal until then, leaving scant time before the desired emissions peak in 2020. And as the Durban talks got under way this week, Canada announced it would not be participating in any successor to the Kyoto protocol.
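The "emissions gap" arithmetic that UNEP reports is simple enough to check. A back-of-envelope sketch using only the figures quoted above (the implied pledge total is inferred from them, not taken from UNEP directly):

```python
# Figures from the UNEP reports as quoted in the article.
target_2020 = 44.0               # Gt CO2-equivalent per year allowed by 2020
gap_low, gap_high = 6.0, 12.0    # Gt CO2e by which current pledges fall short

# Implied annual emissions in 2020 if countries deliver only what is pledged.
pledged_low = target_2020 + gap_low     # most optimistic reading
pledged_high = target_2020 + gap_high   # least optimistic reading

print(f"Implied 2020 emissions under current pledges: "
      f"{pledged_low:.0f}-{pledged_high:.0f} Gt CO2e/yr "
      f"({gap_low / target_2020:.0%}-{gap_high / target_2020:.0%} "
      f"over the 2 degree budget)")
```

In other words, pledges as they stand would overshoot the 2°C budget by roughly an eighth to a quarter, which is the gap the renewable-energy, efficiency and short-lived-gas measures are meant to close.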
What should we do if we cannot hit emissions targets? First, do not give up on cutting emissions, says Brian Hoskins of Imperial College London. We don't fully understand the climate, so we might emit more than is currently deemed "safe" and stay under 2°C by sheer luck. And don't change the 2°C target. It's too early, says Corinne Le Quéré, director of the Tyndall Centre for Climate Change Research in the UK. The next IPCC report, due in 2013, could show that society can cope with a warmer world. If it does, a small increment in the target might be justifiable, she says, but until then shifting goalposts would be premature and send the wrong message. "I haven't seen anything to suggest that 2°C is less dangerous now than it was when it was adopted," Le Quéré says. At all costs, Hoskins adds, we must avoid 4°C, which could wipe out the Amazon rainforest and halt the Asian monsoon. Finally, some form of geoengineering may be necessary. "We are going to have to look at CO2 removal," says Tim Lenton of the University of Exeter, UK. Trees are already being planted to act as carbon sinks, and prototype technologies exist for sucking CO2 from the atmosphere. Hoskins says they could be essential later in the century to keep temperatures down.

Tuesday, November 29, 2011

Do Lipid Rafts Exist?

The contention that molecular platforms known as lipid rafts sail on the cell's outer, or plasma, membrane has kept researchers debating for more than a decade. Although many scientists argue that rafts either don't exist or have no biological relevance, their supporters insist the idea remains afloat. Cell biologist Kai Simons, now at the Max Planck Institute of Molecular Cell Biology and Genetics in Dresden, Germany, and his colleague Elina Ikonen christened the term "lipid raft" in a 1997 Nature paper that detailed the concept. At the time, the main model of the plasma membrane portrayed it as a sea of lipids through which proteins drifted with little or no organization.
But the duo proposed that two kinds of lipids, cholesterol and sphingolipids, huddle together in the membrane, producing stable formations they called rafts. One line of evidence for that concept, the team noted, was the goop left behind in test tube studies when certain detergents dissolve the plasma membrane. This so-called detergent-resistant membrane oozes with cholesterol, sphingolipids, and select membrane proteins.
Rafts serve the cell, the hypothesis suggested, because they gather in one place the proteins necessary for a particular task, such as importing material or relaying a message across the plasma membrane. Proposed passengers on the rafts included glycosylphosphatidylinositol (GPI)-anchored proteins, which adhere to the outer layer of the plasma membrane and perform functions such as receiving signals and helping cells stick together. The idea roiled the cell biology community. "Right away, there were two camps," Simons says.
"One camp didn't believe a word." But plenty of scientists hopped aboard. More than 3000 papers later, the activities attributed to lipid rafts include promoting drug resistance in cancer cells and serving as escape hatches for viruses such as the ones that cause flu. Possibly the most debated hypothesis invoked rafts to explain the activation of the T cell receptor, the cell surface protein that spurs these immune cells to action when a pathogen is on the loose in the body. Incorporating the receptor into a raft helps switch it on, studies have suggested, possibly by allowing the receptor to hobnob with other proteins necessary for stimulating the T cell or because those proteins need the raft environment to work. Members of both camps concur that the raft concept was compelling and galvanized investigation into membrane organization.
"The raft hypothesis is brilliant in some ways," says biophysical chemist Jay Groves of the University of California, Berkeley. "My personal opinion is that the very idea of rafts enriches scientific research," says biophysicist Sarah Keller of the University of Washington, Seattle, "whether or not rafts exist in either specific cases or more generally." But how solid is the proof there are rafts? Skeptics abound, and they've scored some hits on the original raft evidence. Membrane biologist Michael Edidin of Johns Hopkins University in Baltimore, Maryland, says the field has fallen victim to what he calls the "sins of detergent extraction". Too many researchers have assumed that detergent-resistant membranes are genuine rafts, even though studies reveal that extraction can disrupt their composition.
"The idea of these isolatable islands of raft lipids is probably not viable," says membrane biologist Ken Jacobson of the University of North Carolina, Chapel Hill. According to the raft hypothesis, certain lipids naturally sort themselves to create the organized pockets of proteins that make up rafts. But many researchers don't buy that mechanism for inducing order in the membrane. It is too passive, especially when the plasma membrane is constantly churning, says Satyajit Mayor, a membrane biologist at the National Centre for Biological Sciences in Bangalore, India. Instead, he says, his group's research points to a more active process in which "the cell is using energy to construct regions in the membrane." Groves says the original hypothesis gave lipids too much credit - and proteins too little.
"Proteins define their own environment. Lipids almost completely follow their behavior," he says. Critics have also griped because the vital statistics of lipid rafts, such as their size and life span on a cell membrane, have proven so difficult to pin down. In an early study, Simons and colleagues estimated the diameter of rafts at about 50 nanometers, room for some 3000 sphingolipid molecules. In a 2006 attempt to sharpen the raft definition, a group of membrane researchers suggested a size range of 10 nanometers to 200 nanometers, and other estimates have come in higher or lower. Rafts still have their supporters, however. Akihiro Kusumi, a membrane biophysicist at Kyoto University in Japan, says that if researchers specify raft criteria, such as size, and spell out which isolation techniques they use, they can demonstrate structures that qualify as rafts.
For his part, Simons acknowledges the failings of detergent extraction but counters that new cell imaging techniques are adding to the evidence for rafts. Researchers using one form of super-resolution microscopy, known as stimulated emission depletion microscopy, found in 2009 that sphingolipids and GPI-anchored proteins tarried in certain molecular clusters in the membrane, as if they briefly joined rafts. Cell biologists say it's important to resolve the lipid raft debate eventually because the plasma membrane controls what enters and exits cells and how they send and receive signals. Although researchers have proposed several alternatives for how the plasma membrane organizes itself, none of them has caught on. But if a better explanation rises to the surface, cell biologists will have to give some of the credit to rafts.

Monday, November 28, 2011

To Self-Diagnose, Spit On iPhone

Handheld gadgets could one day diagnose infections at the push of a button by using the supersensitive touchscreens in today's smartphones. Many believe that in the future, collecting samples of saliva, urine or blood could be performed using a cheap, USB-stick-sized throwaway device called a lab-on-a-chip. The user would inject a droplet of the fluid into the chip, and micropumps inside it would send the fluid to internal vessels containing reagents that extract target disease biomarker molecules. The whole device would then be sent to a lab for analysis.
But Hyun Gyu Park and Byoung Yeon Won at the Korea Advanced Institute of Science and Technology in Daejeon think touchscreens could improve the process by letting your phone replace the lab work. Park suggests the lab-on-a-chip could present a tiny droplet of the sample to be pressed against a phone's touchscreen for analysis, where an app would work out whether you have food poisoning, strep throat or flu, for example. The idea depends on a method the pair have devised to harness the way a touchscreen senses a fingertip's ability to store electric charge — known as its capacitance.
The capacitive sensitivity of touchscreens is far higher than what is needed to sense our fingers as we play games or tap out tweets. "Since these touchscreens can detect very small capacitance changes we thought they could serve as highly sensitive detection platforms for disease biomarkers," says Park. So the pair began proof-of-concept tests to see if the touchscreens in our pockets could play a role in diagnosing our ailments.
First they took three solutions containing differing concentrations of DNA from the bacterium that causes chlamydia and applied droplets from each to an iPhone-sized multitouch display. They found that the output from the screen's array of crisscrossed touch-sensing electrodes could distinguish between the capacitances caused by each concentration using droplets of only 10 microlitres (Angewandte Chemie International Edition, DOI: 10.1002/anie.201105986).
The technology is not yet able to identify individual pathogens, but Park sees the display's ability to differentiate between concentrations as a first step towards this. However, before the idea can be rolled out, the built-in software on touchscreens that eliminates false-touch signals caused by moisture or sweat would need modifying.
Park also plans to develop a film that can be stuck on a touchscreen to which the biomarkers will attach. "Nobody wants direct application of bio-samples onto their phone," he says. "This is potentially possible," says Harpal Minhas, editor of the journal Lab On A Chip. "But any changes to current production-line touchscreens would need to demonstrate huge financial benefits before they are implemented." And DNA sequencing, rather than concentration measurement, is more likely to be necessary for disease diagnosis, he adds.

Sunday, November 27, 2011

Extreme Weather, Time To Prepare

An international scientific assessment finds for the first time that human activity has indeed driven not just global warming but also increases in some extreme weather and climate events around the world in recent decades. And those, and likely other, weather extremes will worsen in coming decades as greenhouse gases mount, the report finds.
But uncertainties are rife in the still-emerging field of extreme events. Scientists cannot attribute a particular drought or flood to global warming, and they can say little about past or future trends in the risk of high-profile hazards such as tropical cyclones. Damage from weather disasters has been climbing, but the report can attribute that trend only to the increasing exposure of life and property to weather risks. Climate change may be involved, but a case cannot yet be made.
Despite the uncertainties, the special report from the Intergovernmental Panel on Climate Change (IPCC) released 18 November stresses that there is still reason for taking action now. The panel recommends "low-regrets measures," such as improvements in everything from drainage systems to early warning systems. Such measures would benefit society in dealing with the current climate as well as with almost any range of possible future climates.
The report takes a cautious, consensus-based approach that draws on the published literature. Headlines and even some scientists may point to the current Texas drought or the 2003 European heat wave as the result of the strengthening greenhouse. But the report finds that extreme weather and climate events are far too rare to blame any one of them on global warming. A 29-page summary released for policymakers has one sentence on the subject: "Attribution of single extreme events to anthropogenic climate change is challenging."
The report does find "evidence ... of change in some extremes." These are generally lower-profile changes. For example, the report finds that it is likely that the number of cold days and nights has decreased since 1950. In many regions "there is medium con-fidence that the length or number of warm spells, or heat waves, has increased." And the frequency of heavy precipitation events has changed in some regions, with increases being more likely than decreases.
There is no sign that any of these climate changes has been driving the obvious rise in economic losses from weather- and climate-related disasters, the report finds. Instead, it says, "the major cause of the long-term increases in economic losses" has been an increase in the number of dangerously placed people and their increasing wealth. More and more people have been living in the path of disastrous weather, whether poor people with nowhere else to live but low-lying deltas or the rich flocking to the coastlines.
Advocates and some scientists have pushed mounting disasters as reasons for action to rein in global warming. But "as compelling as disasters are," says climate policy analyst Roger Pielke Jr. of the University of Colorado, Boulder, "I've never thought disasters were an appropriate use" for advocating reduction of greenhouse emissions. "I give some credit to the IPCC," he says.
The report does find reasons to take certain kinds of action. It points to evidence that at least some of the recent changes can be attributed to humans. "It is likely that" human influences have raised the lowest and highest temperatures in a day on a global scale. And the intensification of extreme precipitation can likely be attributed to human influence. Based on climate model results and basic physics, these and perhaps other trends are likely to continue and accelerate as the greenhouse strengthens. Tropical cyclone maximum wind speeds are likely to increase, the report says, droughts will intensify in some regions, and sea level will continue to rise, flooding low-lying coastal areas.
Even with trends in extreme events continuing, "in many regions, the main drivers for future increases in economic losses due to some climate extremes will be socioeconomic in nature," according to the report. That is, the main driver will be increasing exposure of rich and poor to climatic hazards, with the poor being more vulnerable than the rich. But whatever the drivers of future losses and whatever the uncertainties, low-regrets actions can be taken now, according to the report. "Even with substantial uncertainties about extremes and extreme events that may lie ahead," says Thomas Wilbanks of Oak Ridge National Laboratory in Tennessee, a report lead author, "there are things that we can—and should—be doing now to increase our resilience."
The report lists actions that it says would improve human well-being in the short term while laying a foundation for tackling the changes that appear to be in the offing. Planning land use, managing ecosystems, and improving water supplies and irrigation systems all provide "chances to make the world more livable while decreasing risk" from future climate changes, said Christopher Field of Stanford University in Palo Alto, California, a co-chair for the report. Rajendra Pachauri, chair of the IPCC, added his hope that that message and the rest of the report would be well received at the 2011 United Nations Climate Change Conference that starts 28 November in Durban, South Africa.


Sauna's Boost For Heart and Humour

That warm, fuzzy feeling you get from sitting in a sauna isn't in your imagination — and it may also help your heart. People with chronic heart failure who took saunas five times a week for three weeks improved their heart function and the amount of exercise they could do. Meanwhile, neurons that release the "happiness molecule" serotonin respond to increases in body temperature, perhaps explaining the sauna's pleasurable effects.
Heart failure occurs when the heart is unable to supply enough blood to the body, resulting in shortness of breath and difficulty exercising. Previous studies have hinted that saunas might boost health. To investigate, Takashi Ohori at the University of Toyama in Japan and colleagues asked 41 volunteers with heart failure to take 15-minute saunas five times per week, using a blanket for 30 minutes afterwards to keep their body temperature about 1°C higher than normal.
Sauna treatment increased the heart's ability to pump blood, and boosted the distance participants could walk in 6 minutes from 337 metres to 379 metres. The team also noticed improved function of the endothelium - the layer of cells lining the blood vessels, which releases factors controlling the diameter of the vessels, and clotting.
The researchers also found more circulating endothelial progenitor cells - adult stem cells that can turn into endothelial cells (The American Journal of Cardiology, DOI: 10.1016/j.amjcard.2011.08.014). In a separate study, the same group temporarily cut off blood supply to rats' hearts to mimic a heart attack, then gave them a sauna every day for four weeks. Later examination saw fewer of the changes to the heart's chambers that usually occur after heart attacks in rats not exposed to a sauna. In addition, the sauna rats showed increases in endothelial nitric oxide synthase, an enzyme that regulates blood pressure and the growth of new blood vessels (AJP: Heart and Circulatory Physiology, DOI: 10.1152/ajpheart.00103.2011).
"We think that repeated saunas trigger pathways that produce nitric oxide and other signalling molecules that eventually reduce resistance to the pumping capacity of the heart," says Tofy Mussivand at the University of Ottawa Heart Institute in Ontario, Canada, who was not involved in the research. Heating might have other benefits, says Christopher Lowry of the University of Colorado at Boulder. He has identified a group of serotonin-releasing neurons in a region of the brain called the dorsal raphe nucleus, which fire in response to increases in body temperature.
They seem to initiate cooling, but these neurons also project into a region of the brain that regulates mood, which may account for the pleasure of a sauna. Intriguingly, these same neurons feed into the sympathetic nervous system. Activation of the SNS boosts blood pressure and heart rate, but "by heating up the skin you inhibit the sympathetic nervous system, which is probably a good thing if you've had a heart attack", says Lowry. Mussivand cautions against people with heart failure rushing to the nearest spa, though. "Cardiologists currently don't recommend that heart failure patients should be exposed to heat, so this has to be done under medical supervision," he says.


Friday, November 25, 2011

Alzheimer’s Damage Reversed With A Jolt

Brain shrinkage in people with Alzheimer's disease can be reversed in some cases - by jolting the degenerating tissue with electrical impulses. Moreover, doing so reduces the cognitive decline associated with the disease. "In Alzheimer's disease it is known that the brain shrinks, particularly the hippocampus," says Andres Lozano at Toronto Western Hospital in Ontario, Canada.
What's more, brain scans show that the temporal lobe, which contains the hippocampus, and another region called the posterior cingulate use less glucose than normal, suggesting they have shut down. Both regions play an important role in memory. To try to reverse these degenerative effects, Lozano and his team turned to deep brain stimulation - sending electrical impulses to the  brain via implanted electrodes.
The group inserted electrodes into the brains of six people who had been diagnosed with Alzheimer's at least a year earlier. They placed the electrodes next to the fornix - a bundle of neurons that carries signals to and from the hippocampus - and left them there, delivering tiny pulses of electricity 130 times per second.
Follow-up tests a year later showed that the reduced use of glucose by the temporal lobe and posterior cingulate had been reversed in all six people (Annals of Neurology, DOI: 10.1002/ana.22089). The researchers have now begun to investigate the effects on the hippocampus. At the Society for Neuroscience annual meeting in Washington DC last week they announced that while they saw hippocampal shrinking in four of the volunteers, the region grew in the remaining two participants.
"Not only did the hippocampus not shrink, it got bigger - by 5 per cent in one person and 8 per cent in the other," says Lozano. It's an amazing" result, he adds. Tests showed that these two individuals appeared to have better than expected cognitive function, although the other four volunteers did not. Though Lozano is not sure exactly how the treatment works, his team's recent work in mice suggests that the electrical stimulation might drive the birth of new neurons in the brain.
Deep brain stimulation in mice also triggers the production of proteins that encourage neurons to form new connections. The researchers are now embarking on a trial involving  around 50 people, but John Wesson Ashford at Stanford University, California, wonders how practical the approach will be when there are millions of people with Alzheimer's. Lozano points out that around 90,000 people worldwide with Parkinson's disease have already received deep brain stimulation. The incidence of Alzheimer's is only five times that of Parkinson's, he says. "If it can be used in Parkinson's, it can be used in Alzheimer's."

Humanity’s First Word? Duh!

You may think humanity's first words are lost in the noise of ancient history, but an unlikely experiment using plastic tubes and puffs of air is helping to recreate the first sounds uttered by our distant ancestors.
Many animals communicate with sounds, but it is the variety of our language that sets us apart. Over millions of years, changes to our vocal organs have allowed us to produce a rich mix of sounds. One such change was the loss of the air sac — a balloon-like organ that helps primates to produce booming noises.
All primates have an air sac except humans, in whom it has shrunk to a vestigial organ. Palaeontologists can date when our ancestors lost the organ, as the tissue attaches to a skeletal feature called the hyoid bulla, which is absent in humans. "Lucy's baby", an Australopithecus afarensis girl who lived 3.3 million years ago, had a hyoid bulla; but by the time Homo heidelbergensis arrived on the scene 600,000 years ago, air sacs were a thing of the past.
To find out how this changed the sounds produced, Bart de Boer of the University of Amsterdam in the Netherlands created artificial vocal tracts from shaped plastic tubes. Air forced down them produced different vowel sounds, and half of the models had an extra chamber to mimic an air sac. De Boer played the sounds to 22 people and asked them to identify the vowel. If they got it right, they were asked to try again, only this time noise was added to make it harder to identify the sound. If they got it wrong, noise was reduced.
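The listening test described above is a form of adaptive staircase: noise goes up after a correct identification and down after a mistake, homing in on the level each listener can just tolerate. A minimal sketch of that idea, with made-up step sizes and a toy listener (the study's actual parameters and noise model aren't given here):

```python
import random

def staircase(p_correct_at, start_noise=0.0, step=1.0, trials=30):
    """Simple 1-up/1-down staircase: raise the noise after a correct
    identification, lower it after an error. Over many trials the noise
    hovers near the level where the listener is right half the time."""
    noise = start_noise
    for _ in range(trials):
        correct = random.random() < p_correct_at(noise)
        noise += step if correct else -step
        noise = max(noise, 0.0)       # noise can't go below zero
    return noise

# Toy listener: vowel identification gets harder as noise grows.
listener = lambda n: max(0.0, 1.0 - n / 20.0)
random.seed(1)
print(staircase(listener))            # settles near the tolerable level
```

A listener whose vowels stay intelligible at higher noise (as with the air-sac-free tubes) ends the staircase at a higher value.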
He found that those listening to tubes without air sacs could tolerate much more noise before the vowels became unintelligible. The air sacs acted like bass drums, resonating at low frequencies, and causing vowel sounds to merge; Lucy's baby would have had a greatly reduced vocabulary. Even simple words — such as "tin" and "ten" —would have sounded the same to her.
Observations of soldiers from the first world war corroborate de Boer's findings. Poison gas enlarged the vestigial air sacs of some soldiers, who are said to have had speech problems that made them hard to comprehend.
De Boer's study provides clear evidence supporting the idea that the need to produce complex sounds to communicate better made air sacs shrink, says Ann MacLarnon of the University of Roehampton in London. More sounds meant more information could be shared, giving those who lacked air sacs a better chance of survival in a dangerous world.
De Boer found that air sacs also interfered with the workings of the vocal cords, making consonants trickier. Only once they had gone could words like "perpetual", requiring rapid changes in sound, be produced.
What, then, might our ancestors' first words have been? With air sacs, vowels tend to sound like the "u" in "ugg". But studies suggest it is easier to produce a consonant plus a vowel, and "d" is easier to form with "u". "Drawing it all together, I think it is likely cavemen and cavewomen said 'duh' before they said 'ugg'," says de Boer.

Our Ancestor, The Mega-Organism

ONCE upon a time, 3 billion years ago, there lived a single organism called LUCA. It was enormous: a mega-organism like none seen since, it filled the planet's oceans before splitting into three and giving birth to the ancestors of all living things on Earth today. 
This strange picture is emerging from efforts to pin down the last universal common ancestor — not the first life that emerged on Earth but the life form that gave rise to all others. The latest results suggest LUCA was the result of early life's fight to survive, attempts at which turned the ocean into a global genetic swap shop for hundreds of millions of years. Cells struggling to survive on their own exchanged useful parts with each other without competition — effectively creating a global mega-organism. 
It was around 2.9 billion years ago that LUCA split into the three domains of life: the single-celled bacteria and archaea, and the more complex eukaryotes that gave rise to animals and plants (see timeline, opposite). It's hard to know what happened before the split. Hardly any fossil evidence remains from this time, and any genes that date that far back are likely to have mutated beyond recognition.
That isn't an insuperable obstacle to painting LUCA's portrait, says Gustavo Caetano-Anolles of the University of Illinois at Urbana-Champaign. While the sequence of genes changes quickly, the three-dimensional structure of the proteins they code for is more resistant to the test of time. So if all organisms today make a protein with the same overall structure, he says, it's a good bet that the structure was present in LUCA. He calls such structures living fossils, and points out that since the function of a protein is highly dependent on its structure, they could tell us what LUCA could do. 
"Structure is known to be conserved when sequences aren't," agrees Anthony Poole of the University of Canterbury in Christchurch, New Zealand, though he cautions that two very similar structures could conceivably have evolved independently after LUCA. 
To reconstruct the set of proteins LUCA could make, Caetano-Anolles searched a database of proteins from 420 modern organisms, looking for structures that were common to all. Of the structures he found, just 5 to 11 per cent were universal, meaning they were conserved enough to have originated in LUCA (BMC Evolutionary Biology, DOI: 10.1186/1471-2148-11-140).
By looking at their function, he concludes that LUCA had enzymes to break down and extract energy from nutrients, and some protein-making equipment, but it lacked the enzymes for making and reading DNA molecules.
 This is in line with unpublished work by Wolfgang Nitschke of the Mediterranean Institute of Microbiology in Marseille, France. He reconstructed the history of enzymes crucial to metabolism and found that LUCA could use both nitrate and carbon as energy sources. Nitschke presented his work at the UCL Symposium on the Origin of Life in London on 11 November.
If LUCA was made of cells it must have had membranes, and Armen Mulkidjanian of the University of Osnabruck in Germany thinks he knows what kind. He traced the history of membrane proteins and concluded that LUCA could only make simple isoprenoid membranes, which were leaky compared with more modern designs (Proceedings of the International Moscow Conference on Computational Molecular Biology, 2011, p 92).
LUCA probably also had an organelle, a cell compartment with a specific function. Organelles were thought to be the preserve of eukaryotes, but in 2003 researchers found an organelle called the acidocalcisome in bacteria. Caetano-Anolles has now found that tiny granules in some archaea are also acidocalcisomes, or at least their precursors. That means acidocalcisomes are found in all three domains of life, and date back to LUCA (Biology Direct, DOI: 10.1186/1745-6150-6-50).
So LUCA had a rich metabolism that used different food sources, and it had internal organelles. So far, so familiar. But its genetics are a different story altogether. For starters, LUCA may not have used DNA. Poole has studied the history of enzymes called ribonucleotide reductases, which create the building blocks of DNA, and found no evidence that LUCA had them (BMC Evolutionary Biology, DOI: 10.1186/1471-2148-10-383). Instead, it may have used RNA: many biologists think RNA came first because it can store information and control chemical reactions (New Scientist, 13 August, p 32).
The crucial point is that LUCA was a "progenote", with poor control over the proteins that it made, says Massimo Di Giulio of the Institute of Genetics and Biophysics in Naples, Italy. Progenotes can make proteins using genes as a template, but the process is so error-prone that the proteins can be quite unlike what the gene specified. Both Di Giulio and Caetano-Anolles have found evidence that systems that make protein synthesis accurate appear long after LUCA. "LUCA was a clumsy guy trying to solve the complexities of living on primitive Earth," says Caetano-Anolles.
He thinks that in order to cope, the early cells must have shared their genes and proteins with each other. New and useful molecules would have been passed from cell to cell without competition, and eventually gone global. Any cells that dropped out of the swap shop were doomed. "It was more important to keep the living system in place than to compete with other systems," says Caetano-Anolles. He says the free exchange and lack of competition mean this living primordial ocean essentially functioned as a single mega-organism.
"There is a solid argument in favour of sharing genes, enzymes and metabolites," says Mulkidjanian. Remnants of this gene-swapping system are seen in communities of microorganisms that can only survive in mixed communities. And LUCA's leaky membranes would have made it easier for cells to share.
"It's a plausible idea," agrees Eric Alm of the Massachusetts Institute of Technology. But he says he "honestly can't tell" if it is true.
Only when some of the cells evolved ways of producing everything they needed could the mega-organism have broken apart. We don't know why this happened, but it appears to have coincided with the appearance of oxygen in the atmosphere, around 2.9 billion years ago. Regardless of the cause, life on Earth was never the same again.


Thursday, November 24, 2011

A Collection Of Nothings Means Everything To Mathematics

THE mathematicians' version of nothing is the empty set. This is a collection that doesn't actually contain anything, such as my own collection of vintage Rolls-Royces. The empty set may seem a bit feeble, but appearances deceive; it provides a vital building block for the whole of mathematics. It all started in the late 1800s.
While most mathematicians were busy adding a nice piece of furniture, a new room, even an entire storey to the growing mathematical edifice, a group of worrywarts started to fret about the cellar. Innovations like non-Euclidean geometry and Fourier analysis were all very well - but were the underpinnings sound? To prove they were, a basic idea needed sorting out that no one really understood. Numbers. Sure, everyone knew how to do sums.
Using numbers wasn't the problem. The big question was what they were. You can show someone two sheep, two coins, two albatrosses, two galaxies. But can you show them two? The symbol "2"? That's a notation, not the number itself. Many cultures use a different symbol. The word "two"? No, for the same reason: in other languages it might be deux or zwei or futatsu. For thousands of years humans had been using numbers to great effect; suddenly a few deep thinkers realised no one had a clue what they were. An answer emerged from two different lines of thought: mathematical logic, and Fourier analysis, in which a complex waveform describing a function is represented as a combination of simple sine waves.
These two areas converged on one idea. Sets. A set is a collection of mathematical objects - numbers, shapes, functions, networks, whatever. It is defined by listing or characterising its members. "The set with members 2, 4, 6, 8" and "the set of even integers between 1 and 9" both define the same set, which can be written as {2, 4, 6, 8}.
Around 1880 the mathematician Georg Cantor developed an extensive theory of sets. He had been trying to sort out some technical issues in Fourier analysis related to discontinuities — places where the waveform makes sudden jumps. His answer involved the structure of the set of discontinuities. It wasn't the individual discontinuities that mattered, it was the whole class of discontinuities.
How many dwarfs?
One thing led to another. Cantor devised a way to count how many members a set has, by matching it in a one-to-one fashion with a standard set. Suppose, for example, the set is {Doc, Grumpy, Happy, Sleepy, Bashful, Sneezy, Dopey}.
To count them we chant "1, 2, 3..." while working along the list: Doc (1), Grumpy (2), Happy (3), Sleepy (4), Bashful (5), Sneezy (6), Dopey (7). Right: seven dwarfs. We can do the same with the days of the week: Monday (1), Tuesday (2), Wednesday (3), Thursday (4), Friday (5), Saturday (6), Sunday (7). Another mathematician of the time, Gottlob Frege, picked up on Cantor's ideas and thought they could solve the big philosophical problem of numbers.
The way to define them, he believed, was through the deceptively simple process of counting. What do we count? A collection of things — a set. How do we count it? By matching the things in the set with a standard set of known size. The next step was simple but devastating: throw away the numbers.
You could use the dwarfs to count the days of the week. Just set up the correspondence: Monday (Doc), Tuesday (Grumpy)... Sunday (Dopey). There are Dopey days in the week. It's a perfectly reasonable alternative number system. It doesn't (yet) tell us what a number is, but it gives a way to define "same number". The number of days equals the number of dwarfs, not because both are seven, but because you can match days to dwarfs. What, then, is a number? Mathematical logicians realised that to define the number 2, you need to construct a standard set which intuitively has two members. To define 3, use a standard set with three numbers, and so on.
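The day-to-dwarf matching above is easy to sketch in code (a hypothetical illustration, not from the article): two collections have the same number exactly when their members can be paired off with none left over, and no numerals are needed to check it.

```python
# Counting by matching: pair elements off one at a time; two sets are
# the same size exactly when both run out together.
dwarfs = {"Doc", "Grumpy", "Happy", "Sleepy", "Bashful", "Sneezy", "Dopey"}
days = {"Monday", "Tuesday", "Wednesday", "Thursday",
        "Friday", "Saturday", "Sunday"}

def same_size(a, b):
    a, b = list(a), list(b)
    while a and b:          # match one member of a with one of b
        a.pop()
        b.pop()
    return not a and not b  # equal size iff neither has leftovers

print(same_size(dwarfs, days))  # True: there are "Dopey" days in the week
```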
But which standard sets to use? They have to be unique, and their structure should correspond to the process of counting. This was where the empty set came in and solved the whole thing by itself. Zero is a number, the basis of our entire number system (see "From zero to hero", page 41). So it ought to count the members of a set. Which set? Well, it has to be a set with no members. These aren't hard to think of: "the set of all honest bankers", perhaps, or "the set of all mice weighing 20 tonnes". There is also a mathematical set with no members: the empty set.
It is unique, because all empty sets have exactly the same members: none. Its symbol, introduced in 1939 by a group of mathematicians that went by the pseudonym Nicolas Bourbaki, is ∅. Set theory needs ∅ for the same reason that arithmetic needs 0: things are a lot simpler if you include it. In fact, we can define the number 0 as the empty set. What about the number 1? Intuitively, we need a set with exactly one member. Something unique. Well, the empty set is unique. So we define 1 to be the set whose only member is the empty set: in symbols, {∅}. This is not the same as the empty set, because it has one member, whereas the empty set has none.
Agreed, that member happens to be the empty set, but there is one of it. Think of a set as a paper bag containing its members. The empty set is an empty paper bag. The set whose only member is the empty set is a paper bag containing an empty paper bag. Which is different: it's got a bag in it (see diagram). The key step is to define the number 2. We need a uniquely defined set with two members. So why not use the only two sets we've mentioned so far: ∅ and {∅}? We therefore define 2 to be the set {∅, {∅}}. Which, thanks to our definitions, is the same as {0, 1}. Now a pattern emerges. Define 3 as {0, 1, 2}, a set with three members, all of them already defined. Then 4 is {0, 1, 2, 3}, 5 is {0, 1, 2, 3, 4}, and so on. Everything traces back to the empty set: for instance, 3 is {∅, {∅}, {∅, {∅}}} and 4 is {∅, {∅}, {∅, {∅}}, {∅, {∅}, {∅, {∅}}}}.
You don't want to see what the number of dwarfs looks like. The building materials here are abstractions: the empty set and the act of forming a set by listing its members. But the way these sets relate to each other leads to a well-defined construction for the number system, in which each number is a specific set that intuitively has that number of members. The story doesn't stop there. Once you've defined the positive whole numbers, similar set-theoretic trickery defines negative numbers, fractions, real numbers (infinite decimals), complex numbers... all the way to the latest fancy mathematical concept in quantum theory or whatever. So now you know the dreadful secret of mathematics: it's all based on nothing.
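The construction traced through the last few paragraphs — 0 as the empty set, each later number as the set of all its predecessors — can be played with directly (a sketch using Python's frozenset, since set members must themselves be immutable):

```python
# Building numbers from nothing: 0 is the empty set, and each number n
# is the set {0, 1, ..., n-1} of everything built so far.
empty = frozenset()                 # 0 = {}

def successor(n):
    """n + 1 is n together with n itself as a member."""
    return n | frozenset([n])

def number(k):
    """The set representing the natural number k."""
    n = empty
    for _ in range(k):
        n = successor(n)
    return n

three = number(3)
print(len(three))                   # 3: the set for k has k members
print(number(2) in three)           # True: 2 is a member of 3
```

As the article warns, the nesting explodes quickly: printing `number(7)` — the dwarfs — is already unreadable.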

Wednesday, November 23, 2011

A Burger Every Few Days To Keep Climate Change At Bay

Meat is bad: bad for you, bad for the environment. At least that's the usual argument. Each year, the doors to the UN climate negotiations, which kick off again in Durban, South Africa, on 28 November, are assailed by demonstrators brandishing pro-vegetarian placards. The fact is that livestock farming accounts for a whopping 15 per cent of all greenhouse gas emissions. We can't all go veggie, so just how much meat is it OK for an eco-citizen to eat?
It's not just the demonstrators who are concerned about food's impact on the climate. This week, a major report concludes that food production is too close to the limits of a "safe operating space" defined by how much we need, how much we can produce, and its impact on the climate.
Meat is a major contributor to that: 80 per cent of agricultural emissions come from meat production, and the problem is getting worse. As people get richer, the demand for protein gets stronger, says Molly Jahn, a former undersecretary at the US Department of Agriculture, and one of the authors of Achieving Food Security in the Face of Climate Change, commissioned by the Consultative Group on International Agricultural Research (CGIAR). It's unrealistic to expect everyone to give up meat entirely, and many of the world's poor need to increase their meat consumption to overcome malnutrition and food insecurity.
The solution is to eat less meat rather than no meat. In 2007, Colin Butler of the Australian National University in Canberra estimated that the average person consumed 100 grams of meat a day, or about one burger (a quarter-pounder is 113 g). The rich eat 10 times more than the poor - in other words, some people get 10 burgers a day while others get none. Butler showed that if every person in the world ate 50 g of red meat and 40 g of white meat per day by 2050, greenhouse gas emissions from meat production would stabilise at 2005 levels - a target cited in national plans for agricultural emissions. That's about one burger and one small chicken breast per person every two days.
Butler's 2007 figures didn't take into account the fact that we throw out a lot of the animal mass produced because we consider it inedible. Western countries are the biggest offenders: while many cultures are not fazed by a meal of brains or testicles, Butler estimates that Americans and Australians throw out up to half the cow mass they produce.
At New Scientist's request, he updated his calculations. He estimates that globally we discard between 5 and 10 per cent of the animal. This means we can only allow ourselves 80 to 85 g of red and white meat, or one burger and one chicken fillet every three days. That's an upper limit; emissions may need to be cut further. Our allowance would drop further if more people were as wasteful as the Americans and Australians. And, according to CGIAR, in addition to the waste between the abattoir and the plate, one-third of all produced food is spoiled because of poor refrigeration, pests and bulk packaging that encourages consumers to buy more than they can eat. All of which eats into our meat allowance.
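Butler's arithmetic can be reproduced on the back of an envelope. This sketch is mine, not his actual model; the 113 g burger and the 50 g / 40 g daily targets come from the article, the rest is simple division.

```python
# Back-of-envelope check of the article's meat-allowance figures.
BURGER_G = 113                            # a quarter-pounder, per the article

red_target_g, white_target_g = 50, 40     # grams per person per day by 2050
daily_target = red_target_g + white_target_g

# One burger's worth of red meat lasts a bit over two days at 50 g/day,
# hence "one burger ... every two days".
days_per_burger = BURGER_G / red_target_g

# Updated estimate: 5-10 per cent of the animal is discarded as inedible,
# so the edible allowance shrinks to roughly 80-85 g of meat per day.
allowance_range = [daily_target * (1 - waste) for waste in (0.10, 0.05)]

print(f"one burger every {days_per_burger:.1f} days")
print(f"edible allowance: {allowance_range[0]:.0f}-{allowance_range[1]:.0f} g/day")
```

The one-third spoilage figure cited by CGIAR is not folded in here; it would shrink the allowance further still.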

Tuesday, November 22, 2011

Hothouse Earth Is On The Horizon

An era of ice that has gripped Earth's poles for 35 million years could come to an end as extreme global warming really begins to bite. Previously unknown sources of positive feedback — including "hyperwarming" that was last seen on Earth half a billion years ago— may push global temperatures high enough to send Earth into a hothouse state with tropical forests growing close to the poles.
Climate scientists typically limit themselves to the 21st century when predicting how human activity will affect global temperatures. The latest predictions are bolder, though: the first systematic forecasts through to 2300 are beginning to arrive. They follow four possible futures, including one in which we rapidly cut emissions and another in which we burn fossil fuels into the 22nd century (Climatic Change, DOI: 10.1007/s10584-011-0157-y).
Chris Jones of the UK Met Office in Exeter says that unpublished results suggest the "burn everything" scenario could see atmospheric carbon dioxide levels reach 2000 parts per million - the figure today is 388 ppm. That pulse of CO2 could lead to a global temperature rise of 10°C. Temperatures this high were last seen in the Eocene, 34 million years ago, says Paul Pearson of Cardiff University in the UK. Conditions were so different back then that the Canadian High Arctic was populated by plants that are now found in the south-eastern US (Proceedings of the Royal Society B, DOI: 10.1098/rspb.2011.1704).
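As a rough plausibility check, it is worth asking what climate sensitivity those two numbers imply. The calculation below is mine, not the Met Office's, and assumes the standard simplification that warming scales with the logarithm of the CO2 concentration.

```python
# What per-doubling sensitivity do the article's figures imply,
# assuming warming = sensitivity * log2(CO2 / CO2_baseline)?
import math

co2_now, co2_scenario = 388.0, 2000.0   # ppm, from the article
warming = 10.0                          # degrees C, from the article

doublings = math.log2(co2_scenario / co2_now)
sensitivity = warming / doublings

print(f"{doublings:.2f} doublings of CO2 -> {sensitivity:.1f} C per doubling")
```

That works out at a little over 4°C per doubling, at the top end of the range mainstream models then allowed, which is consistent with the article's point that these long-range scenarios are extreme.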
The Eocene marked the end of a hothouse that had begun in the Cretaceous (see chart). Throughout this time there was no ice at the poles; Antarctica was once populated by dinosaurs. Might the predicted rise in temperature be enough to see a return to an ice-free world?
The poles will warm much more than the tropics, says Tim Lenton of the University of Exeter, UK, so the Arctic could well lose all its ice. But Antarctic ice would probably survive, thinks Andrew Watson of the University of East Anglia, in Norwich, UK, because Antarctica is isolated from the rest of the continents.
In fact, Antarctica may have gained its ice when it became cut off from Australia during the Eocene and lost the warming influence of equatorial currents. Plate tectonic models predict that Antarctica will remain isolated from the other continents for at least the next 250 million years (New Scientist, 17 September, p 16). Even so, Antarctica's icy future may not be secure. The long-term climate models that go up to the year 2300 are missing key positive feedbacks that could send global temperatures towards levels high enough to melt even an isolated Antarctica.
In particular, the release of methane from melting Arctic permafrost has not yet been factored in. Methane is a potent greenhouse gas, but remains in the atmosphere for only 10 years on average before it reacts with hydroxyl radicals in the air to form CO2. However, a large release of methane from melting permafrost could swamp the hydroxyl supply, allowing the methane to linger in the atmosphere for 15 years or more, further amplifying the warming (Global Biogeochemical Cycles, DOI: 10.1029/2010GB003845). Some feedbacks never before considered might also come into play. Pearson says that in the future oceans may store less carbon. Normally some atmospheric carbon is lost at sea, buried in the carcasses of tiny marine animals. But sediment from the Eocene contains little carbon, suggesting that this process failed during the last hothouse (Paleoceanography, DOI: 10.1029/2005PA001230).
To work out why, Pearson looked at fossils of foraminifera, microscopic shelled marine animals. The tiny shells contain a chemical record of the position the animals occupied in the water column when they were alive. He found that Eocene foraminifera lived closer to the ocean surface than they do today, suggesting there was little food to sustain deeper-dwelling species.
Pearson thinks the warmer temperatures allowed bacteria at the ocean surface to metabolise faster, recycling carbon before it could sink and feed foraminifera living at depth. "If we warm the planet now, we switch on our bacteria," he said last month at a Royal Society discussion meeting in London.
A warming climate will also see trees and other large plants spreading north into the Arctic, says Bette Otto-Bliesner of the US National Center for Atmospheric  Research in Boulder, Colorado, who also attended last month's Royal Society event. Plants are darker than snow, so they absorb more of the sun's radiation. When Otto-Bliesner plugged the effect into a climate model of the Arctic, it got 3°C warmer.
Then there's hyperwarming. Ed Landing of the New York State Museum in Albany coined the term to describe the spiralling temperatures seen during the Cambrian period as a result of rising sea levels. Vast areas of the continents were covered with shallow seas during the Cambrian, which began 542 million years ago, because sea levels were sometimes tens of metres higher than today. Sea water absorbs more of the sun's heat than land, so swamping the continents caused the planet to warm up even more. Sea temperatures reached 40°C and oxygen levels in the water crashed (Palaeogeography, Palaeoclimatology, Palaeoecology, DOI: 10.1016/j.palaeo.2011.09.005).
Something similar could happen again today. "These effects will operate as sea level rises to an appreciable degree and floods continental areas," agrees Thomas Algeo of the University of Cincinnati in Ohio. However, the effect today may not be as strong as it was in the Cambrian, says Lee Kump of Pennsylvania State University in University Park. There were no land plants back then, so the continents were more reflective and flooding them had a bigger effect.
Pearson and Landing's processes have not yet been plugged into any climate models so we do not know how significant they will be to our future. Pearson emphasises that hothouse Earth is far from inevitable. "We can prevent this happening," he says. But as researchers dig deeper into the factors that influence global climate, it is becoming increasingly clear that global warming might be about to get much more extreme.

See Beyond The Light To Find Future Disease

DEEP in the heart of the cell, your DNA may be undergoing subtle changes that could lead to a devastating disease several years down the line. New microscopy techniques are now lifting the lid on this inner world, potentially offering an early-warning system for cancer or Alzheimer's long before the diseases begin to bite.
Full-blown disease may be preceded by a long build-up. For example, a change in chromatin — the complex of DNA and proteins that packages DNA into the cell nucleus— is one of the earliest events to occur after exposure to carcinogens or ultraviolet rays. Changes sometimes happen years before symptoms of a tumour manifest themselves.
 However, tracking those changes has been frustratingly beyond the reach of medicine. They involve tweaks to structures that are less than 400 nanometres across, which is smaller than the wavelength of the visible light used in ordinary optical microscopy.
"When you have two structures that are smaller than the wavelength of light, you can't really tell them apart and everything is merged into one big blur," says Vadim Backman of Northwestern University in Evanston, Illinois. "We're missing all that complexity." To make sense of the blur, Backman has ditched standard microscopes in favour of a method called partial wave spectroscopic (PWS) microscopy.
PWS looks at how a light beam interacts with a cell. As the beam travels through the cell it reflects off different structures within according to their density. The pattern from the reflected light is used to reconstruct the nanoscale detail inside the cell.
"It's almost like you have a cat in a black box. Instead of trying to X-ray it, you hear it miaow and so you know it is a cat," says Backman, who presented his work at the Frontiers in Cancer Prevention Research meeting in Boston last month. PWS is one of many new techniques for studying cells at the nanoscale.
It is particularly good at detecting changes in density in complexes like chromatin. So far, Backman has used PWS to show that apparently healthy cells taken from people with lung, colon, pancreatic, ovarian and oesophageal cancer have unusual chromatin densities not seen in cells from people who are cancer-free. What's more, such changes are relatively easy to detect because they often occur in normal cells as well as those that are or will become cancerous.
For example, Backman used PWS to identify which of 135 smokers had lung cancer and which were cancer-free by analysing cells swabbed from the inside of the cheek (Cancer Research, DOI: 10.1158/0008-5472.CAN-10-1686). Similarly, he found that a swab of rectal cells could identify people with colon cancer, and a cervical swab could detect women with ovarian cancer. "It is a very creative and promising method," says Igor Sokolov of Clarkson University in Potsdam, New York, who is using another nanoscale technique called atomic force microscopy to look for differences between healthy and cancerous cervical cells.
"Anything that provides new information about cellular structure at the nanoscale will potentially be advantageous for both diagnostics and further understanding of diseases." The hope is that PWS could be used to screen the general population for early signs of cancer. Backman also has preliminary evidence that PWS could be used to diagnose autoimmune diseases such as inflammatory bowel disease and to investigate the changes in cells that cause Alzheimer's disease to develop.


Brain Doping

MOST of us want to reach our full potential. We might drink a cup of coffee to stay alert, or go for a run to feel on top of the job. So where's the harm in taking a pill that can do the same thing?
So-called cognitive-enhancing drugs are usually prescribed to treat medical conditions, but they are also known for their ability to improve memory or focus. Many people buy them over the internet, which is risky because they don't know what they are getting. We also know next to nothing about their long-term effects on the brains of healthy people, particularly the young. But some scientists believe they could have a beneficial role to play in society, if properly regulated.
So who's taking what? The BBC's flagship current affairs show Newsnight and New Scientist ran an anonymous online questionnaire to find out. I also decided to try a cognitive enhancer for myself.
 The questionnaire was completed by 761 people, with 38 per cent saying they had taken a cognitive-enhancing drug at least once. Of these, nearly 40 per cent said they had bought the drug online and 92 per cent said they would try it again. Though not representative of society, the survey is an interesting, anecdotal snapshot of a world for which there is little data.
 The drugs people said they had taken included modafinil, normally prescribed for sleep disorders, and Ritalin and Adderall, taken for ADHD. The range of experiences is striking. One respondent wrote: "It helps me extend my concentration. I can study a topic for six hours, for example, that would have me bored to tears in two." Another wrote: "Did not help me do anything but feel anxious and excited, could not sit still even 15 hours later."
 When asked about the drugs' potential impact on society, people reported concerns beyond safety, for example warning that the drugs might create a two-tier education system in which some can afford the drugs and others can't. They voiced wider concerns too, such as: "If society has come to the point that we have to take cognitive enhancers to function or perform to certain expected levels, then it is a society that has placed performance over happiness and health."
Laurie Pycroft, a student at the University of Oxford, talked to Newsnight about his experiences with modafinil. "I've taken it a few times, primarily for its ability to increase wakefulness and allow me to concentrate and stay awake for very extended periods of time. I don't take it very often but if I want to stay awake for 20 or 30 hours working on an essay it's very useful," he said.
Keen to learn more, I contacted Barbara Sahakian, a neuroscientist at the University of Cambridge. She and her team work with people who have conditions such as Alzheimer's and Parkinson's disease. One area of their research is testing whether cognitive-enhancing drugs such as modafinil help. Sahakian thinks these drugs could play a wider role in society.
Her most recent research showed that sleep-deprived surgeons performed better on modafinil. "I do think we've undervalued [the drugs]. As a society we could perhaps move forward if we all had a form of cognitive enhancement that was safe," she told me. Before I could self-experiment with the drug I had to satisfy Sahakian's colleague James Rowe that there were no risks. We also had trained medical staff nearby. I took a tablet on two separate days without knowing which one was modafinil and which was a placebo. I then did an hour or so of tests involving memory, strategy, planning and tests of impulsiveness.
On the second day I felt more focused and in control and thought I performed better in the tests. That was the day I had been given modafinil. Rowe summed up my performance: "What we've seen today is some very striking improvements ... in memory and, for example, your planning abilities and on impulsivity." It's human nature to want to push against our limitations, but what about the risks? Before sanctioning a drug as a cognitive enhancer for healthy people, regulators would require long-term safety studies so they could weigh up the risks and benefits.
Pharmaceutical companies are not rushing to carry out such studies, but Sahakian is calling for such work to be done before someone comes to harm. Some cognitive enhancers, such as Ritalin, are controlled drugs. Modafinil is not, so it is legal to buy it online, though it is illegal to supply it without a prescription. The UK government, through the Medicines and Healthcare products Regulatory Agency, told Newsnight that tackling the illegal sale and supply of medicines over the Internet is a priority. It's not just students who claim to find the drug beneficial.
Anders Sandberg of the Future of Humanity Institute at the University of Oxford talks openly about using cognitive-enhancing drugs. He is about to start a study in Germany to compare the effects of a range of cognitive enhancers, including two hormones —ghrelin, which promotes hunger, and oxytocin, which is associated with empathy—to test their powers at what he calls "moral enhancement".
"Once we have figured out how morality works as an emotional and mental system there might be ways of improving it," he told me. The bottom line is that cognitive-enhancing pills are a reality and people are using them. But how comfortable are we with the knowledge that some of our children's classmates might be taking such drugs to perform better at school, or that one candidate for a job interview might use modafinil to outshine the others? And who was the real me, the one on modafinil, or the one not? Perhaps we should start thinking these questions through, before a drug offering far more than a few percentage points of enhancement comes our way.