Friday, March 15, 2013

Witness The Early Universe (Through a Telescope)


Radio astronomy: The patchwork array

Science isn't done in a vacuum.  As with any project involving multiple nations, cultural sensitivity is imperative.  Read about how people overcame serious obstacles to build an array of telescopes that enables us to look back in time. -A.T.

After years of delays and cost overruns, an international collaboration is finally inaugurating the world's highest-altitude radio telescope.

Eric Hand for Nature
13 March 2013


Eyes on the sky at the Atacama Large Millimeter/submillimeter Array.
STÉPHANE GUISARD/ESO

The car toils upwards along the sinuous road, its engine tuned for the thin air. The clumps of cactus and grass along the road soon give way to bone-dry lifelessness. By the time the car reaches 4,000 metres above sea level, Pierre Cox has a bit of a headache. By the time it reaches the 5,000-metre-high Chajnantor plateau — one of the highest, driest places on Earth, and one of the best for astronomy — the altitude is affecting his bladder. Cox, the incoming director of the Atacama Large Millimeter/submillimeter Array (ALMA) in Chile, is about to glimpse the giant telescope dishes he will soon be responsible for. But first he must find a toilet.

Cox slides out of the car and staggers into ALMA's glass and steel operations centre. The current director, Thijs de Graauw, a trim 71-year-old Dutchman, follows Cox inside and sits down. For him, journeys like this occur weekly — if not daily — but he knows that they are no joke. First-timers get a mandatory medical screening before being allowed up to the plateau, and regular shift workers pad around the building with tubes in their noses and oxygen tanks on their backs. “Everyone okay?” De Graauw asks the group of astronomers who have accompanied Cox to ALMA on this December day. “No victims yet?”

Cox re-emerges from the toilet, puts on wraparound sunglasses and, slightly dizzy, heads outside with the group. Scattered across the surrounding plain of brown volcanic soil are dozens of huge white radio antennas, looking as out of place as the stone statues on Easter Island. High on this cold and lonely plateau, they are gathering photons from the cold and lonely parts of the Universe — the dimly glowing clouds of dust and gas where stars are born. Their signals are then combined into images that have a resolution better than that of the Hubble Space Telescope.

The stillness of the tableau breaks as the dishes begin to tilt and swivel in unison. “My goodness,” says Cox, hushed by the sight of so much metal moving so quickly and quietly.

But the choreography is not quite uniform. Clustered tightly in the middle of the array are 12 dishes, each 7 metres across, and four 12-metre dishes, from Japan. Spaced farther out are 25 dishes, each 12 metres across and fitted together like pie slices, from the United States. And scattered among those are the first of 25 dishes from Europe, each 12 metres across — top-of-the-line carbon-fibre devices pivoting on silky-smooth gearing.

The last of those European antennas will not be installed until the end of 2013, when ALMA will finally reach its full complement of 66 dishes. Rather than wait until then, however, the project held a formal inauguration ceremony on 13 March to celebrate the collaboration that made it all possible. A total of 19 countries have contributed to ALMA, through three primary partners: the European Southern Observatory (ESO); the National Astronomical Observatory of Japan; and the US National Radio Astronomy Observatory (NRAO) in Charlottesville, Virginia, funded by the US National Science Foundation (NSF).

Less celebrated have been the difficulties of keeping this unwieldy confederation on track — with power shared among three independent organizations that have different cultures and norms. Nor is anyone likely to cheer about how the lack of unity caused the US$1.4-billion project to come in several years late, well over cost and downsized from its original ambitions. Its successive directors have had to be diplomats and negotiators as much as scientists.

But ALMA is not unique in that respect. International mega-projects are becoming increasingly common in astronomy. Witness the Square Kilometre Array, a proposal to build 3,000 radio dishes with a total collecting area approaching 1 square kilometre in Australia and South Africa (see Nature 484, 154; 2012). As costs for such ambitious projects cross the billion-dollar threshold, nations are finding that they cannot go it alone — a situation for which ALMA might serve as a valuable object lesson. “I think it's the largest science project ever where nobody was in charge,” says Ethan Schreier, president of Associated Universities Incorporated (AUI), a radio-astronomy research-management company based in Washington DC, which operates the NRAO. “But we have made it work.”

Family ties
Each of the three primary partners came to Chile in its own way, with pilot projects dating from the 1980s. Of particular interest for Europe was the infrared glow from the dust that shrouds many of the Universe's first galaxies. This glow can be used to estimate the size, brightness and number of stars hidden within — key questions for astronomers trying to piece together the history of galaxy formation. Shifted to longer wavelengths by the expansion of the Universe, this glow reaches telescopes on Earth as millimetre-wave radiation — and can be detected day or night, as long as there isn't much atmospheric water vapour in the way.

To get at the earliest (and thus most distant and faint) of these dusty galaxies, European astronomers needed a large collecting area. They proposed an array of 16-metre dishes on the salt flats of the Atacama Desert, more than 1 kilometre lower than the Chajnantor plateau. US astronomers were more interested in star formation within our Galaxy, and wanted the better image quality that would come with an array of 8-metre dishes placed more closely together. They also wanted to push into the shorter wavelengths of the unexplored submillimetre band, where they could study chemical-emission lines from molecules in interstellar gas clouds. They pushed hard for Chajnantor, which was high and dry enough for submillimetre observations, and flat enough for the dishes to be moved into various configurations.

Pooling resources was an obvious move for the two projects, and in 1997, ESO director Riccardo Giacconi and NRAO director Paul Vanden Bout signed a joint resolution to pursue a compromise — a facility of 64 dishes, each 12 metres across. “Riccardo and I signed this document with no authority whatsoever,” says Vanden Bout. The official backing from the ESO and the NSF wouldn't come for another six years.


ALMA (ESO/NAOJ/NRAO)/L. CALÇADA (ESO)

Japan joined the partnership in 2004 and committed to building 16 dishes in the centre of the array. The more widely spaced US and European dishes would provide high-resolution detail in a narrow field of view. But the compact array would give a more complete view of large objects such as the Galactic Centre or the sprawling, dusty clouds where the Milky Way forms its new stars.

The patchworked nature of ALMA's creation is reflected in its organizational structure. Partner agencies have been loath to relinquish control over budgets (or anything else), so the coordinating body that manages array operations — the Joint ALMA Observatory (JAO) — has no formal authority. For example, when Chile created a science preserve on the Chajnantor plateau (in return for 10% of ALMA's observing time), officials signed the lease with AUI and the ESO, not the JAO.

Cultural sensitivity
ALMA directors quickly learn that management works best through persuasion, not proclamation. “You have to seduce,” de Graauw says. Cultural sensitivity is also required. On conference calls, de Graauw says, his Japanese colleagues would say nothing until he solicited them directly for comments. Alison Peck, deputy project scientist for ALMA, learned a similar lesson about setting deadlines. “In Japan, it's really not okay to miss a deadline,” she says. “In the United States, you can usually make reasonable excuses and ask for an extension. In Europe they worry about it even less.”

ALMA's motley nature is apparent even in the 12-metre telescope dishes, the array's biggest single cost. From the beginning, the technical requirements were “truly daunting”, says Tony Beasley, a former ALMA project manager and current head of the NRAO. Each dish needed a motor that could accurately point at celestial targets to within 0.6 arcseconds (about the same apparent size as a bacterium at arm's length); a reflecting surface with an accuracy of 25 micrometres (about one-quarter of the width of a human hair); and structural materials that could maintain that precision in the face of Chajnantor's wicked winds and subzero temperatures.
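As a sanity check on that analogy (a back-of-envelope calculation, not one from the article), the small-angle formula converts 0.6 arcseconds at an assumed arm's length of 0.7 metres into a physical size:

```python
# Sanity-checking the "bacterium at arm's length" analogy using the
# small-angle approximation: size = angle (rad) * distance.
# Arm's length of 0.7 m is an assumed figure.
ARCSEC_PER_RAD = 206_265

def size_at_distance(angle_arcsec, distance_m):
    """Physical size subtended by a small angle at a given distance."""
    return angle_arcsec / ARCSEC_PER_RAD * distance_m

size_um = size_at_distance(0.6, 0.7) * 1e6
print(round(size_um, 1))  # ~2.0 micrometres -- indeed bacterium-sized
```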

The cheapest way to meet those requirements would have been for the ESO and the NRAO to share a single design and a single contractor. But the NRAO went with a small US firm — Vertex, which was later bought by General Dynamics of Falls Church, Virginia — and the ESO held out for a European consortium led by Thales Alenia Space, based in Paris. The delays associated with going to separate contracts came just as prices for commodities such as steel were rocketing because of demand in China, leading to a dramatic escalation in ALMA's cost. As a result, in 2005, the project was 'descoped' — the NRAO and the ESO would each contribute only 25 antennas rather than 32, resulting in a loss in array sensitivity (see Nature 439, 526–528; 2006). Even with the descope, US and European contributions to ALMA would grow from $650 million to $1 billion.

Japan, meanwhile, had contracted its dishes to Mitsubishi Electric, based in Tokyo. The three companies maintained separate assembly sites at the ALMA operations support facility (OSF), a cluster of buildings where most staff members live and work. (The OSF was built at 2,900 metres, in part because it costs less to hire Chilean workers for altitudes lower than 3,000 metres.)

It is too early to tell whether one design will outperform the others. The ESO's carbon-fibre dishes change pointing position with fewer errors, but it is uncertain how well the advanced internal gearing will hold up to weather over time. So far, all the antennas are performing to specifications. But having three different designs will saddle ALMA with extra operations costs far into the future, says Neal Evans, an astronomer at the University of Texas at Austin and chair of the ALMA board. “You'll need different spare parts, and you'll need people that know how to maintain each of the designs,” he says.

Ambitious targets
Despite all the headaches, antennas are steadily accumulating on the plateau. In 2007, the JAO team raised glasses of water to celebrate the first linking of two dishes using the correlator — a computer that connects dish signals to create a composite view of the sky. (Why no champagne? The altitude impairs judgement, even at 2,900 metres, so ALMA has a strict no-alcohol policy; workers are subject to random breathalyser tests on the buses connecting local towns to the OSF.)

In September 2011, with 16 dishes in place, ALMA began its inaugural observing period with the 100 or so projects that had risen to the top of its 'cycle 0' proposal competition. Most of the observation targets were relatively nearby objects in our Galaxy. Results ranged from the detection of sugar-related molecules in a nearby star system to an exceptionally sharp image of the gas clumps that will collapse into giant stars (see Nature 492, 319–320; 2012).

But the targets will soon become more ambitious. The mathematics of radio arrays implies an inverse relationship between antenna spacing and image resolution: the longer an array's 'baselines' (the distances between pairs of antennas), the smaller its field of view and the higher its resolution. The number of baselines, which determines how 'filled in' an ALMA image is, has grown simply through the addition of antennas. But the observatory can also change baselines by moving the antennas around the plateau — with the help of two German-built transporters nicknamed Otto and Lore (see 'ALMA, small and large').
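Those rules of thumb can be sketched numerically. The snippet below is illustrative only, not ALMA software: every unique antenna pair contributes one baseline, so the count grows as N(N−1)/2, and the standard diffraction-limit estimate λ/B gives the angular resolution.

```python
# Rough numbers only: how antenna count and baseline length set a
# radio array's imaging properties, using standard interferometry
# rules of thumb (not ALMA's actual imaging pipeline).
import math

def num_baselines(n_antennas):
    """Each unique pair of antennas contributes one baseline."""
    return n_antennas * (n_antennas - 1) // 2

def resolution_arcsec(wavelength_m, max_baseline_m):
    """Diffraction-limited angular resolution ~ lambda / B, in arcseconds."""
    return math.degrees(wavelength_m / max_baseline_m) * 3600

# Growth from the first fringes in 2007 to the full array:
print(num_baselines(16))   # 120
print(num_baselines(66))   # 2145

# 1 mm observations on a 1 km baseline vs a 10 km baseline:
print(round(resolution_arcsec(1e-3, 1_000), 3))   # 0.206 arcsec
print(round(resolution_arcsec(1e-3, 10_000), 4))  # 0.0206 arcsec
```

Each antenna added multiplies the baseline count, which is why the image "fills in" so quickly as dishes accumulate; moving the dishes farther apart then trades field of view for resolution.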



The Antennae galaxies as observed by ALMA (red and yellow) and the Hubble Space Telescope.
ALMA (ESO/NAOJ/NRAO); NASA/ESA HUBBLE SPACE TELESCOPE

In January, ALMA began its cycle 1 observations with 32 of the 12-metre dishes working at baselines of up to 1 kilometre, combined for the first time with some of the smaller Japanese dishes in the centre of the array. By the time cycle 2 begins, in early 2014, ALMA astronomers hope to have 40 dishes working at baselines of up to 2 kilometres. The resulting high resolution will help astronomers to understand star formation in distant galaxies seen very early in their lives, when the Universe was young and its chemical composition was different. “ALMA could very well open up a whole new field of star formation,” says Linda Tacconi, an astronomer at the Max Planck Institute for Extraterrestrial Physics in Garching, Germany.

ALMA will also be able to pinpoint how far away, and therefore how old, an object is. Usually, that measurement is a two-step process. Researchers first need time at a radio-astronomy facility to locate the object — a distant galaxy, say — then must spend many hours on an optical telescope to split the faint light up into its spectral components and identify emission or absorption lines caused by the presence of various elements and molecules. Measuring how far those wavelengths have been stretched by cosmic expansion allows observers to estimate how far away the object is.

ALMA can do all of the above within minutes. Already, ALMA observations have shown that strangely bright early galaxies were in fact multiple smaller galaxies that had been lumped together by an earlier optical survey (A. Karim et al. Preprint at http://arxiv.org/abs/1210.0249; 2013). The discovery was a relief to theorists, who had been unable to work out how such bright, huge galaxies could have formed so early in the Universe.

Once ALMA reaches baselines of 10 or more kilometres, astronomers will be able to turn their attention to stars forming in our Galaxy. The observatory has already detected gas flows in the disk surrounding a newborn star, crossing a gap that indicates the presence of a giant planet (S. Casassus et al. Nature 493, 191–194; 2013). Eventually, for some of the star systems closest to Earth, ALMA astronomers could have a shot at seeing the whirlpools of gas in which planets themselves are coalescing.

The deep unknown
But most of these projects will have been preordained — interesting stars, clouds or galaxies already seen in different parts of the spectrum by other telescopes. Many astronomers think that ALMA needs to forge a new path. They are calling for a 'deep-field survey' — a time exposure of a patch of sky for hundreds of hours, long enough to image extremely faint objects in that field and, possibly, to glimpse the formation of the Universe's first galaxies. “I think it's something that has to be done,” says Leonardo Testi, ALMA project scientist for the ESO. “If you only follow up on something else, then you are only looking after things you already know.”

The question is whether the JAO is strong enough to marshal ALMA's partners to do a deep-field survey. The Hubble Space Telescope has done several such surveys over the past two decades — but only because Hubble directors have allocated large chunks of discretionary time to the projects, thereby circumventing the fierce competition for observation slots. ALMA directors have very little discretionary time — almost every data-taking moment has been allocated. To do a deep-field survey, the partners would have to donate the time — a tough sell for a facility now receiving around six proposals for each available slot.


The problem highlights a complaint common among JAO staff: none of the partners can call the shots. Europe and the United States have equal shares, both larger than Japan's, but no one has a majority. “There's no tiebreaker,” says Al Wootten, an astronomer at the NRAO.

Yet Beasley doubts that the process would have been any smoother if Europe or the United States had taken the lead. For smaller projects, he says, with stakes on the order of millions of dollars, minority funding partners might accept some decisions that run counter to their interests. But with a project the size of ALMA, in which even a minority stake is hundreds of millions of dollars, funding agencies will fight to protect their interests. “No one is going to lose any significant decisions at that point,” he says.

Beasley says that it would be better to create a strong central authority at the outset, and persuade funding agencies to grant it budgetary powers. There are precedents, particularly the treaty-governed European research institutes such as CERN, a particle-physics facility in Geneva, Switzerland, and the ESO itself, whose member states pay dues each year. And in 2011, the Square Kilometre Array created the SKA Organisation — a non-profit company based at the Jodrell Bank Observatory near Manchester, UK — which might give it the authority missing from the JAO.

But it is hard to imagine a funding agency such as the NSF — which answers to the US Congress — ceding control. So in the near term, big astronomy is likely to be governed by loose confederations, and the success of future mega-projects will depend on the savvy and sweat of the people within. Anyone who has served as ALMA director would know something about this. Each charmed rather than shouted his way to success. De Graauw had his courtliness; Massimo Tarenghi, director from 2003 to 2008, a certain puckishness. Cox's weapon might be positivity: the new director seems always to be grinning. Tarenghi hopes that those smiles will stay after Cox takes the helm in April. “The person that suffers most is the poor director,” he says wryly.

Coming down the mountain
By the end of the one-hour tour of ALMA, Pierre Cox is in fact suffering — from oxygen deprivation. Yet he still seems to be on cloud nine. “I'm infinitely grateful. I'm honoured. I'm thrilled,” he says. “This is one of the coolest places I've ever been.” He gets in the car for the downhill journey and slips an oxygen saturation meter over his finger. First it reads 70%, then 76%. Not good. The driver, who has already put oxygen tubes in his own nose, calmly hands Cox a pressurized can of oxygen. Cox takes a squirt in his mouth and checks his numbers again. More than 90%. Much better.

Maybe it's the rush of oxygen to the brain, but Cox becomes an enthusiastic chatterbox. The high-redshift Universe will be just the beginning, he says. He won't be completely satisfied following up on the objects others have already spotted. “There will always be surprises,” he says. The car passes a vicuña (a relative of the llama) standing sentinel at the lip of a gully. ALMA's dishes have vanished behind the edge of the plateau. The OSF appears in the distance below, white rooftops shimmering as the desert heats up for the day. By 4,000 metres, the air is getting thicker. The oxygen has a soporific effect. De Graauw's head begins to nod. Cox yawns loudly. Inexorably, his eyelids close.

Behind him, on an isolated plain at the top of the world, the eyes of ALMA remain open, alert to the earliest glimmers of the Universe.

Nature 495, 156–159 (14 March 2013) doi:10.1038/495156a

Project Superhero


Choose Your Own Sixth Sense

DIY superpowers for the cyborg on a budget.

Sixth Sense
Illustration by Alex Eben Meyer
Imagine for a moment that you could choose any superpower you wanted. If you’re the demonstrative sort, you might be tempted by something dramatic, such as Hulk-like strength or the ability to fly. Or perhaps you’d prefer something a little more discreet, like a self-healing body or the power to read minds.
But if you’re a certain type of pragmatist, you’ll dismiss all of the above as a mere parlor game. Why waste time dreaming about things that are impossible (for now, at least) when you can have a more modest superpower today, at a reasonable price?
That’s the premise behind a small but growing subculture of DIY biohackers, body hackers, grinders, and self-made cyborgs, who are taking advantage of widely available technologies such as tracking chips, LEDs, magnets, and motion sensors to imbue themselves with a sixth sense of sorts. They range from professionals such as Kevin Warwick, the publicity-friendly Reading University professor behind Project Cyborg, to spiky-haired cyberpunks such as Lepht Anonym, whose taste in surgical tools runs to vegetable peelers. Call them “practical transhumanists”—people who would rather become cyborgs right now than pontificate about the hypothetical far-off future.
So what kind of sixth sense could you acquire today if you were in the market? Anything from infrared vision to an internal compass to a sort of “spidey sense” that alerts you when something is approaching from behind. And the cost can run from the tens of thousands of dollars to as little as a few bucks, as long as you have a scalpel and a hearty tolerance for risk and pain.
The concept of implanting bionic devices is by no means radical or new in the medical field—just ask anyone with a pacemaker or an insulin pump. But the notion of healthy people sticking gadgets in their bodies for fun, profit, or sensory augmentation is a more recent phenomenon. It’s an offshoot of the transhumanist movement, which took root in California in the 1980s among a set of philosophers, dreamers, and technophiles who believed that emerging technologies could reshape humanity for the better. But while the transhumanists held conferences, wrote books, formed think tanks, and sparred with bioethicists, a few who shared their vision began to wonder where the action was.
In 1998, Warwick, a professor of cybernetics, had a doctor surgically implant a simple radio-frequency identification transmitter in his upper left arm, in an experiment that he called Project Cyborg. The chip didn’t do a whole lot—it mainly just tracked him around the halls of the university and turned on the lights to his lab when he walked in. But Warwick was thrilled and the media were enchanted, declaring him the world’s first cyborg. (Others bestow the title on Steve Mann of the University of Toronto, who has been wearing computers and cameras on his head for decades.) He later followed up with more complex implants, including a 100-electrode chip that transmitted signals from his wrist to a computer.
Warwick’s initial RFID implant was a turning point in the history of transhumanism not because it represented a great technological leap, but because it required no technological leap at all. What he did, anyone could do. To some, that made him a charlatan. To others, it makes him a hero.
What it undeniably did was pave the way for people with far fewer resources to experiment with enhancements of their own—often without the aid of medical professionals. One of the most extreme examples is Anonym, a tattooed young woman from Scotland who describes herself as a “scrapheap transhumanist.” In a memorable appearance at a conference in Berlin in December 2010, Anonym described her first foray into grinding thusly: “I sat down in my kitchen with a vegetable peeler, I shit you not, and I decided to put things in my hands. … The first time I ever sat down, it went horribly, horribly wrong. The whole thing went septic, and I put myself in the hospital for two weeks.” For most people, that would be ample motivation to swear off grinding for good. But Anonym learned lessons and kept at it, successfully implanting an RFID chip before moving on to other implants like a temperature sensor and a neodymium magnet that would vibrate in response to alternating current. Her exploits, in turn, inspired others.
For Tim Cannon, a mild-mannered 33-year-old software developer from Pittsburgh, it was the magnet idea that touched a nerve. “I’ve been a science fiction fan since I was a kid,” he told me. “I’ve just always been interested in nerdy kind of stuff.” When Cannon first saw Anonym, his first thought was, “Oh no, the revolution started without me!” Within a month, he had enlisted a professional tattoo artist to install a polymer-coated magnet in his left ring finger. The process was a lot cleaner than Anonym’s DIY approach, though Cannon says it would have been far more pleasant with a little anesthetic.
So what’s it like having a sense of magnetism? At first it was a little jarring, Cannon says, to feel his finger buzz like a cellphone on vibrate when it came within a foot of a refrigerator. But over time he has developed an intuitive sense of what’s giving off current, and of what sort (vibrations mean alternating current, a tug means direct). And his little superpower, humble as it is, has come in handy around the house on a few occasions, like when the battery light started flickering on his friend’s laptop. “I went over and hovered my hand over the power brick, hovered my hand over the laptop, repeated that a couple of times, and when I got back to the laptop I felt it kind of sputtering—pop, pop—and I noticed that coincided with the battery light coming on. I said, ‘Hey man, your power brick is bad.’ ” He says his friend now calls him “the laptop whisperer.”
Cannon and a few like-minded friends formed a collective called Grindhouse Wetwares, with the tagline, “What would you like to be today?” They’ve built such things as a range-finding sensor that makes their fingers pulse based on how far away the nearest walls are. “You can just sweep it over a room and get an idea for the contours of the room with your eyes closed,” Cannon says. “It’s kind of like a sonar sense.” The group has also experimented with implantable biomedical tracking devices and a gizmo called the “thinking cap,” which zaps the brain with electricity in an effort to heighten the user’s focus. (This risky-sounding procedure, known as transcranial direct current stimulation, has actually been shown to boost cognitive performance in several studies, though it may also have its downsides.)
In Barcelona, a nonprofit called the Cyborg Foundation is pushing a more artistic (and less cringe-inducing) vision of sensory extension. It was founded by Neil Harbisson, an artist and musician who was born with achromatopsia, the inability to see colors. Since 2004, Harbisson has worn a device he calls the eyeborg, a head-mounted camera that translates colors into soundwaves and pipes them into his head via bone conduction. Today Harbisson “hears” colors, including some beyond the visible spectrum. “My favorite color is infrared,” he told me, because the sound it produces is less high-pitched. (This prize-winning short film featuring Harbisson is well worth watching.)
The Cyborg Foundation’s co-founder, Moon Ribas, is working on a sensor that can be attached to the back of her head that will vibrate to alert her when someone is approaching from behind. Mariana Viada, the Cyborg Foundation’s communications manager and an outdoorswoman, is looking into an internal compass that could tell her at all times which way is true north. “People ask me why I would want to extend my senses, and I simply answer, ‘Why not?’ ” Viada says. “There is so much out there to discover.”
As low-tech as these types of devices are, Cannon thinks they’re laying the groundwork for more powerful (and pervasive) human enhancements in the future. And he thinks there will be money in it—but he says Grindhouse Wetwares has no interest in becoming a startup beholden to venture capitalists. “We think that in order to preserve ownership of our bodies, we need to make sure this is open-source. If you think Apple has a problem with you jailbreaking your iPhone, wait until they’re responsible for your heart.”

Wednesday, March 13, 2013


The Body Electric
How much energy can you extract from a dance?
By Emma Roller|Posted Wednesday, March 13, 2013, at 2:40 PM
Knee Brace.
Wouldn't it be cool if you could charge your phone while you walk?

Courtesy of Bionic Power
In The Matrix, human bodies are plugged into an elaborate grid where their energy is harvested to power our robot overlords. That nightmare scenario has its downsides, but practically speaking, human bodies do produce a lot of energy that goes untapped. Researchers are coming up with ways to convert that energy into electricity—power that could be used to charge your cell phone, transmit a wireless signal, or power a medical implant.

The basis for some of this technology has been around for more than 130 years, starting with an experiment conducted in 1880 by brothers Pierre and Jacques Curie. They found that putting pressure on certain types of crystals could produce electricity. Piezoelectricity, or “pressure-driven electricity,” is created from ceramics or crystals such as quartz, zinc oxide, and titanium dioxide. Pressure redistributes these materials’ positive and negative charges. A camping stove or push-button lighter works by pressing down on a piezoelectric ceramic, which produces enough energy to spark a flame.
Energy harvesting is at work in most of the technologies we think of as renewable, such as solar power and wind power. But instead of the sun’s rays or the force of the wind, ambient energy harvesting captures our bodies’ kinetic energy.

Henry Sodano, a materials science engineer at the University of Florida, has been researching human-motion applications for piezoelectricity for 10 years. He developed a backpack equipped with shoulder straps made from piezoelectric materials. As the pack jostles up and down, the force exerted on the straps gets converted into electricity. Long-range hikers could use this to power small electrical devices on the trail.
The population that could benefit enormously from this type of technology is the military. Soldiers in the field are constantly grappling with their power sources, sometimes literally. They carry up to 28 pounds worth of batteries on a mission, according to Sodano—and that’s on top of body armor, ammunition, and other equipment. Using an energy-harvesting device would allow soldiers to power two-way radios, GPS devices, and headlamps without the added weight of batteries. The backpack technology isn’t being sold commercially, and like much of the technology in the field, it’s still in the development process.

Another energy-harvesting device the military is testing is a specialized knee brace developed by Max Donelan, a biomedical physiologist at Simon Fraser University in British Columbia. Donelan is also the chief science officer at Bionic Power, the company that makes the brace, which was spun off from Simon Fraser University in 2007. Bionic Power has R&D contracts with the military in both Canada and the United States, but the company is still one to two years away from putting the technology to use in the field. The brace is connected to a gearbox and a generator that converts the motion of the knee into electricity. Donelan says one minute of walking with the knee brace can generate enough energy for a 30-minute cell phone conversation.

“If you really want to get a lot of power from the body, you want to go to the powerhouses of the body,” Donelan says, such as the muscles that work with the knee joint.
Donelan compares his knee brace to a hybrid car’s regenerative braking. In a conventional car, the brakes act against the motor. In a hybrid car, the brakes reverse the motor and allow it to act as a generator. The system produces electricity that is stored in the car’s battery.

Donelan’s carbon-fiber knee brace will run roughly $1,000, not including the generator and battery. And while that is a prohibitive cost for most grid-dwellers, he says there’s a strong financial argument you can make to the military. Aside from the issue of weight, delivering batteries to the field can get expensive quickly—a 30-cent AA battery might have racked up $30 in external costs by the time it gets to its destination in Afghanistan.
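The article's own figures make the financial case easy to sketch; the break-even count below simply combines the $1,000 brace price with the 30-cent retail and $30 delivered battery costs quoted above:

```python
brace_cost_usd = 1000.0    # approximate knee-brace price from the article
aa_delivered_usd = 30.0    # cost of one AA delivered to the field, per the article
aa_retail_usd = 0.30       # retail price of the same battery

breakeven_batteries = brace_cost_usd / aa_delivered_usd
logistics_multiplier = aa_delivered_usd / aa_retail_usd

print(f"Brace pays for itself after ~{breakeven_batteries:.0f} delivered AA batteries")
print(f"Field delivery multiplies the battery's cost {logistics_multiplier:.0f}x")
```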

Ambient energy harvesting has a lot of possible applications that just aren’t feasible for other types of renewable energy. One example is in the wake of a natural disaster, when rescue workers need quick access to power. Another is in developing countries without sophisticated power grids, where harvested human energy could be used to power anything from cell phones to coolers storing vaccines. This technology isn’t adults-only, either: The company Uncharted Play has invented an energy-harvesting soccer ball for children in developing countries to use. The ball stores the energy from getting kicked around during the day to power a built-in LED light at night.

While piezoelectric technologies don’t scale up as effectively as, say, a field of wind turbines, they do scale down. The materials keep working at the atomic level, and being able to generate electricity on the nano-scale has huge benefits for medicine, according to Amir Manbachi, a graduate student in clinical engineering at the University of Toronto. The body’s mechanical energy could be harvested to power permanent medical devices such as a pacemaker or a middle ear implant, thereby eliminating the need for invasive surgery to replace a battery every few years.

“The problem is that if you are doing a surgery like putting an implant in someone’s head, there’s no battery that provides energy for 20 years,” Manbachi says. “If we can come up with better ways of powering these implants, it’s going to change the whole medical industry.”
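The power budget helps explain why implants are such an attractive target. The figures below are rough assumptions for illustration (a modern pacemaker draws on the order of tens of microwatts, and the beating heart produces roughly a watt of mechanical power); they are not numbers from the article:

```python
pacemaker_draw_uw = 10.0      # assumed pacemaker power draw, microwatts
heart_mech_power_w = 1.0      # assumed mechanical power of the beating heart, watts
harvest_fraction = 1e-4       # tiny assumed fraction safely harvestable

harvested_uw = heart_mech_power_w * harvest_fraction * 1e6   # watts -> microwatts
margin = harvested_uw / pacemaker_draw_uw
print(f"Harvested: {harvested_uw:.0f} µW, a {margin:.0f}x margin over the pacemaker's draw")
```

Even skimming a ten-thousandth of the heart's mechanical output would, under these assumptions, cover a pacemaker's needs with room to spare, which is why the surgery-free battery is a credible goal.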
What works on the battlefield or the operating table isn’t necessarily practical for day-to-day uses. “Really the only market for these things is when you’re not attached to the grid,” Donelan says. “It’s unlikely that most people are going to wear [a knee brace] around New York City on a typical day to charge their cell phones.”

Nonetheless, one London-based start-up is working to make products that harvest ambient energy at a (somewhat) larger scale. Pavegen makes special tiles that absorb energy from pedestrians’ footfalls. CEO Laurence Kemball-Cook, an industrial design engineer, founded the company in 2009. Pavegen doesn’t publicly disclose how the technology works, he says, beyond describing it as a “hybrid” system of piezoelectricity and other harvesting technology.

During the 2012 Olympic Games in London, Pavegen installed tiles at a Tube station and captured almost 1 million footsteps, according to the company’s website. How much energy did that produce? Roughly 1.2 kilowatt hours. To put that number in perspective, 1.2 kilowatt hours would power one standard 100-watt incandescent light bulb for 12 hours, or a more energy-efficient 23-watt compact fluorescent bulb for 52 hours.
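The bulb-hour figures follow directly from the 1.2 kilowatt-hour total:

```python
harvested_kwh = 1.2        # Pavegen's reported total from the Tube station
incandescent_w = 100       # standard incandescent bulb
cfl_w = 23                 # energy-efficient compact fluorescent

incandescent_hours = harvested_kwh * 1000 / incandescent_w
cfl_hours = harvested_kwh * 1000 / cfl_w

print(f"{incandescent_hours:.0f} hours of incandescent light")
print(f"{cfl_hours:.0f} hours of CFL light")
```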

Kemball-Cook defends energy harvesting power, or what he calls “microgeneration,” despite its limitations compared to other forms of renewable energy. In a TED Talk he gave, Kemball-Cook said the average person takes around 150 million footsteps in a lifetime, enough energy, by his reckoning, to power the average house for around three weeks. But when asked about how practical installing Pavegen tiles on a larger scale would be, Kemball-Cook was vague. “It’s a matter of scale, and you can’t scale in a day. ... You don’t want to sell 50,000 products in the first week of your business,” he says.
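Kemball-Cook's lifetime claim can be converted into a per-step figure. The household consumption number below is an assumed average (roughly 30 kilowatt-hours per day), not a figure from the talk:

```python
steps_lifetime = 150e6        # footsteps per lifetime, per the TED Talk
house_kwh_per_day = 30.0      # assumed average household consumption
days = 21                     # "around three weeks"

total_joules = house_kwh_per_day * days * 3.6e6   # kWh -> joules
per_step_j = total_joules / steps_lifetime
print(f"Implied harvest per footstep: {per_step_j:.1f} J")
```

Roughly 15 joules per step is several times what footfall tiles typically claim to capture, which suggests the three-week figure is on the generous side.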

Another footfall-heavy environment Pavegen has taken advantage of is music festivals. In 2011, Pavegen set up an installation at Bestival, a music festival on the Isle of Wight. According to Pavegen, the installation captured 250,000 footsteps and helped charge 1,000 cell phones at the event—though it doesn’t say how much of a charge the installation gave. The company is planning to harvest the energy of 2,000 dancing people to help power an outdoor concert in Singapore. “I can’t guarantee that every single thing in the entire concert is going to be powered by Pavegen,” Kemball-Cook says, a bit optimistically. “But it’s going to be a serious amount of power.”

Products that harvest footfall energy aren’t limited to flooring. Tom Krupenkin, a mechanical engineer at the University of Wisconsin, is marketing a shoe insert that harvests and stores footfall energy to power personal electronics. Using his prototype, Krupenkin says it would take roughly two hours of walking to charge an average smart phone. He and his research partner, J. Ashley Taylor, are working with a “large shoe manufacturer” in the hopes of marketing their product to the general public in one to two years.
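Krupenkin's two-hour figure implies a harvest rate of a couple of watts. The battery capacity below is an assumed spec for a typical 2013-era smartphone, not a number from the article:

```python
battery_mah = 1500        # assumed smartphone battery capacity, milliamp-hours
battery_v = 3.7           # nominal lithium-ion cell voltage
walk_hours = 2.0          # walking time to full charge, per Krupenkin

battery_wh = battery_mah / 1000 * battery_v
implied_harvest_w = battery_wh / walk_hours
print(f"Battery: {battery_wh:.2f} Wh; implied harvest rate: {implied_harvest_w:.2f} W")
```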

Harvesting the body’s energy isn’t a viable alternative to large-scale renewable energy options like wind and solar, but it has extremely useful applications in specific fields. What ambient energy harvesting can also do very effectively is show people how much we can rely on our own bodies to produce the energy we need. “We use so much more power than what we can produce on our own,” Donelan says. “That sounds so dire, but to put a positive spin on it, the way you use human power is not by having people produce electricity, but to use their own power to use less of it.”

Superman's New Phone Booth

WEDNESDAY, MAR 13, 2013 02:08 PM PDT

New York pay phones’ new calling

Sleek digital kiosks, complete with wifi and weather forecasts, will replace the city's outdated telephone booths

BY JILLIAN STEINHAUER
This article originally appeared on Hyperallergic.

The winners of a city-sponsored contest to redesign New York’s payphones have been announced, and it looks like the clunky yet iconic — and these days, often broken — booths of decades past will soon be replaced by slim, digital screens offering wifi, summaries of weather conditions, a chance to pay your parking tickets, and much more.

Smart Sidewalks

From more than 100 submissions, a panel of judges selected six winners in the categories of connectivity, creativity, visual design, functionality, and community impact (two-way tie). The designs vary in terms of their physical look and the specifics of what the kiosks can do, but the overall aesthetic is definitely sleek and minimal. Smart Sidewalks, for instance, the winner for Best Functionality, is a half-foot-wide strip “that folds up from the sidewalk,” according to the designers. It features a touchscreen above ground that would let users send emails, get subway directions, and more — “a location tethered smart-phone,” they say — and a below-the-sidewalk component that would include a sensor to read weather data and a system to collect and filter storm runoff.
Windchimes, one of the winners for Best Community Impact, picks up on this idea of weather sensors, ostensibly “providing real-time and hyper-local records of the city’s rain levels, pollution and other environmental conditions” through a spare, wooden, folded booth. The other community impact winner, The Responsive City, also echoes the form of the old-school phone booth with a curved exterior shell. That design offers to let you pay or dispute your parking tickets or call a cab right from where you stand.
Windchimes’s design for a phone booth
The winner of the Best Visual Design is Beacon, a slim, two-part tower that offers the city ad space on top and gives users access to community message boards, among other functions, on bottom. This one’s solely voice- and gesture-controlled, though, which seems like a disaster waiting to happen in New York City, even if the creators promise “directional microphones and noise canceling speakers …, and an array of sensors to track gestures.” One cool feature is that the screens would adapt for big city events, like becoming mileage markers for the NYC marathon, and would also offer helpful info in case of emergencies, such as directions to nearby shelters (the creators promise “uninterruptible power supply … on a regular and consistent basis, even during a blackout”).
Despite the Best Visual Design win, the Beacon, I think, leaves something to be desired aesthetically: the upper and lower halves of the tower are set in a perpendicular twist — a functional but, in my opinion, sort of dull and unimaginative look. Same with the winner for Best Connectivity, NYFi, which is just a tall, glowing rectangular slab. NYFi does come in two different heights, though, depending on whether the kiosk is in a residential or commercial district, and it promises to be “a hub for free wireless internet access” as well as a catch-all for buying MetroCards, paying for parking, and other amenities. My favorite design, hands down, is the Loop, winner of the Creativity Award, which features a scalable loop emerging from the ground that encases the user in a stylish mini-phone booth — a bit of privacy, plus a nod to its predecessor.
Loop

The city will award a special Popular Choice award to one of these six winners, so you can vote for the one you like best through tomorrow at 5 pm on the City of New York’s Facebook page. After that, the next step will be for the city to solicit a Request for Proposals from companies interested in working on and building the new payphones … all 11,000 of them.

New MRI method fingerprints tissues and diseases


The 7th Link of Dependent Origination - When it Begins

The 7th link of what we call dependent arising is the final cause in what keeps us toiling in unending samsara.  Buddhist scholars define it as "feeling," or "a mental factor that experiences pleasure, pain and neutral feeling. Pleasure leads to a strong desire for more while pain generates an avoidance desire."  Essentially this means that we will do whatever it takes to keep the pleasure flowing while avoiding pain. Sounds normal, logical.  Except for this: we will do whatever it takes, up to and including hurting other people, to get what we want and avoid what we don't.

Science has recently (as in this year) discovered that this process begins in infants as young as 9 months old.  On the one hand, you can stop beating yourself up for your lack of willpower.  It is almost innate, this desire to be with people who think and look like us.  On the other hand, don't you have some work to do?  Because ultimately you must not stop working the path until you've conquered this habit.  People who think and look like us are at best a tool of comfort, at worst an obstacle to our true wish-giving jewels. 

Be like the Terminator.  You must not stop until you win.

Babies Prefer Individuals Who Harm Those That Aren't Like Them

Mar. 12, 2013 — Infants as young as nine months old prefer individuals who are nice to people like them and mean to people who aren't like them, according to a new study published in Psychological Science, a journal of the Association for Psychological Science.


In our social lives, we tend to gravitate toward people who have things in common with us, whether it's growing up in the same town, disliking the same foods, or even sharing the same birthday. And research suggests that babies evaluate people in much the same way, preferring people who like the same foods, clothes, and toys that they like.

This preference helps us to form social bonds, but it can also have a dark side. Disliking people who are different than us may lead us to mistreat them, and excuse -- or even applaud -- cases in which others mistreat people who are different than us.

Are the roots of such tendencies present in infancy? To find out, psychological scientist Kiley Hamlin, now a professor at the University of British Columbia, conducted two studies as a graduate student at Yale University with her advisor Karen Wynn and colleagues.

The researchers had 9- and 14-month-old infants choose which food they preferred: graham crackers or green beans. The infants then watched a puppet show in which one puppet preferred graham crackers, while another preferred green beans. That is, one puppet demonstrated that its food preference was the same as the infant's, while the other demonstrated that its food preference was different from the infant's.

After the puppets chose their foods, infants then watched another puppet show, in which either the similar puppet or the dissimilar puppet dropped its ball and wanted it back. On alternating events, infants saw that one character always helped the ball-less puppet by returning the ball to him, while another character always harmed the ball-less puppet by stealing the ball away.

Finally, infants were given the chance to choose between the helper (giving) and harmer (stealing) puppets.

Unsurprisingly, infants' choices revealed that almost all the infants in both the 9- and 14-month-old groups preferred the character who helped the similar puppet over the character who harmed the similar puppet. Previous research has shown that infants like people who are nice to totally unknown individuals, so it makes sense that they would also like people who are nice to individuals who are similar to them.

Far more surprising was that almost all the infants at both ages preferred the character who harmed the dissimilar puppet over the character who helped him. Infants' preference for those who harmed dissimilar others was just as strong as their preference for those who helped similar ones.

According to Hamlin, these findings suggest that "like adults, infants incorporate information about not only what people do (e.g., acting nicely or meanly) but also whom they do it to (e.g., a person who is liked or disliked) when they make social evaluations."

The researchers confirmed these results in a second experiment, which included a neutral puppet that had demonstrated no food preference and no helpful or harmful behaviors.

This time, the 14-month-olds -- but not the 9-month-olds -- preferred the character that harmed the dissimilar puppet over the neutral puppet, and the neutral puppet over the helper of the dissimilar puppet. These results suggest that when a dissimilar individual is in need, 14-month-olds generate both positive feelings toward those who harm that individual and negative feelings toward those who help him. The researchers suggest that between 9 and 14 months, infants develop reasoning abilities that lead to these more nuanced social evaluations.

These results highlight the fundamental mechanisms that underlie our interactions with similar and dissimilar people.

"The fact that infants show these social biases before they can even speak suggests that the biases aren't solely the result of experiencing a divided social world, but are based in part on basic aspects of human social evaluation," says Hamlin.

But the exact reasons for infants' biased evaluations are still unknown.

"Infants might experience something like schadenfreude at the suffering of an individual they dislike," Hamlin notes. "Or perhaps they recognize the alliances that are implied by social interactions, identifying an 'enemy of their enemy' (i.e., the harmer of a dissimilar puppet) as their friend."

Hamlin emphasizes that even if these kinds of social biases are "basic," it doesn't mean that more extreme outcomes, like xenophobia and intergroup conflict, are inevitable.

"Rather, this research points to the importance of socialization practices that recognize just how basic these social biases might be and confront them head-on," she concludes.

Co-authors on this research include Neha Mahajan of Temple University, Zoe Liberman of the University of Chicago, and Karen Wynn of Yale University.

This research was supported by National Science Foundation Grant BCS-0921515 and National Institutes of Health Grant R01-MH-081877 to Karen Wynn.

Monday, March 11, 2013

Quadriplegic Feeds Herself Chocolate

Post-Modern Telekinesis


Mind Plus Machine
Brain-computer interfaces let you move things with a thought.
By Will Oremus|Posted Monday, March 11, 2013, at 12:05 PM

A man wears a brain-machine interface, equipped with electroencephalography (EEG) devices and near-infrared spectroscope (NIRS) optical sensors in special headgear to measure the slight electrical currents and blood-flow changes occurring in the brain.
Photo by Yoshikazu Tsuno/AFP/Getty Images

Behind a locked door in a white-walled basement in a research building in Tempe, Ariz., a monkey sits stone-still in a chair, eyes locked on a computer screen. From his head protrudes a bundle of wires; from his mouth, a plastic tube. As he stares, a green cursor on the black screen floats toward the corner of a cube. The monkey is moving it with his mind.

The monkey, a rhesus macaque named Oscar, has electrodes implanted in his motor cortex, detecting electrical impulses that indicate mental activity and translating them to the movement of the ball on the screen. The computer isn’t reading his mind, exactly—Oscar’s own brain is doing a lot of the lifting, adapting itself by trial and error to the delicate task of accurately communicating its intentions to the machine. (When Oscar succeeds in controlling the ball as instructed, the tube in his mouth rewards him with a sip of his favorite beverage, Crystal Light.) It’s not technically telekinesis, either, since that would imply that there’s something paranormal about the process. It’s called a “brain-computer interface.” And it just might represent the future of the relationship between human and machine.

Stephen Helms Tillery’s laboratory at Arizona State University is one of a growing number where researchers are racing to explore the breathtaking potential of BCIs and a related technology, neuroprosthetics. The promise is irresistible: from restoring sight to the blind, to helping the paralyzed walk again, to allowing people suffering from locked-in syndrome to communicate with the outside world. In the past few years, the pace of progress has been accelerating, delivering dazzling headlines seemingly by the week.

At Duke University in 2008, a monkey named Idoya walked on a treadmill, causing a robot in Japan to do the same. Then Miguel Nicolelis stopped the monkey’s treadmill—and the robotic legs kept walking, controlled by Idoya’s brain. At Andrew Schwartz’s lab at the University of Pittsburgh in December 2012, a quadriplegic woman named Jan Scheuermann learned to feed herself chocolate by mentally manipulating a robotic arm. Just last month, Nicolelis’ lab set up what it billed as the first brain-to-brain interface, allowing a rat in North Carolina to make a decision based on sensory data beamed via Internet from the brain of a rat in Brazil.

So far the focus has been on medical applications—restoring standard-issue human functions to people with disabilities. But it’s not hard to imagine the same technologies someday augmenting capacities. If you can make robotic legs walk with your mind, there’s no reason you can’t also make them run faster than any sprinter. If you can control a robotic arm, you can control a robotic crane. If you can play a computer game with your mind, you can, theoretically at least, fly a drone with your mind.

It’s tempting and a bit frightening to imagine that all of this is right around the corner, given how far the field has already come in a short time. Indeed, Nicolelis—the media-savvy scientist behind the “rat telepathy” experiment—is aiming to build a robotic bodysuit that would allow a paralyzed teen to take the first kick of the 2014 World Cup. Yet the same factor that has made the explosion of progress in neuroprosthetics possible could also make future advances harder to come by: the almost unfathomable complexity of the human brain.

From I, Robot to Skynet, we’ve tended to assume that the machines of the future would be guided by artificial intelligence—that our robots would have minds of their own. Over the decades, researchers have made enormous leaps in AI, and we may be entering an age of “smart objects” that can learn, adapt to, and even shape our habits and preferences. We have planes that fly themselves, and we’ll soon have cars that do the same. Google has some of the world’s top AI minds working on making our smartphones even smarter, to the point that they can anticipate our needs. But “smart” is not the same as “sentient.” We can train devices to learn specific behaviors, and even to out-think humans in certain constrained settings, like a game of Jeopardy. But we’re still nowhere close to building a machine that can pass the Turing test, the benchmark for human-like intelligence. Some experts doubt we ever will: Nicolelis, for one, argues Ray Kurzweil’s Singularity is impossible because the human mind is not computable.

Philosophy aside, for the time being the smartest machines of all are those that humans can control. The challenge lies in how best to control them. From vacuum tubes to the DOS command line to the Mac to the iPhone, the history of computing has been a progression from lower to higher levels of abstraction. In other words, we’ve been moving from machines that require us to understand and directly manipulate their inner workings to machines that understand how we work and respond readily to our commands. The next step after smartphones may be voice-controlled smart glasses, which can intuit our intentions all the more readily because they see what we see and hear what we hear.
The logical endpoint of this progression would be computers that read our minds, computers we can control without any physical action on our part at all. That sounds impossible. After all, if the human brain is so hard to compute, how can a computer understand what’s going on inside it?

It can’t. But as it turns out, it doesn’t have to—not fully, anyway. What makes brain-computer interfaces possible is an amazing property of the brain called neuroplasticity: the ability of neurons to form new connections in response to fresh stimuli. Our brains are constantly rewiring themselves to allow us to adapt to our environment. So when researchers implant electrodes in a part of the brain that they expect to be active in moving, say, the right arm, it’s not essential that they know in advance exactly which neurons will fire at what rate. When the subject attempts to move the robotic arm and sees that it isn’t quite working as expected, the person—or rat or monkey—will try different configurations of brain activity. Eventually, with time and feedback and training, the brain will hit on a solution that makes use of the electrodes to move the arm.
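The calibration loop described above can be sketched in miniature. In the toy model below, the "tuning" matrix that maps intended velocity to firing rates is invented, and the decoder is plain least squares; real systems use richer models (population vectors, Kalman filters), and, crucially, the brain itself adapts on top of whatever the decoder learns:

```python
import numpy as np

rng = np.random.default_rng(0)
n_neurons, n_samples = 32, 500

# Hypothetical tuning: each neuron's firing rate depends linearly on (vx, vy).
true_tuning = rng.normal(size=(n_neurons, 2))
velocity = rng.normal(size=(n_samples, 2))             # intended cursor motion
rates = velocity @ true_tuning.T + 0.1 * rng.normal(size=(n_samples, n_neurons))

# Calibration: fit a linear decoder from recorded rates to known velocities.
decoder, *_ = np.linalg.lstsq(rates, velocity, rcond=None)

# Decode a fresh intention from neural activity alone.
intended = np.array([[1.0, -0.5]])
observed_rates = intended @ true_tuning.T
decoded = observed_rates @ decoder
print("intended:", intended[0], "decoded:", np.round(decoded[0], 2))
```

The decoded velocity lands close to the intended one even though the decoder never saw the true tuning, which is the essential trick: the system only needs a workable statistical mapping, not a full account of what the neurons mean.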

That’s the principle behind such rapid progress in brain-computer interface and neuroprosthetics. Researchers began looking into the possibility of reading signals directly from the brain in the 1970s, and testing on rats began in the early 1990s. The first big breakthrough for humans came in Georgia in 1997, when a scientist named Philip Kennedy used brain implants to allow a “locked in” stroke victim named Johnny Ray to spell out words by moving a cursor with his thoughts. (It took him six exhausting months of training to master the process.) In 2008, when Nicolelis got his monkey at Duke to make robotic legs run a treadmill in Japan, it might have seemed like mind-controlled exoskeletons for humans were just another step or two away. If he succeeds in his plan to have a paralyzed youngster kick a soccer ball at next year’s World Cup, some will pronounce the cyborg revolution in full swing.

Schwartz, the Pittsburgh researcher who helped Jan Scheuermann feed herself chocolate in December, is optimistic that neuroprosthetics will eventually allow paralyzed people to regain some mobility. But he says that full control over an exoskeleton would require a more sophisticated way to extract nuanced information from the brain. Getting a pair of robotic legs to walk is one thing. Getting robotic limbs to do everything human limbs can do may be exponentially more complicated. “The challenge of maintaining balance and staying upright on two feet is a difficult problem, but it can be handled by robotics without a brain. But if you need to move gracefully and with skill, turn and step over obstacles, decide if it’s slippery outside—that does require a brain. If you see someone go up and kick a soccer ball, the essential thing to ask is, ‘OK, what would happen if I moved the soccer ball two inches to the right?’” The idea that simple electrodes could detect things as complex as memory or cognition, which involve the firing of billions of neurons in patterns that scientists can’t yet comprehend, is far-fetched, Schwartz adds.

That’s not the only reason that companies like Apple and Google aren’t yet working on devices that read our minds (as far as we know). Another one is that the devices aren’t portable. And then there’s the little fact that they require brain surgery.  

A different class of brain-scanning technology is being touted on the consumer market and in the media as a way for computers to read people’s minds without drilling into their skulls. It’s called electroencephalography, or EEG, and it involves headsets that press electrodes against the scalp. In an impressive 2010 TED Talk, Tan Le of the consumer EEG-headset company Emotiv Lifescience showed how someone can use her company’s EPOC headset to move objects on a computer screen.

Skeptics point out that these devices can detect only the crudest electrical signals from the brain itself, which is well-insulated by the skull and scalp. In many cases, consumer devices that claim to read people’s thoughts are in fact relying largely on physical signals like skin conductivity and tension of the scalp or eyebrow muscles.

Robert Oschler, a robotics enthusiast who develops apps for EEG headsets, believes the more sophisticated consumer headsets like the Emotiv EPOC may be the real deal in terms of filtering out the noise to detect brain waves. Still, he says, there are limits to what even the most advanced, medical-grade EEG devices can divine about our cognition. He’s fond of an analogy that he attributes to Gerwin Schalk, a pioneer in the field of invasive brain implants. The best EEG devices, he says, are “like going to a stadium with a bunch of microphones: You can’t hear what any individual is saying, but maybe you can tell if they’re doing the wave.” With some of the more basic consumer headsets, at this point, “it’s like being in a party in the parking lot outside the same game.”
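Schalk's stadium analogy has a concrete counterpart: what EEG analysis typically extracts is aggregate band power, not individual "voices." A minimal sketch with synthetic data (the 10 Hz rhythm, noise level, and sample rate are all invented for illustration):

```python
import numpy as np

fs = 256                          # sample rate, Hz
t = np.arange(0, 4, 1 / fs)       # four seconds of synthetic "EEG"
rng = np.random.default_rng(1)

# A 10 Hz alpha rhythm buried in broadband noise.
signal = np.sin(2 * np.pi * 10 * t) + rng.normal(scale=1.0, size=t.size)

spectrum = np.abs(np.fft.rfft(signal)) ** 2
freqs = np.fft.rfftfreq(t.size, 1 / fs)

alpha = spectrum[(freqs >= 8) & (freqs <= 12)].sum()
share = alpha / spectrum.sum()
print(f"Alpha-band share of total power: {share:.0%}")
```

The rhythm shows up clearly as a bump in one frequency band, the "wave" in Schalk's stadium, but nothing in this computation could recover what any single neuron was doing.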

It’s fairly safe to say that EEG headsets won’t be turning us into cyborgs anytime soon. But it would be a mistake to assume that we can predict today how brain-computer interface technology will evolve. Just last month, a team at Brown University unveiled a prototype of a low-power, wireless neural implant that can transmit signals to a computer over broadband. That could be a major step forward in someday making BCIs practical for everyday use. Meanwhile, researchers at Cornell last week revealed that they were able to use fMRI, a measure of brain activity, to detect which of four people a research subject was thinking about at a given time. Machines today can read our minds in only the most rudimentary ways. But such advances hint that they may be able to detect and respond to more abstract types of mental activity in the future.