Wednesday, April 24, 2013

Bodhisattva Thinking and You


What Are the Top 10 Things That We Should Be Informed About in Life?
by Justin Freeman, pastor and former police officer, for Slate


Image by building.co.uk/

1.  Realize that nobody cares, and if they do, you shouldn't care that they care. Got a new car? Nobody cares. You'll get some gawkers for a couple of weeks—they don't care. They're curious. Three weeks in, it'll be just another shiny blob among all the thousands of others crawling down the freeway and sitting in garages and driveways up and down your street. People will care about your car just as much as you care about all of those. Got a new gewgaw? New wardrobe? Went to a swanky restaurant? Exotic vacation? Nobody cares. Don't base your happiness on people caring, because they won't. And if they do, they either want your stuff or hate you for it.

2. Some rulebreakers will break rule No. 1. Occasionally, people in your life will defy the odds and actually care about you. Still not your stuff, sorry. But if they value you, they'll value that you value it, and they'll listen. When you talk about all of those things that nobody else cares about, they will look into your eyes and consume your words, and in that moment you will know that every part of them is there with you.
Spend your life with rulebreakers. Marry them. Befriend them. Work with them. Spend weekends with them. No matter how much power you become possessed of, you'll never be able to make someone care—so gather close the caring.

3. Money is cheap. I mean, there's a lot of it—trillions upon trillions of dollars floating around the world, largely made up of cash whose value is made up and ascribed to it, anyway. Don't engineer your life around getting a slightly less tiny portion of this pile, and make your spirit of generosity reflect this principle. I knew a man who became driven by the desire to amass six figures in savings, so he worked and scrimped and sacrificed to get there. And he did ... right before he died of cancer. I'm sure his wife's new husband appreciated his diligence.

4. Money is expensive. I mean, it's difficult to get your hands on sometimes—and you never know when someone's going to pull the floorboards out from under you—so don't be stupid with it. Avoid debt on depreciating assets, and never incur debt in order to assuage your vanity (see rule No. 1). Debt has become normative, but don't blithely accept it as a rite of passage into adulthood — debt represents imbalance and, in some sense, often a resignation of control. Student loan debt isn't always avoidable, but it isn't a given—my wife and I completed a combined 10 years of college with zero debt between us. If you can't avoid it, though, make sure that your degree is an investment rather than a liability—I mourn a bit for all of the people going tens of thousands of dollars in debt in pursuit of vague liberal arts degrees with no idea of what they want out of life. If you're just dropping tuition dollars for lack of a better idea at the moment, just withdraw and go wander around Europe for a few weeks—I guarantee you'll spend less and learn more in the process.

5. Learn the ancient art of rhetoric. The elements of rhetoric, in all of their forms, are what make the world go around—because they are what prompt the decisions people make. If you develop an understanding of how they work, while everyone else is frightened by flames and booming voices, you will be able to see behind veils of communication and see what levers little men are pulling. Not only will you develop immunity to all manner of commercials, marketing, hucksters, and salesmen, and to the beautiful speeches of liars and thieves, you'll also find yourself able to craft your speech in ways that influence people. When you know how to speak in order to change someone's mind, to instill confidence in someone, to quiet the fears of a child, then you will know this power firsthand. However, bear in mind as you use it that your opponent in any debate is not the other person, but ignorance.

6. You are responsible to everyone, but you're responsible for yourself. I believe we're responsible to everyone for something, even if it's something as basic as an affirmation of their humanity. However, it should most often go far beyond that and manifest itself in service to others, to being a voice for the voiceless. If you're reading this, there are those around you who toil under burdens larger than yours, who stand in need of touch and respect and chances. Conversely, though, you're responsible for yourself. Nobody else is going to find success for you, and nobody else is going to instill happiness into you from the outside. That's on you.

7. Learn to see reality in terms of systems. When you understand the world around you as a massive web of interconnected, largely interdependent systems, things get much less mystifying—and the less we either ascribe to magic or allow to exist behind a fog, the less susceptible we'll be to all manner of being taken advantage of. However:

8. Account for the threat of black swan events. Sometimes chaos consumes the most meticulous of plans, and if you live life with no margins in a financial, emotional, or any other sense, you will be subject to its whims. Take risks, but backstop them with something—I strongly suspect these people who say having a Plan B is a sign of weak commitment aren't living hand to mouth. Do what you need to in order to keep your footing.

9. You both need and don't need other people. You need others in a sense that you need to be part of a community—there's a reason we reflexively pity hermits. Regardless of your theory of anthropogenesis, it's hard to deny that we are built for community, and that "we" is always more than "me." However, you don't need another person in order for your life to have meaning—this idea that Disney has shoved through our eyeballs, that there's someone out there for all of us if we'll just believe hard enough and never stop searching, is hokum ... because of arithmetic, if nothing else. Establish your own life—then, if there's a particular person that you can't help but integrate, believe me, you'll know.

10. Always give more than is required of you.

Tuesday, April 23, 2013

It IS Better

The World Is Actually More Peaceful Than Ever

In the aftermath of an awful tragedy, it's hard to remember that political violence is in fact diminishing greatly.


The world is actually more peaceful than ever
Photo by Jessica Rinaldi
BY MICHAEL LIND for Salon.com

In the aftermath of the Boston Marathon bombings, it is important to keep things in perspective, by emphasizing what the mass media tend to neglect — namely, the fact that the world has become much more peaceful in recent decades and is getting more peaceful all the time.

It does not diminish the horror of mass casualty attacks on civilians, in this and other countries, to point out that today’s terrorist incidents provide a counterpoint to a declining arc of political violence worldwide. Both violence among states and violence within states have diminished dramatically in the last few generations.

If we look at battle deaths in the last century, the spurts in the Cold War, associated with the Korean, Indochina and Soviet-Afghan wars, were dwarfed by the huge spikes of slaughter associated with the world wars. And with the end of the Cold War came a steep decline in political violence worldwide — mainly because the two sides no longer kept local conflicts going by arming and supplying opposing sides from Latin America to Africa to Asia and the Middle East.

Has escalating terrorism succeeded the conventional conflicts of the past? No. The al-Qaida attacks on the U.S. on 9/11 were exceptional in the number of their victims. The results obtained by the Boston Marathon terrorists, who killed only three individuals while maiming scores of others, are more typical.  According to the RAND Database of Worldwide Terrorism Incidents, in the seven years following 2001 the average number of deaths per year from international terrorism was 582. What is more, many suicide bombings and other terrorist attacks are carried out by locals and take place as part of intra-state wars or in countries or regions occupied by foreign forces like Chechnya, Northern Ireland, Palestine, Iraq and Afghanistan. While some of them are carried out by transnational terrorists, many of these incidents do not necessarily fit the category of “international terrorism.”

Indeed, the two defining acts of political violence in the post-Cold War world — Saddam’s invasion of Kuwait and bin Laden’s attack on New York and Washington, D.C. — look ever more like anomalies. Saddam violated the cardinal rule of post-1945 international relations, which proscribed the kind of direct conquest and annexation of foreign territories and populations that had been widely considered legitimate before World War II. The strength of this norm is evident from the fact that so many regimes with otherwise different constitutions and objectives united in condemning the violation of Kuwait’s sovereignty.

Likewise, all regimes, whether liberal or despotic, can imagine their own cities and populations as victims of the kind of mass-casualty terrorism practiced by al-Qaida and other stateless terrorists. During the Cold War, Soviet-backed “freedom fighters” were the anti-Soviet alliance’s “terrorists” and vice versa. But while al-Qaida’s attack on the U.S. enjoyed some support in some Muslim populations, no country hailed bin Laden as a freedom fighter.

What unites opposition to wars of conquest and shared dread of stateless terrorism is the self-interest of states as organizations that seek to monopolize violence, as the German sociologist Max Weber argued. The 17th-century political philosopher Thomas Hobbes would not have been surprised by the correlation between the strengthening of the control of states over their own territories and populations and the decline of violence of all kinds, including homicide per capita (which has been declining in the U.S. for generations). Libertarians may decry the power of police surveillance technologies in the U.S. — but would they really prefer to be terrorized by the James Gang and Al Capone?

As wars among states have declined, most political violence in the world consists of struggles within states. Often this takes the form of insurgencies by ethnic groups against other ethnic groups that dominate the government.

We trivialize these conflicts by calling them “ethnic conflicts,” as though they were turf wars among different groups of hyphenated Americans in the New York of West Side Story. It is more accurate to call them “nationality conflicts” and to recognize that they arise from the fact that there are more nations in the world than there are states. In much of the zone of former European colonization from Africa through the Middle East to Asia, two or more nationalities coexist within arbitrary borders drawn by long-dead French, British or Russian colonial administrators.

In some cases, peaceful coexistence among the constituent nations of a multinational state can be achieved by means of respect for minority rights, or, if that is not enough, by constitutional provisions for partial autonomy for ethnically identified regions. “Asymmetrical federalism” is alien to the U.S. tradition; we have never wanted to have German-speaking or Spanish-speaking states. But asymmetrical federalism has kept Anglophone and Francophone Canada together to date, and works well enough in Belgium (Flemings and Walloons) and Switzerland, with its German, French, Italian and Romansch cantons.

In other cases, it may be best for nations with incompatible differences to divorce, by partitioning a former multinational state into two or more nation-states. Sometimes this has been accompanied by sickening massacres and heartbreaking transfers of population, as in the former Yugoslavia. But in other places, the process has been peaceful, as in the partition of Norway and Sweden more than a century ago, and the bloodless breakup of the former Czechoslovakia not long ago. Americans, taught to “celebrate diversity,” tend to confuse the voluntary diversity of the U.S. — a country of descendants of voluntary immigrants, with the exception of descendants of native Americans, African slaves and some Mexican families in the Southwest — with the involuntary diversity of different nationalities yoked together arbitrarily by some now-extinct European colonial empire.

Partition often promotes peace among now-separated nationalities because the global prohibition against political violence across borders is much stronger than the prohibition against violence committed by regimes or insurgent groups within the borders of a single state. Contrast the different responses of the international community to Saddam’s invasion of Kuwait, which involved crossing a recognized international border, with the response to his suppression of Iraqi Kurds and other minorities within Iraq (itself an artificial state cobbled together by British imperialists from parts of the Ottoman Empire after World War I). My guess is that there will be even less political violence a few generations from now — in part because there will be more nation-states.

If the world today is far safer than it was only a few decades ago, and generally more peaceful than it has ever been in human history, then why don’t we feel safer than we do? Partly it is because of the continuing genuine threat of terrorist incidents like the Boston bombing, which are unnerving because they can happen in places like our own neighborhoods, far from the few remaining war-torn regions of the earth.  Partly it is the intrinsic sensationalism of the media, which prefers headlines about “the Long War against super-empowered terrorists” to “global political violence in historic decline.”

And partly it is what I think of as the Law of the Conservation of Anxiety: As big worries recede, we blow up lesser worries to compensate. So we stopped worrying about global nuclear war in the 1990s, only to panic about the Y2K computer glitch, and turned Osama bin Laden and his allies from criminals who failed most of the time but got spectacularly lucky once into world-historic figures on the scale of Hitler and Stalin waging “World War IV.” Needless to say, threat inflation is encouraged by producer interests, like the computer technicians who were paid well to address the Y2K issue, and lobbyists for the military-industrial complex who argued that a military buildup and invasion of Iraq and Afghanistan would pretty much eliminate the threat of things like the Boston Marathon bombing. But one swallow does not make a spring, and a small number of successful terrorist attacks, horrific as they are, do not augur global anarchy.

Friday, April 19, 2013

The Angel Bag

Warriors throughout the ages have been known to carry a bag of tricks.  And weapons.  A bodhisattva warrior maintains a bag of compassion.  So if you ever need something, look to the angel who always seems to have just what you need.

Tinelle - mints, water, protein/granola bars, hand lotion, coconut oil, an extra lip moisturizer, and pens

David: water, band-aids, vows

Narelle: awesome grab bag for the weary

Sal: MEXICAN, not American, Coke! There is a difference!!!
Snickers, to satisfy the hunger. 
Dental floss, which can also double as zip-line rope
Notebooks to write down secret agent messages. 
Essential Ninja Gear
Warrior Bracelet
& A vow book

Tuesday, April 16, 2013

I Wanna Hold Your Hand


The Bionic Hand with the Human Touch


By George Webster, CNN. Photo from Threatquality.com
February 1, 2013 

What do will.i.am and Iron Man have in common? They're both rather partial to bionic limbs.
In his latest music video, "Scream & Shout," a human hand can be glimpsed clasping what appears to be a sophisticated robot hand stolen from the set of a high-budget sci-fi film.

What many of the video's 97 million (and counting) viewers may not realize, however, is that the appendage in question is a genuine prosthetic hand that, its makers claim, has taken us one step closer to truly simulating the real thing.

Unlike conventional prosthetics, the i-limb Ultra boasts five individually-powered articulating digits, as well as a fully rotatable thumb and wrist, enabling the user to perform a variety of complex grips.

"The first generation (of prosthetics) had what I call a pincer grip -- the fingers are reflexed so they do not change shape and they move in one plane," explains David Gow, the British inventor and engineer behind i-limb. "Whereas what we produced is something that rotates at the knuckles."

The i-limb is the latest in "myoelectric" prosthetics -- a technology that uses electrical sensors to detect tiny muscular movements in the residual limb, which are then translated by an on-board computer into natural, intuitive movement of the mechanized hand.

In practice, this requires the wearer to learn a language of muscle movements around the wrist, which correspond to a vast array of pre-programmed hand and finger motions.

Although it requires a fair bit of concentration to begin with, Gow says that -- much like playing an instrument -- the mechanism is intuitive once muscle memory takes over.
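For the technically curious, here is a minimal sketch of what that translation from muscle signal to pre-programmed grip might look like. The sensor names, threshold, and grip patterns are invented for illustration; Touch Bionics' actual firmware is not public.

```python
# Hypothetical myoelectric control step: average a short window of
# rectified EMG readings and map the pattern onto a stored grip.
from dataclasses import dataclass
from statistics import mean

@dataclass
class EMGSample:
    wrist_flexor: float    # rectified EMG amplitude (invented units)
    wrist_extensor: float

GRIP_PATTERNS = {
    "co_contraction": "pinch_grip",   # both muscles fire together
    "flexor_hold": "power_grip",      # sustained flexor signal
    "extensor_hold": "open_hand",     # sustained extensor signal
}

def classify(window: list[EMGSample], threshold: float = 30.0) -> str | None:
    """Map a short window of muscle activity onto a pre-programmed grip."""
    flexor = mean(s.wrist_flexor for s in window)
    extensor = mean(s.wrist_extensor for s in window)
    if flexor > threshold and extensor > threshold:
        return GRIP_PATTERNS["co_contraction"]
    if flexor > threshold:
        return GRIP_PATTERNS["flexor_hold"]
    if extensor > threshold:
        return GRIP_PATTERNS["extensor_hold"]
    return None  # no confident intent: leave the hand as it is

print(classify([EMGSample(42.0, 5.0)] * 10))  # -> power_grip
```

The windowed average is the point of the "muscle memory" Gow describes: the wearer learns to produce distinct, sustained patterns rather than twitches.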

As well as the practical benefits afforded by the added range of grips, Gow believes the i-limb carries a significant psychological advantage because finger movements are what most people associate with the human hand.

"When you have only one shape for the hand and it is not a particularly everyday natural one, it looks strange," he says. People who've used the i-limb "say they see the digit as what gives them the sense of having a hand back."

Donald McKillop lost his right arm in an accident at home 35 years ago. He was one of the first amputees to try an initial version of the i-limb back in 2007.

"Every day I'm finding new things with it -- it's absolutely amazing ... It's the hand I thought I'd never have again," he said.

But of course, although it can rotate 360 degrees, the i-limb is still far from measuring up to the real thing.
"We can't approach the subtlety of skin, sensation of temperature, touching things yet," admits Gow. "But we've broken through the barrier of making a hand that looks like a medical device."

As things stand, the i-limb is also prohibitively expensive. Including fitting and training, a hand costs in the region of $100,000.

Perhaps in part because of this, most of the 4,000 or so i-limb users in the world are war veterans from Iraq and Afghanistan. Gow notes, however, that the potential market is huge: there are presently an estimated two million upper limb amputees across the globe.

Touch Bionics, the company that Gow founded to produce the i-limb, is looking to take its share. Gow, who left the company in 2009, says sales are accelerating and 2012 saw turnover reach over $16 million.

For the Scotland-based engineer, who abandoned his career in the defense industry to dedicate himself to the study of prosthetics, the i-limb is much more than a business.

"I have seen hundreds of people ... I have seen the father that says 'thank you' on behalf of his son," Gow says, wiping a tear from his eye. "That means an awful lot because you don't, as an engineer, get many moments where you articulate human emotions about these things."

Friday, April 12, 2013

Tech + Music Fest - E = Bliss

Are you too cool for Coachella?  Stop reading now and go meditate then.

If you know me, you are aware that my two great loves are tech and music.  When they come together, I explode!  Sometimes literally.  I once got totally wasted on some fantastic E while allowing a fence to hold me up as the sounds of Daft Punk wove through my thick skull (in French, even better).  I almost died five days later but that's not music's fault.  Don't do drugs, especially when Buddhism tastes so much better.

Lama Marut says the only thing people ever really say on cell phones is, "Where are you?"
Wouldn't it be great to have an app that's part personal GPS, part Where's Waldo, for your friends to be able to find you in a mass of people?  Someone tell me if this works at Diamond Mountain.

Here ya' go, happy Friday...


Photo by beatcrave.com
Marketplace Tech for Friday, April 12, 2013

If you were going to the Coachella Valley Music and Arts Festival this weekend, what would make your must-see list? Nick Cave and the Bad Seeds, Bat for Lashes, and Koolfog -- that last one's not a band, it's a misting system to keep cool outside.

"The technology is based on evaporative cooling," says Bryan Roe, president of Koolfog. "We're putting water into there and it's evaporating and when that evaporation takes place, the heat exchange brings the air temperatures down, so you can get 20, 30, 35 degrees cooling in an outdoor space."

The process, which creates a fog-ish mist, is much more efficient than standard methods of cooling air in hot climates.
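Those quoted numbers line up with the standard textbook model of evaporative cooling, in which mist can cool air most of the way down toward its wet-bulb temperature. A rough sanity check, with an efficiency figure that is a typical assumption rather than Koolfog's spec:

```python
# Direct evaporative cooling: the outlet temperature closes `efficiency`
# of the gap between the dry-bulb and wet-bulb temperatures.
def evap_outlet_temp_f(dry_bulb_f: float, wet_bulb_f: float,
                       efficiency: float = 0.85) -> float:
    return dry_bulb_f - efficiency * (dry_bulb_f - wet_bulb_f)

# A 105 F desert afternoon with a 70 F wet-bulb temperature:
print(round(evap_outlet_temp_f(105, 70), 1))  # 75.2 F -- about 30 F of cooling
```

Dry desert air has a large dry-bulb/wet-bulb gap, which is exactly why misting works so well at Coachella and so poorly in humid climates.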

If you're going to Coachella, you'll also want to pack some other tech supplies to capture the moment and stay connected, which brings us to our summer concert app guide. Click on the links below and blow your mind.  In a good way.

1. Official Coachella 2013 app: The official festival app is free and works on iOS and Android. It provides artist info, set times, a detailed event map, and a grid which helps you pick and schedule the shows you want to see.

2. Shazam: Hear a song you like, but not sure of the name? Shazam lets you hold up your phone, record live, and then identifies the mystery tune.

3. Vine: Capture concert moments six seconds at a time. Vine lets you create and share short video spots. Free on iOS.

4. Gifboom: Or, if you are more of the Tumblr persuasion and prefer GIFs to video, try creating your own festival GIFs with this app, which is available on iOS and Android.

5. Vyclone:  Co-create videos with your concert buds. This app synchronizes footage from several phones to create multiple angle movies.

6. zLocation: Like Hansel and Gretel for your phone. zLocation lets you drop GPS markers so you can wind your way back to your car, misting station, or other chosen meeting spots.

7. Glympse:  Share your location with friends and never get lost in the crowd. This app allows you to send your location to friends and set how long they can track you.

8. Water Your Body: Use your phone to stay hydrated. This app, which costs $1 on iOS and Android, lets you monitor your water intake and needs.

Monday, April 8, 2013

Singularity and Your Brain


Ray Kurzweil on the surprising simplicity of the human brain


Image from thatsreallyposible.com
Interview by David Brancaccio on Marketplace Tech Friday, April 5, 2013

The federal government wants to spend $100 million to unravel the complexity of the human brain. But there's someone else who's been thinking a lot about the brain: The legendary inventor and futurist Ray Kurzweil. Kurzweil has done pioneering work in optical character readers, flatbed scanners, electronic keyboards for musicians, and beyond. He has thought a lot about the ways technology and human beings are becoming more intertwined -- and the future of that connection.

Kurzweil, who holds The National Medal of Technology and Innovation and is director of engineering at Google, joined Marketplace Tech host David Brancaccio to talk about his latest book, "How to Create A Mind," and an irony of the human mind: While the brain's work is complex, it's based on simple components.

On the brain’s complexity:

Kurzweil: "It’s complex and it’s not complex. There’s tremendous amount of redundancy and interchangeability. So one region that’s called V1, that generally recognizes very simple visual features like the cross part of a capital, what happens to it in a congenitally blind person who’s not getting any visual images? It actually gets harnessed by the frontal cortex which deals with very high level concepts like humor and beauty and starts to deal with high level concepts and language, showing the complete interchanability, because high level concepts and language and simple feature of visual images are at opposite extreme ends of the spectrum of complexity. So the brain is complicated but it’s not a level of complexity that we are unable to understand."

On how the brain builds itself:

Kurzweil: "The complexity of [brain] connections actually comes from the complexity of our own experience. Because not only does our brain create our thoughts but our thoughts create our brain. This hierarchy which starts with very simple visual and auditory features and goes all the way up to humor and beauty and irony, we create those connections from the moment we are born or even earlier. It’s true you are what you eat, but it’s even more true that you are what you think. Our ability to see inside the human brain is growing exponentially. You and I’ve talked about exponential growth, and one of the things that’s improving exponentially is spatially resolution of brain scanning. And we can now see inside a living brain and see it real time create these new connections and see these connections firing, and see our thoughts create our brain and then we can use that information to create these biologically inspired models and build intelligent machines using similar principles."

On whether technology helps or hurts our brain potential:

Kurzweil: "The controversy existed when I went to college -- there were these little devices that looked like cell phones but were called calculators. And controversy was that kids weren’t going to learn arithmetic. And guess what, kids don’t know arithametic as well today, but the calculators have not gone away. These are properly brain extenders. We are much smarter that we were decades ago. I've been managing work teams for 45 years and I can now have a group of 2-3 people a few weeks accomplish what used to take a group of 100 or 200 people years. We’re definitely more productive and intelligent. We now have access to all of human knowledge with a few key strokes.

I rely on Wikipedia and Google and all these brain extenders. But they’re not going away and they are part of who we are. We create tools in order to extend our reach. A thousand years ago we couldn’t reach the fruit hanging from a higher branch so we created a tool to extend our physical reach and we now extend our mental reach. And we ultimately will very directly make ourselves smarter by computers directly in our brains. Even though the computers now are, for the most part, not in our brains, even if we interface with them through our fingers and our eyes, they’re still really extensions of our brain."

On how far we are from artificial intelligence:

Kurzweil: "People see fantastic things that seemed like science fiction just a few years ago and they’re taking place now. There’s a sense of this acceleration and exponential growth.  When I talked about computers and artificial intelligence reaching human levels, when I talked about that even around the year 1999, that was a very non mainstream position. We had a conference and took a poll of artificial intelligence experts and the consensus view was that AI at that level was centuries away.

Today it’s a very common view that 2029 is reasonable. Some people may quibble with that date. Some people think I’m pessimistic. If you look at IBM’s Watson system, which got a higher score on Jeopardy than the best players put together -- that’s very impressive. That’s still not human level intelligence in terms of the flexibility and scope, but that should very much give us confidence that we’re on track to accomplish it by 2029."

Friday, April 5, 2013

Eye for an Eye Until We're All Blind


One obstacle to overcoming mental afflictions is righteousness in its most hideous form: retribution.  Forgiveness is not a divine act or a favor that you bestow on someone.  It's salvation for your own small self on the way to becoming your big self.  If an eye for an eye worked as a preventative measure against crime, we'd be in paradise by now.

Right?


Surgical Paralysis Ordered in Saudi Arabia as Punishment for Teenage Assault

Spine-for-a-spine punishment has mother 'frightened to death'

Saudi police stand guard outside the Grand Mosque in Mecca, Saudi Arabia, Feb. 7, 2007.
By STEVEN NELSON for US News April 4, 2013

Ali Al-Khawahir, 24, is awaiting court-ordered surgical paralysis in Saudi Arabia for an assault he committed when he was 14 years old, according to news reports.

Al-Khawahir has reportedly spent 10 years in prison since stabbing a friend in the spine during a fight. The wound left his friend paralyzed. The Saudi legal system allows eye-for-an-eye punishments.

The convicted man's mother told Arabic-language newspaper Al-Hayat that the family is seeking help raising $270,000 in "blood money," which in Saudi Arabia can be requested by a crime's victim – or victim's family in cases of murder – in exchange for sparing the perpetrator the court-ordered punishment.

"We don't have even a tenth of this sum," she said, according to a translation by The Guardian.

"Ten years have passed with hundreds of sleepless nights," his mother told Al-Hayat, according to the English-language Saudi Gazette. "My hair has become grey at a young age because of my son's problem. I have been frightened to death whenever I think about my son's fate and that he will have to be paralyzed."

Amnesty International condemned the sentence as "outrageous" in a statement released this week. "Paralysing someone as punishment for a crime would be torture," said Ann Harrison, the organization's Middle East and North Africa deputy director. "That such a punishment might be implemented is utterly shocking."

Tooth extractions, said Amnesty, have also been ordered in Saudi Arabia.

Israeli news website Ynet reports that 13 years ago a Saudi hospital gouged out an Egyptian man's eye as punishment for an acid attack that injured another man. A similar sentence for an Indian man six years later was set aside after international outrage.

If victims do not seek "blood money" or perpetrators cannot afford to pay the amount requested, the sentence is carried out.

Saudi Arabia's legal system is a perpetual object of scorn. In March, seven men were executed for committing jewelry heists and armed robberies. One of the men said he was 15 at the time he was arrested, claimed he was tortured into confessing and said the defendants did not have legal representation during court proceedings.

Monday, April 1, 2013

The Computerized Cell as Warrior

Biological Computer: Stanford Researchers Discover Genetic Transistors That Turn Cells Into Computers

Biological Computers
By Aaron Sankin for Huffingtonpost.com

Researchers at Stanford University announced this week that they've created genetic receptors that can act as a sort of "biological computer," potentially revolutionizing how diseases are treated.

In a paper published in the journal "Science" on Friday, the team described their system of genetic transistors, which can be inserted into living cells and turned on and off if certain conditions are met. The researchers hope these transistors could eventually be built into microscopic living computers. Said computers would be able to accomplish tasks like telling if a certain toxin is present inside a cell, seeing how many times a cancerous cell has divided or determining precisely how an administered drug interacts with each individual cell.

Once the transistor determines the conditions are met, it could then be used to make the cell, and many other cells around it, do a specific thing--like telling cancerous cells to destroy themselves.

"We're going to be able to put computers into any living cell you want," lead researcher at the Stanford School of Engineering Drew Endy explained to the San Jose Mercury News. "We're not going to replace the silicon computers. We're not going to replace your phone or your laptop. But we're going to get computing working in places where silicon would never work."

The team demonstrated their work using E. coli bacteria, an organism commonly used in genetic research.

Traditional computers use millions of tiny transistors, which control the flow of electrons in the form of the zeros and ones that make up binary code. Multiple transistors working together can form something called a "logic gate," which serves as the basic building block of all computations performed by computers the world over.

The researchers' biological transistors, which they've dubbed "transcriptors," use enzymes to control the flow of RNA polymerase along a strand of DNA, just like a computer would use silicon transistors to control the flow of electrons.
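To make the analogy concrete, here is a toy illustration of how switches controlling a flow become logic: two switches in series pass the flow only when both are on (an AND gate); in parallel, when either is on (an OR gate). The simplification is mine; the Stanford transcriptors gate RNA polymerase with integrase enzymes, not Python booleans.

```python
# Logic gates as switched flows. In silicon the "flow" is electrons;
# in a transcriptor it is RNA polymerase moving along DNA.
def series_gate(switch_a: bool, switch_b: bool) -> bool:
    """Switches in series pass the flow only if both are on: AND."""
    return switch_a and switch_b

def parallel_gate(switch_a: bool, switch_b: bool) -> bool:
    """Switches in parallel pass the flow if either is on: OR."""
    return switch_a or switch_b

# Truth table for the series (AND) arrangement:
for a in (False, True):
    for b in (False, True):
        print(a, b, "->", series_gate(a, b))
```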

In addition to changing the way people think about the human body, biological computers made using these transcriptors could be used to learn more about a litany of other living systems.

"For example, suppose we could partner with microbes and plants to record events, natural or otherwise, and convert this information into easily observed signals," Endy told the Independent. "That would greatly expand our ability to monitor the environment."

Extreme Tech reports:

You need more than just...[logic] gates to make a computer, though. You also need somewhere to store data (memory, RAM), and some way to connect all of the transcriptors and memory together (a bus). Fortunately, as we've covered a few times before, numerous research groups have successfully stored data in DNA--and Stanford has already developed an ingenious method of using the M13 virus to transmit strands of DNA between cells...In short, all of the building blocks of a biological computer are now in place.
This isn't to say that highly functional biological computers will arrive in short order, but we should certainly begin to see simple biological sensors that measure and record changes in a cell’s environment. Stanford has contributed the...gate design to the public domain, which should allow other research institutes, such as Harvard's Wyss Institute, to also begin work on the first biological computer.

The researchers have published some of their findings under a public domain license, in the hopes that other scientists will more easily be able to build off their discoveries.

Thursday, March 28, 2013

Robot Snake to the Rescue

One of the drawbacks of being a superhero is that the "super" in you is often confronted by the "super" in the opposing side.  Case in point: firefighters not only have to kill the fire, they also have to find the person or persons trapped by flames or collapsed from smoke inhalation.  And they need to do this without getting hurt themselves because, you know, there's going to be another fire tomorrow. Now they can rely on a slimy cylinder other than their hoses.

More than 2,000 years since the snake ruined Eden, he's come back to redeem himself in robotic form. - A.T.


Marketplace Tech for Wednesday, March 27, 2013

What this country needs is a good robot snake, right?

A robot snake -- just like the name suggests -- is a long, segmented metal rig. When you toss it, it automatically wraps around whatever it hits. If the snake metaphor bugs you, think of it as a robot grappling hook that can also shimmy up poles.

Howard Choset, a professor at the Robotics Institute at Carnegie Mellon in Pittsburgh, which developed the snake, calls it "perching behavior."

"We throw the robot in the air, we have a smart way of processing the sensors, and then on impact we can then command the robot curl around whatever it just hit," explains Choset.

Here's just one of the many applications for a snake robot that likes to hug things. A firefighter doing search and rescue might throw one over a wall of flame. But Howie Choset's team is thinking bigger.

"It is worth noting that the basic science behind that capability will apply to other mechanisms, not just flying snake robots," says Choset. "We have some ideas on how to control helicopters, how to better control satellites -- anything that has to orient and fly at the same time."

These things are formally called "hyper-redundant mechanisms," a class that includes not just snakes but also elephant nose robots and monkey tail robots.

See the robot snake in action in the video below:





Tuesday, March 26, 2013

Would You Choose This Superpower?


Harry Potter-Like Invisibility Cloak Works (in a Lab)


Photo credit: the piper, Somerville, MA
by Clara Moskowitz for LiveScience March 25, 2013

A miniature version of Harry Potter's invisibility cloak now exists, though it works only in microwave light, and not visible light, so far.

Still, it's a nifty trick, and the physicists who've created the new cloak say it's a step closer to realizing the kind of invisibility cloak that could hide a person in broad daylight.

The invention is made of a new kind of material called a metascreen, created from strips of copper tape attached to a flexible polycarbonate film. The copper strips are only 66 micrometers (66 millionths of a meter) thick, while the polycarbonate film is 100 micrometers thick, and the two are combined in a diagonal fishnet pattern.

The creation is a departure from previous attempts to create invisibility cloaks, which have aimed to bend light rays around an object so that they don't scatter, or reflect off it, a technique that relies on so-called bulk metamaterials. Instead, the new cloak uses a technique called mantle cloaking to cancel out light waves that bounce off the shielded object so that none survive to reach an observer's eye.

"When the scattered fields from the cloak and the object interfere, they cancel each other out and the overall effect is transparency and invisibility at all angles of observation," study co-author Andrea Alu, a physicist at the University of Texas at Austin, said in a statement.

In lab tests, Alu and his colleagues successfully hid a 7-inch-long (18 centimeters) cylindrical rod from view in microwave light. They said the same technology should be able to cloak oddly shaped and asymmetrical objects, too.

"The advantages of the mantle cloaking over existing techniques are its conformability, ease of manufacturing and improved bandwidth," Alu said. "We have shown that you don't need a bulk metamaterial to cancel the scattering from an object — a simple patterned surface that is conformal to the object may be sufficient and, in many regards, even better than a bulk metamaterial."

In principle, the same kind of cloak could be used to hide objects in the visible range of light, as well, though it may work only for teensy-tiny objects, at least at first.

"In fact, metascreens are easier to realize at visible frequencies than bulk metamaterials and this concept could put us closer to a practical realization," Alu said. "However, the size of the objects that can be efficiently cloaked with this method scales with the wavelength of operation, so when applied to optical frequencies we may be able to efficiently stop the scattering of micrometer-sized objects."

The invention isn't just a novelty to thrill Harry Potter fans and aspiring spies. The researchers say it could have practical applications down the line, such as in noninvasive sensing devices or in biomedical instruments. They described their device in a paper published in the March 26 issue of the New Journal of Physics.

Trouncing Thirst


World Water Day was March 22.  Let's be honest, you have never suffered a day of extreme thirst.  The kind that drives you mad.  Even Christ couldn't stand it.  The only words of physical complaint that He uttered on the cross were, "I thirst."

Millions of people have a daily struggle with gaining access to clean, safe water.  Not just safe in that the water's potable (not salinated or polluted), but safe in that they won't get raped on their way to the source.  Or caught in the crossfire of warring factions (see Darfur and Rwanda).

There are a lot of ways to suffer on this planet, but the ways are not infinite.  Someone has come along and invented something to help shorten the list. - A.T.

Dean Kamen's Slingshot Aims To Bring Fresh Water To The World
Posted on Slate: 03/25/2013


Photo from the film Slingshot by Paul Lazarus

A recent invention called the Slingshot could provide freshwater to those with some of the most limited access. Inventor Dean Kamen, best known as the man behind the Segway, has partnered with Coca-Cola to place his machines throughout developing nations in Africa and Central America in hopes of eliminating the millions of deaths each year related to waterborne disease.

More than 783 million people don't have access to clean water and 37 percent don't have access to sanitation facilities, facts highlighted by the UN during World Water Day last week.

The device can take any form of potentially contaminated liquid and distill it into something safe to drink -- by evaporating the water and then condensing the steam, leaving pathogens behind. Kamen even joked in a 2008 interview with Stephen Colbert that the Slingshot could sanitize a 50-gallon drum of urine.
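Some back-of-the-envelope arithmetic shows why distillation is energy-hungry, and why recapturing the heat of condensation matters; the Slingshot reportedly uses vapor-compression distillation for exactly that reason. The physical constants below are standard, but the recovery fraction is an illustrative assumption, not Kamen's spec:

```python
# Energy to distill water: heat it to 100 C, then boil it off.
SPECIFIC_HEAT = 4.186      # kJ per kg per degree C
LATENT_HEAT_VAPOR = 2257   # kJ per kg at 100 C

def distill_energy_kj(liters: float, start_temp_c: float = 20.0,
                      heat_recovered: float = 0.0) -> float:
    heating = SPECIFIC_HEAT * (100 - start_temp_c) * liters
    boiling = LATENT_HEAT_VAPOR * liters
    return (heating + boiling) * (1 - heat_recovered)

print(round(distill_energy_kj(1)))                       # ~2592 kJ, naive boil-off
print(round(distill_energy_kj(1, heat_recovered=0.95)))  # ~130 kJ with heat reuse
```

The latent heat of vaporization dominates, which is why a distiller that throws away its steam's heat could never run on village-scale power.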

A recent documentary short directed by Paul Lazarus and featuring Kamen won third place in GE's Focus Forward Film Festival, which highlights leading innovators around the world. The film was also screened at this year's Sundance Film Festival.

Watch the 3 minute documentary here:  http://focusforwardfilms.com/contest/16/slingshot-paul-lazarus

Sunday, March 24, 2013

Meditation Preliminaries

Preliminaries are as important to your meditation as stretching is to your workout.  The preliminaries announce to the mind that you're preparing to move from the mundane to the sacred.  Below is a detailed list of formal preliminaries.  You may move as slowly or quickly through them as you like.  You may alter the order to prevent boredom from seeping in.  It is best, though, not to skip past them before focusing on the object.



The Preliminaries to Meditation (in practice order)
from the Lam Rim Chen Mo by Je Tsongkapa


1.      Clean the meditation room and altar.  This becomes the cause to help create a paradise later.  It also helps to wake up, get the day going, and slow down the mind.

2.      Set up the altar and make offerings. Find offerings that you obtain without using any dishonest means. Put them forth in an attractive arrangement. If using water bowl offerings: fill and empty bowls from the left; empty bowls at night to signify that you are ready to die now; wipe bowls before filling.

3.      Physical prostrations (three)

4.      Sit on your cushion in the proper 8 point posture

5.      Focus on and count your breath (begin with exhalation; each exhalation and inhalation count as the same number; try to get to 10 without major distraction)

6.      Visualize the merit field. Start simply, by visualizing the silhouette of the root lama or holy being with whom you strongly identify, and then add features, color, and details later.  Then visualize the garden for gathering the power of good: the lineage lamas together with an inconceivable mass of Buddhas, bodhisattvas, listeners, self-made Buddhas, and protectors of the Dharma.

7.      Go for refuge

8.      Generate bodhicitta

9.      Invite and visualize a holy being to meditate with you

10.      Mental prostrations (think of and admire a good quality of the holy being)

11.      Mental offerings (of things you own, things that are owned by no one, or of your practice)

12.     Confession and purification (with the four forces:  refuge, regret, restraint, and restitution)

13.     Rejoicing (in your own good deeds and the good deeds of others)

14.     Turn the Wheel of Dharma by requesting teachings (formal and informal)

15.      Request teachers (the holy being and all those in whose company you spiritually benefit) to stay with you

16.      Don’t forget to always end your preliminaries with the dedication of the merits accumulated (dedicate to your own enlightenment or to the benefit of others).  Dedication works to multiply, fantastically, even the minor good deeds you have done in the acts of gathering, and cleaning, and multiplying. It also takes good deeds that are short-term, those that are going to give a good result and then disappear, and changes them so that they will never be exhausted.

At this point you can move, scratch, etc. Then return to your breath to regain concentration and when you are ready begin the main meditation. 

Close with requesting blessings (ask the holy being to increase your capabilities to do good with body, speech, and mind). Make an offering of a mandala. Then make a request that the Lamas bless your mind. After requesting, absorb the being through your crown into your heart.

Friday, March 22, 2013

Iron Lung


Breathing Lung Transplant At UCLA, First Ever In U.S.


Image by shawnzrossi
Posted on HuffingtonPost

In November, transplant patient Fernando Padilla, 57, got an early-morning call that a pair of donor lungs were available, UCLA reports.

But they weren't going to be delivered in the traditional icebox method. Instead, the doctors used an experimental device that kept the lungs breathing as they were transported from another state.

“They are as close as possible as they could be left in a live state,” Dr. Abbas Ardehali, director of the UCLA Transplant Program, told KTLA.

The lungs rose and fell in a box, as they were supplied with oxygen and a special solution supplemented with red-blood cells, NBC reports. Doctors described seeing the "breathing lungs" outside a body as "surreal."

This new technique will make for more successful lung transplant surgeries in the future, said Padilla's doctors. "Lungs are very sensitive and can easily be damaged during the donation process," Dr. Ardehali said on the UCLA site. "The cold-storage method does not allow for reconditioning of the lungs, but this promising technology enables us to potentially improve the function of the donor lungs before they are placed in the recipient."

Months later, the seven-hour transplant surgery has been deemed a success. It used to be a struggle for Padilla to take even a few steps, and he was permanently tethered to an oxygen tank, according to UCLA.


Now, he enjoys walking several miles a day with his wife and playing with his grandchildren.

"I'm feeling really good," Padilla said to NBC. "Getting stronger every day."


Thursday, March 21, 2013

Lead Man Walking


When I was a little girl, a few of my greatest "shoot the moon" wishes were for blind people to be able to see again, deaf people to hear again, and paraplegics to be able to get up and walk.  Now all of those wishes have come true.  It's time to shoot past the moon. -A.T.

Could a robotic exoskeleton turn you into a real-life Iron Man?
By Will Oremus on Slate Thursday, March 21, 2013

Robert Woo is outfitted with an exoskeleton device to walk in, made by Ekso Bionics.
Photo by Mario Tama/Getty Images

Six years ago, a 39-year-old architect named Robert Woo was working on the Goldman Sachs Tower in Lower Manhattan when a crane’s nylon sling snapped, dropping seven tons of metal studs onto his construction trailer. He survived, but he was paralyzed from the waist down. He never expected to walk again.

Last week, I watched as physical therapists at Mt. Sinai Medical Center in Manhattan helped Woo into a robotic exoskeleton. He braced himself for a moment with crutches. Then he stood up and strode out of the room, his carbon-fiber leg joints whirring with each step. “My kids call me Iron Man,” Woo told me with a grin. “They say, ‘Daddy, can you fly too?’”

He can’t. But don’t rule out the possibility.

Powered exoskeletons once looked like a technological dead end, like flying cars and hoverboards. It wasn’t that you couldn’t make one. It was that you couldn’t make it practical. Early attempts were absurdly bulky, inflexible, and needed too much electricity.

Those limitations haven’t gone away. But in the past 10 years, the state of the art has been advancing so fast that even Google can’t keep up. Ask the search engine, “How do exoskeletons work?” and the top result is an article from 2011 headlined, “How Exoskeletons Will Work.” As Woo can testify, the future tense is no longer necessary. The question now is, how widespread will they become—and what extraordinary powers will they give us?

Woo’s exoskeleton, a 50-pound aluminum-and-titanium suit that takes a step with the push of a button, is called the Ekso, and it’s the flagship model of the Richmond, Calif.-based startup Ekso Bionics. The company has already sold three dozen to hospitals and clinics in 10 countries and plans to start selling them to individuals next year. Last May, a paraplegic woman named Claire Lomas used a similar device created by Israel-based Argo Technologies to walk the London Marathon. Next year, Fortune 500 firm Parker Hannifin plans to release its own version, said to be the lightest yet. One physical therapist calls it the iPhone of exoskeletons.

Other companies, including defense giant Lockheed Martin, are already eyeing the next step: commercial exoskeletons and bodysuits aimed at enhancing the strength and endurance of nondisabled people. Using technology licensed from Ekso, Lockheed is working on a device called the HULC—the Human Universal Load Carrier—that would allow soldiers to tote 200 pounds for hours without tiring. Unlike the Ekso, the HULC won’t take your steps for you. Instead, it uses accelerometers and pressure sensors to read your intentions, then provides a mechanical assist, like a power steering system for your legs.
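A hypothetical sketch of that "power steering" loop: infer the gait phase from the sensors, estimate the wearer's own effort, and add back a fraction of it through the motors. The gain, thresholds, and sensor fields are invented; this is not Lockheed's controller:

```python
ASSIST_GAIN = 0.6  # fraction of the wearer's estimated torque the suit adds

def assist_torque_nm(foot_pressure_n: float, thigh_accel_g: float,
                     est_limb_torque_nm: float) -> float:
    """One control step: amplify the wearer's effort, never replace it."""
    stance = foot_pressure_n > 50.0       # foot loaded -> stance phase
    swinging = abs(thigh_accel_g) > 1.0   # leg accelerating -> swing phase
    if stance or swinging:
        return ASSIST_GAIN * est_limb_torque_nm
    return 0.0  # standing still: carry the load passively, add nothing

print(assist_torque_nm(620.0, 0.2, 40.0))  # stance phase -> 24.0 Nm of help
```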

In Italy, researchers have built a “body extender” robot aimed at allowing rescue workers to lift a wall from an earthquake survivor. Engineering students at Utah State concocted a vacuum-powered “Spider-Man suit” with which a soldier could scale the sheer side of a building. Ekso’s ExoHiker and ExoClimber target the outdoor recreation market. In Japan, a firm called Cyberdyne has developed what it calls Hybrid Assistive Limb technology to help home caregivers hoist an elderly patient from the bathtub with ease.

Cyberdyne says its name wasn’t intended as an allusion to the fictional firm that created the evil computer network in the Terminator films. Still, sci-fi reference points are inevitable for a technology that until recently existed only in movies and comic books. Iron Man is just one obvious touchstone. From Starship Troopers to Aliens to Avatar, powered armor has long been a staple of imaginary intergalactic conflicts. But the comparisons between Hollywood’s exoskeletons and their real-world counterparts are as inapt as they are inescapable. To companies like Ekso, the fictional technologies serve as both an inspiration and a frustratingly unrealistic benchmark.

“In the early days, the DARPA days, it really was science fiction,” says Russ Angold, who co-founded Ekso in 2005 to develop technology pioneered only a few years earlier by UC-Berkeley engineers working under a Department of Defense grant. The company was originally called Berkeley ExoWorks, but the concept of an exoskeleton was so foreign to the general public that they decided to change it to Berkeley Bionics, a reference to the TV show The Six Million Dollar Man.

Then came the Iron Man movies, bringing a flood of publicity for Ekso, which was portrayed in the press as a real-life analogue to Stark Industries. “Now people are finally starting to see utility in these devices,” Angold says. “The tone has changed from ‘it’s impossible’ to ‘it’s inevitable.’”

That was great for business, but it also led to some outsize expectations. Suffice it to say that Ekso’s suits don’t come equipped with an arc reactor. But the movie did get one thing right, Angold says: “It’s all about the power supply. Without that power supply, Iron Man doesn’t work.”

Power woes have in fact doomed some companies’ ambitious exoskeleton efforts. Raytheon’s ballyhooed Sarcos XOS 2 lost its funding in part because it had to be plugged in to work, rendering it useless in the field. One early Ekso idea assumed a gas-powered engine, Angold says. When he ran it by his brother, a Navy SEAL, his brother laughed at the implausibility. “That pushed us to find a fundamentally different way to power exoskeletons and make them more efficient.”

The solution: mimicking the structure and movement of the human body, which conserves energy remarkably well, especially when at rest. Minimalism is also key—the HULC forgoes luxuries like arms or headgear, which makes it not much of a Hulk by Hollywood standards. But it still has funding.

Ekso and its kin appear likely to succeed as medical devices. The barriers are convenience and cost—the Ekso will start at a hefty $110,000—but those seem surmountable. Argo’s version is under $70,000, and Parker Hannifin is aiming for a similar price point and a weight of just 27 pounds. All of those figures should come down over time. And the inconvenience is a small price to pay for people like Woo to be able to stand up and walk across a room again.

Exoskeletons’ future in war and the workplace is less secure. Lockheed has yet to find a killer military app for the HULC, whose strength enhancements come at the cost of agility. The most plausible use for the time being is to help people carry or unload heavy equipment at a forward operating base where you can’t drive a truck or a forklift.

Likewise, anyone expecting Utah State’s Spider-Man suit to give them powers akin to those of the comic book superhero is in for some sore disappointment. It’s useful for one thing: climbing a wall, which it does loudly and slowly, making it less than ideal for a covert operation. The same goes for suits that let you race a fighter jet, beat up a grizzly bear, or even sense danger from behind. You wouldn’t want to walk around in any of them.

How long will it be, I asked Angold, before we have a more Hollywood-ready exoskeleton, one that lets you run faster, jump higher, and move boulders, all while fitting comfortably under your clothes for daily wear? He thought for a moment. “You know, we could make exoskeletons today to enable people to run faster. We could make ones today that fit under your clothes. All these things, we can do today. But I don’t think you can do all of them at the same time.”

Lead Man Walking (video)

Wednesday, March 20, 2013


What Comes After the Silicon Computer Chip?

From Zócalo Public Square

Will quantum computers change everything?  Will we see mind-blowing medical breakthroughs?  Check out what some engineering pioneers are predicting for our future. -A.T.

The silicon computer chip is reaching the limits of Moore’s Law, Intel co-founder Gordon E. Moore’s observation that the number of transistors on chips would double every two years. Moore’s Law is one of the reasons why processing speed—and computer capabilities in general—have increased exponentially over the past few decades. But just because silicon is at its outer limits doesn’t mean that advances in computer hardware technology are going to stop; in fact, it might mean a whole new wave of innovation. In advance of former Intel CEO Craig R. Barrett and Arizona State University President Michael M. Crow’s Zócalo event on the future of nanotechnology, we asked engineers and people who think about computing, “What comes after the computer chip?”


SETH LLOYD
Quantum computers will change everything

In 1965, Gordon E. Moore, the co-founder of Intel, noted that the number of components in integrated circuits had doubled every year since their inception in 1958 and predicted that this annual doubling would continue for at least another 10 years. Since that time, the power of computers has doubled every year or year and a half, yielding computers that are millions of times more powerful than their ancestors of a half century ago. The result is the digital revolution that we see around us, including the Internet, iPhones, social networks, and spam.
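The arithmetic behind "millions of times more powerful" is easy to check: growth is 2 raised to the number of doublings, that is, years elapsed divided by doubling time.

```python
# Sanity-checking the essay's claim under its own assumptions:
# doubling every 12 to 18 months over roughly half a century.
def growth_factor(years: float, doubling_time_years: float) -> float:
    return 2 ** (years / doubling_time_years)

print(f"{growth_factor(50, 1.0):.1e}")  # 1.1e+15 with annual doubling
print(f"{growth_factor(50, 1.5):.1e}")  # 1.1e+10 doubling every 18 months
```

Either way, the factor comfortably exceeds "millions."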

Since Moore’s observation, the primary method of doubling has been to make the wires and transistors that transmit and process information smaller and smaller: The explosion in computing power comes from an implosion in the size of computing components. This implosion can’t go on forever, though, at least given the laws of physics as we know them. If we cram more and more, smaller and smaller, faster and faster components onto computer chips, they generate more and more heat. Eventually, the chip will melt. At the same time, basic semiconductor physics makes it difficult to keep increasing the clock speed of computer chips ever further into the gigahertz region. At some point—maybe even in the next decade or so—it will become hard to make semiconductor computer chips more powerful by further miniaturization.

At that point, the most important socio-economic event that will occur is that software designers will finally have to earn their pay. Not that they are not doing good work now—merely that they will have to use the resources available rather than simply assuming that computer power will have doubled by the time their software comes to market, thereby absorbing the additional slop in their design. Enforced computational parsimony might not be a bad thing. The luxury of continually expanding computer power can lead to design bloat. Is Microsoft Word today really better than Word in 1995? It is certainly more obnoxious about changing whatever word you are trying to write into the word it thinks you want to write.

The inevitable end to Moore’s Law for computer chips does not imply that the exponential increase in information processing power will end with it, however. The laws of physics support much faster and more precise information processing. For a decade and a half, my colleagues and I have been building prototype quantum computers that process information at the scale of atoms and elementary particles. Though tiny and computationally puny when compared with conventional chips, these quantum computers show that it is possible to represent and process information at scales far beyond what can be done in a semiconductor circuit. Moreover, quantum computers process information using weird and counterintuitive features of quantum mechanics that allow even these small, weak machines to perform tasks—such as simulating other quantum systems—that even the most powerful classical supercomputer cannot do.
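
For readers who want a concrete picture, here is a toy classical simulation of the simplest quantum operation: putting one qubit into an equal superposition and sampling measurements. It also hints at why large quantum systems overwhelm classical machines, since simulating n qubits this way requires tracking 2^n amplitudes:

```python
import numpy as np

# Toy classical simulation of one qubit -- not a quantum computer, just
# an illustration of the arithmetic a quantum computer does natively.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate
state = H @ np.array([1, 0])                   # start in |0>, apply H
probs = np.abs(state) ** 2                     # Born rule: p = |amplitude|^2
samples = np.random.choice([0, 1], size=1000, p=probs)
print(f"measured 0 in {np.mean(samples == 0):.0%} of 1000 shots")
```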

Computation is not the only kind of societally relevant information processing that is improving exponentially. Dave Wineland of the National Institute of Standards and Technology shared the Nobel Prize in Physics this year in part for his work on quantum computing, but also in part for his use of funky quantum effects such as entanglement to construct the world’s most accurate atomic clocks. Conventional atomic clocks make up the guts of the global positioning system. Wineland’s novel clocks based on quantum information processing techniques have the potential to make GPS thousands of times more precise. Not just atomic clocks, but essentially every technology of precision measurement and control is advancing with its own “personal Moore’s Law.” The result is novel and startling developments in nanotechnology, medical devices and procedures, and personal hardware, including every known way of connecting to the Internet.

Finally, if we look at the ultimate limits to information processing, the laws of quantum mechanics and elementary particles allow much more extreme computation than could ever be found on a computer chip. Atomic scale computation? How about quark-scale computation? The ultimate level of miniaturization allowed by physical law is apparently the Planck scale, a billion billion billion times smaller than the current computational scale. And why just make things smaller—why not build larger computers? Why not enlist planets, stars, and galaxies in a universal computation? At the current rate of progress of Moore’s Law, in 400 years, the entire universe will be one giant quantum computer. Just don’t ask what the operating system will be.
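
Both of those numbers survive a rough check (feature size and Planck length below are rounded, and the once-a-year doubling is Lloyd’s optimistic rate):

```python
# Rough figures assumed: ~16 nm transistor features, Planck length ~1.6e-35 m.
feature_size = 16e-9
planck_length = 1.6e-35
print(f"feature / Planck ratio: {feature_size / planck_length:.0e}")
# ~1e27 -- "a billion billion billion"

# Doubling once a year for 400 years:
print(f"2^400 = {2.0 ** 400:.1e}")   # ~2.6e120, astronomically large
```

That 10^120 figure is, as it happens, in the neighborhood of what Lloyd has elsewhere estimated as the total computational capacity of the universe.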


Seth Lloyd is professor of mechanical engineering at MIT. His work focuses on the role of information processing in the universe, including quantum computation and complex systems. He is the author of Programming the Universe.


SETHURAMAN “PANCH” PANCHANATHAN
Better brain-computer interfaces

The evolutionary path of computing will no doubt yield ever-increasing processing capacity through higher-density, lower-power circuits, miniaturization, parallelization, and alternative forms of computing (such as quantum computing). These will address the demands of large-scale and big-data processing as well as the massive adoption of multimedia and multimodal computing in various applications.

However, future computing devices will have to shift from data- and information-level processing to higher levels of cognitive processing. For example, computing devices will be able to understand subtle cues such as intent in human communication, not just explicit cues such as prosody, expressions, and emotions. This will usher in a new era in computing in which the familiar paradigm of humans interacting with computers explicitly, at ever higher levels of sophistication, will be augmented by devices that also interact implicitly with humans. This “person-centered” engagement, in which man and machine work as collaborative partners, will allow for a range of tasks, from simple to complex. Computing devices on-body, in-body, and in the environment, as well as next-generation applications, will require the user to engage in a symbiotic relationship with the devices, termed “coaptive computing.”

Computing devices (like prosthetic devices) working coaptively with the user will assist her in tasks predetermined for their role and purpose, and will even learn explicitly through her instructions. More importantly, devices need to learn through implicit observation of the interactions between the user and the environment, thereby relieving the user of the usual “mundane” tasks. This will let users extend their capability and function and engage at higher levels of cognition, which has not been possible thus far because of our limited capacity for multisensory perception and cognition.

For example, the user may recall only a few encounters with people and things at an event, simply because she had a focused engagement with those particular people and objects. Future computing devices, however, could record essentially all of her encounters in a “life log,” along with their context, and then prompt or inform her as appropriate in subsequent interactions. As coaption becomes more pervasive, brain-computer interfaces will increasingly become a reality.
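
To make the idea concrete, here is a toy sketch of what such a life log might look like as a data structure. Every name here is invented for illustration:

```python
from dataclasses import dataclass, field
from datetime import datetime

# Each encounter is stored with its context, so the device can later
# recall what the user herself never consciously attended to.
@dataclass
class Encounter:
    when: datetime
    who_or_what: str
    context: str

@dataclass
class LifeLog:
    encounters: list = field(default_factory=list)

    def record(self, who_or_what, context):
        self.encounters.append(Encounter(datetime.now(), who_or_what, context))

    def recall(self, keyword):
        # Return every logged encounter matching a cue, with full context.
        return [e for e in self.encounters
                if keyword.lower() in (e.who_or_what + e.context).lower()]

log = LifeLog()
log.record("Dr. Chen", "discussed sensor calibration at the robotics demo")
log.record("booth 14", "prototype haptic glove, asked for a follow-up email")
print(log.recall("glove"))
```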

No longer will we think of a computer chip as just a physical entity, but instead as a ubiquitous device conjoined and operating seamlessly with humans as partners in everyday activities.

Sethuraman “Panch” Panchanathan is the senior vice president of the Office of Knowledge Enterprise Development at Arizona State University. He is also a foundation chair in Computing and Informatics and director of the Center for Cognitive Ubiquitous Computing. Dr. Panchanathan was the founding director of the School of Computing and Informatics and was instrumental in founding the Biomedical Informatics Department at ASU.


KONSTANTIN KAKAES
The end of the “La-Z-Boy era” of sequential programming

The important question for the end user is not what comes after the chip, but how chips can be designed and integrated with sufficient ingenuity that processing speed keeps improving even as physics constrains the speed and size of circuits.

Ever since John von Neumann first enunciated the architecture of the modern computer in 1945, processors and memory have gotten faster more quickly than the connection between them, leading to an ever-worsening “von Neumann bottleneck” at the link between memory and the CPU (central processing unit).
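
You can feel that bottleneck even from high-level code. The snippet below does identical arithmetic twice; only the order in which operands are fetched from memory differs, and the cache-hostile order leaves the processor waiting. Exact timings vary by machine, but the ratio is the point:

```python
import time
import numpy as np

# Identical arithmetic; the only difference is memory access order.
rng = np.random.default_rng(0)
data = np.ones(20_000_000)                  # ~160 MB of doubles
sequential = np.arange(20_000_000)          # cache-friendly access order
scattered = rng.permutation(20_000_000)     # cache-hostile access order

t0 = time.perf_counter(); data[sequential].sum(); t1 = time.perf_counter()
data[scattered].sum();                            t2 = time.perf_counter()
print(f"sequential: {t1 - t0:.3f}s   scattered: {t2 - t1:.3f}s")
```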

Because chip features can no longer simply be made smaller, the only way forward is through increasing parallelism—doing many computations at once instead of, as in a classic von Neumann architecture, one computation at a time. (Each computation is essentially a logical operation like “AND” or “OR” executed in the correct order by hardware—it’s the basis for how a computer functions.)

Though the first multiprocessor architecture debuted in 1961, the practice didn’t become mainstream until the mid-’00s, when chip companies started placing multiple processing units or “cores” on the same microprocessor. Chips often have two or four cores today. Within a decade, a chip could have hundreds or even thousands of cores. A laptop or mobile device might have one chip with many cores, while supercomputers will be composed (as they are today) of many such chips in parallel, so that a single computer will have as many as a billion processors before the end of the decade, according to Peter Ungaro, the head of supercomputing company Cray.
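
The shape of that parallel future is already visible from ordinary code. Here is a minimal sketch using Python’s standard multiprocessing module, splitting one job into independent chunks so every core works at once:

```python
from multiprocessing import Pool
import os

# Count primes in an interval -- an embarrassingly parallel job.
def count_primes(bounds):
    lo, hi = bounds
    def is_prime(n):
        return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))
    return sum(is_prime(n) for n in range(lo, hi))

if __name__ == "__main__":
    chunks = [(2, 25_001), (25_001, 50_001),
              (50_001, 75_001), (75_001, 100_001)]
    with Pool(os.cpu_count()) as pool:          # one worker per core
        total = sum(pool.map(count_primes, chunks))
    print(f"primes up to 100,000: {total}")     # 9592
```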

Figuring out how best to interconnect both many cores on a single chip and many chips to one another is a major challenge. So is how to move a computation forward when it is no longer possible to synchronize all of a chip’s processors with a signal from a central clock, as is done today. New approaches like “transactional memory” will allow different processes to share memory efficiently without introducing errors.
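
Real transactional memory lives in hardware or the language runtime, but its optimistic discipline can be sketched in a few lines: compute without holding a lock, then commit only if nothing changed underneath you, retrying on conflict. The class below is invented purely to illustrate that retry loop:

```python
import threading

# Toy sketch of optimistic concurrency, not a real STM implementation.
class VersionedCell:
    def __init__(self, value):
        self.value, self.version = value, 0
        self._commit_lock = threading.Lock()   # held only at commit time

    def transact(self, update):
        while True:
            snapshot, seen = self.value, self.version
            new_value = update(snapshot)       # compute outside any lock
            with self._commit_lock:
                if self.version == seen:       # no conflicting writer
                    self.value, self.version = new_value, seen + 1
                    return new_value
            # version moved: another thread committed first -- retry

counter = VersionedCell(0)
threads = [threading.Thread(target=lambda: [counter.transact(lambda v: v + 1)
                                            for _ in range(1000)])
           for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(counter.value)   # 4000: every increment committed exactly once
```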

The overall problem is so difficult because the hardware is only as good as the software, and the software only as good as the hardware. One way around this chicken-and-egg problem will be “autotuning” systems that will replace traditional compilers. Compilers translate a program in a high-level language into a specific set of low-level instructions. Autotuning will instead try out lots of different possible translations of a high-level program to see which works best.
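
A toy version of the autotuning idea: instead of emitting one fixed translation the way a compiler does, time several candidate implementations of the same operation and keep whichever runs fastest on this particular machine:

```python
import timeit

# Two candidate implementations of the same operation.
data = list(range(100_000))

def builtin_sum():
    return sum(data)

def manual_loop():
    total = 0
    for x in data:
        total += x
    return total

# The "autotuner": benchmark each candidate, keep the winner.
timings = {f.__name__: timeit.timeit(f, number=100)
           for f in (builtin_sum, manual_loop)}
best = min(timings, key=timings.get)
print(f"autotuner picks: {best}   timings: {timings}")
```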

Autotuning and transactional memory are just two of many new techniques being developed by computer scientists to take advantage of parallelism. There is no question the new techniques are harder for programmers. One group at Berkeley calls it the end of the “La-Z-Boy era” of sequential programming.

Konstantin Kakaes, a former Economist correspondent in Mexico, is a Schwartz Fellow at The New America Foundation in Washington, D.C.


STEPHEN GOODNICK
Biology-inspired computing

We are rapidly reaching the end of the doubling of transistor density every two years described by Moore’s Law, as we are literally running out of atoms with which to make individual transistors. Recently, nanotechnology has led to many new and exciting materials—such as semiconductor nanowires, graphene, and carbon nanotubes. But as long as computing is based on digital logic (ones or zeros) moving electronic charge around to turn on and off individual transistors, these new materials will only extend Moore’s Law two or three more generations. The fundamental size limits still exist, not to mention limitations due to heat generation. Some new paradigm of non-charge-based computing may emerge that could, for example, theoretically use the spin of an electron or a nucleus to store or encode information. However, there are many obstacles to creating a viable, scalable technology based on “spintronics” that can keep us on the path of Moore’s Law.

It’s important to remember, though, that Moore’s Law can be viewed not merely as a doubling of transistor density every two years, but as a doubling of information processing capability as well. While bare number-crunching operations are most efficiently performed using digital logic, new developments in digital imagery, video, speech recognition, artificial intelligence, etc., require processing vast amounts of data. Nature has much to teach us about efficiently processing vast amounts of sensory information in a highly parallel, analog fashion, as the brain does, which is fundamentally different from conventional digital computation. Such “neuromorphic” computing systems, which mimic neural-biological functions, may be more efficiently realized with new materials and devices that are not presently on the radar screen.
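
The flavor of that neuromorphic style can be sketched with the textbook leaky integrate-and-fire neuron: a membrane voltage leaks toward rest, integrates its input current, and emits a spike when it crosses a threshold. The parameters below are illustrative, not taken from any particular chip:

```python
import numpy as np

# Textbook leaky integrate-and-fire neuron, simulated in 1 ms steps.
def lif_spikes(current, dt=1e-3, tau=0.02, v_thresh=1.0, v_reset=0.0):
    v, spikes = 0.0, []
    for t, i_in in enumerate(current):
        v += dt * (-v / tau + i_in)        # leak + integrate
        if v >= v_thresh:                  # fire...
            spikes.append(t * dt)
            v = v_reset                    # ...and reset
    return spikes

rng = np.random.default_rng(0)
noisy_input = 60 + 20 * rng.standard_normal(1000)   # 1 s of noisy drive
print(f"{len(lif_spikes(noisy_input))} spikes in 1 s of simulated input")
```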

Similarly, quantum computing may offer a way of addressing specialized problems involving large amounts of parallel information processing. The most likely scenario is that the computer chip of the future will marry a version of our current digital technology to highly parallel, specialized architectures inspired by biological systems, with each performing what it does best. New computational paradigms and architectures together with improved materials and device technologies will likely allow a continued doubling of our information processing capability long after we reach the limits of scaling of conventional transistors.

Stephen Goodnick is a professor of electrical engineering at Arizona State University, the deputy director of ASU Lightworks, and the president of the IEEE Nanotechnology Council.



H.-S. PHILIP WONG
Mind-blowing medical breakthroughs

The 10 fingers, the abacus, mechanical cash registers, vacuum tube-based ENIAC, the transistor, the integrated circuit, the billion-transistor “computer chip” … then what? I suppose that was the line of thinking when this question was posed. Rather than fixating on whether a new “transistor” or a new “integrated circuit” will be invented, it is useful to focus on two key observations: “It will be a long time before we reach the fundamental limits of computing,” and “The technologies we use to build the computer chip will impact many fields outside of computing.”

Advances in computing are reined in by the energy consumption of the computer chip. Today’s transistor consumes in excess of 1,000 times more energy than the kT·ln(2) limit for erasing one bit of information per logical step of computing. Reversible computing, as described by physicist Rolf Landauer and computer scientist Charles Bennett, will reach below the kT·ln(2) limit once a practical implementation is devised. There is plenty of room at the bottom! We will continue to get more computational power for a smaller amount of energy consumed.
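
Wong’s figure is easy to verify. The Landauer limit works out to about 3 zeptojoules per erased bit at room temperature; the femtojoule switching energy below is an assumed ballpark for a 2013-era transistor, not a measured value:

```python
import math

k_boltzmann = 1.380649e-23        # Boltzmann constant, J/K
T = 300                           # room temperature, kelvin
landauer = k_boltzmann * T * math.log(2)
print(f"kT ln(2) at 300 K: {landauer:.2e} J")          # ~2.87e-21 J

transistor_energy = 1e-15         # ~1 femtojoule: an assumed ballpark
print(f"ratio: {transistor_energy / landauer:,.0f}x")  # well over 1,000x
```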

Now that I have put to rest the inkling that there may be an end to the rapid progress we expect from the computer chip, let’s talk about what else the “computer chip” will bring us in addition to computing and information technology. The semiconductor technology and design methodology employed to fabricate the computer chip have already wielded their power in other fields. Tiny cameras in cellphones that allow us to take pictures wherever we go, digitally projected 3-D movies, and LED lighting that is substantially more energy efficient than the incandescent light bulb are all examples of “computer chip” technologies that have already made an impact on society. Enabling technologies that transform the field of biomedical research are in the offing.

The cost of sequencing a genome has dropped faster than Moore’s Law; the technique is based on technologies borrowed from computer-chip manufacturing. Nanofabrication techniques developed for the semiconductor industry have enabled massive probing of neural signals, which will eventually lead to a sea change in our understanding of neuroscience. Nanofabricated sensors and actuators, in the style of Fantastic Voyage, are now beginning to be developed and are not completely science fiction. Emulation of the brain, whether by brute-force supercomputers or by innovative nanoscale electronic devices, is becoming possible and will reach human scale if the present rate of progress continues.

I am optimistic that what we have experienced in technological progress so far is just the beginning. The societal impact of the “computer chip” and the basic technologies that are the foundations of the “computer chip” will advance knowledge in other fields.

H.-S. Philip Wong is the Willard R. and Inez Kerr Bell Professor in the School of Engineering at Stanford University. He joined Stanford University as a professor of electrical engineering in 2004 after a 16-year research career on the “computer chip” with the IBM T.J. Watson Research Center. He is the co-author (with Deji Akinwande) of the book Carbon Nanotube and Graphene Device Physics.