Friday, March 22, 2013

Iron Lung


Breathing Lung Transplant At UCLA, First Ever In U.S.


Image by shawnzrossi
Posted on HuffingtonPost

In November, transplant patient Fernando Padilla, 57, got an early-morning call that a pair of donor lungs were available, UCLA reports.

But they weren't going to be delivered by the traditional icebox method. Instead, the doctors used an experimental device that kept the lungs breathing as they were transported from another state.

“They are as close as possible as they could be left in a live state,” Dr. Abbas Ardehali, director of the UCLA Transplant Program, told KTLA.

The lungs rose and fell in a box, as they were supplied with oxygen and a special solution supplemented with red-blood cells, NBC reports. Doctors described seeing the "breathing lungs" outside a body as "surreal."

This new technique will make for more successful lung transplant surgeries in the future, said Padilla's doctors. "Lungs are very sensitive and can easily be damaged during the donation process," Dr. Ardehali said on the UCLA site. "The cold-storage method does not allow for reconditioning of the lungs, but this promising technology enables us to potentially improve the function of the donor lungs before they are placed in the recipient."

Months later, the seven-hour transplant surgery has been deemed a success. It used to be a struggle for Padilla to take even a few steps, and he was permanently tethered to an oxygen tank, according to UCLA.


Now, he enjoys walking several miles a day with his wife and playing with his grandchildren.

"I'm feeling really good," Padilla said to NBC. "Getting stronger every day."


Thursday, March 21, 2013

Lead Man Walking


When I was a little girl, a few of my greatest "shoot the moon" wishes were for blind people to be able to see again, deaf people to hear again, and paraplegics to be able to get up and walk.  Now all of those wishes have come true.  It's time to shoot past the moon. -A.T.

Could a robotic exoskeleton turn you into a real-life Iron Man?
By Will Oremus on Slate Thursday, March 21, 2013

Robert Woo is outfitted with an exoskeleton device made by Ekso Bionics that enables him to walk.
Photo by Mario Tama/Getty Images

Six years ago, a 39-year-old architect named Robert Woo was working on the Goldman Sachs Tower in Lower Manhattan when a crane’s nylon sling snapped, dropping seven tons of metal studs onto his construction trailer. He survived, but he was paralyzed from the waist down. He never expected to walk again.

Last week, I watched as physical therapists at Mt. Sinai Medical Center in Manhattan helped Woo into a robotic exoskeleton. He braced himself for a moment with crutches. Then he stood up and strode out of the room, his carbon-fiber leg joints whirring with each step. “My kids call me Iron Man,” Woo told me with a grin. “They say, ‘Daddy, can you fly too?’”

He can’t. But don’t rule out the possibility.

Powered exoskeletons once looked like a technological dead end, like flying cars and hoverboards. It wasn’t that you couldn’t make one. It was that you couldn’t make it practical. Early attempts were absurdly bulky and inflexible, and they needed too much electricity.

Those limitations haven’t gone away. But in the past 10 years, the state of the art has been advancing so fast that even Google can’t keep up. Ask the search engine, “How do exoskeletons work?” and the top result is an article from 2011 headlined, “How Exoskeletons Will Work.” As Woo can testify, the future tense is no longer necessary. The question now is, how widespread will they become—and what extraordinary powers will they give us?

Woo’s exoskeleton, a 50-pound aluminum-and-titanium suit that takes a step with the push of a button, is called the Ekso, and it’s the flagship model of the Richmond, Calif.-based startup Ekso Bionics. The company has already sold three dozen to hospitals and clinics in 10 countries and plans to start selling them to individuals next year. Last May, a paraplegic woman named Claire Lomas used a similar device created by Israel-based Argo Technologies to walk the London Marathon. Next year, Fortune 500 firm Parker Hannifin plans to release its own version, said to be the lightest yet. One physical therapist calls it the iPhone of exoskeletons.

Other companies, including defense giant Lockheed Martin, are already eyeing the next step: commercial exoskeletons and bodysuits aimed at enhancing the strength and endurance of nondisabled people. Using technology licensed from Ekso, Lockheed is working on a device called the HULC—the Human Universal Load Carrier—that would allow soldiers to tote 200 pounds for hours without tiring. Unlike the Ekso, the HULC won’t take your steps for you. Instead, it uses accelerometers and pressure sensors to read your intentions, then provides a mechanical assist, like a power steering system for your legs.

In Italy, researchers have built a “body extender” robot aimed at allowing rescue workers to lift a wall from an earthquake survivor. Engineering students at Utah State concocted a vacuum-powered “Spider-Man suit” with which a soldier could scale the sheer side of a building. Ekso’s ExoHiker and ExoClimber target the outdoor recreation market. In Japan, a firm called Cyberdyne has developed what it calls Hybrid Assistive Limb technology to help home caregivers hoist an elderly patient from the bathtub with ease.

Cyberdyne says its name wasn’t intended as an allusion to the fictional firm that created the evil computer network in the Terminator films. Still, sci-fi reference points are inevitable for a technology that until recently existed only in movies and comic books. Iron Man is just one obvious touchstone. From Starship Troopers to Aliens to Avatar, powered armor has long been a staple of imaginary intergalactic conflicts. But the comparisons between Hollywood’s exoskeletons and their real-world counterparts are as inapt as they are inescapable. To companies like Ekso, the fictional technologies serve as both an inspiration and a frustratingly unrealistic benchmark.

“In the early days, the DARPA days, it really was science fiction,” says Russ Angold, who co-founded Ekso in 2005 to develop technology pioneered only a few years earlier by UC-Berkeley engineers working under a Department of Defense grant. The company was originally called Berkeley ExoWorks, but the concept of an exoskeleton was so foreign to the general public that they decided to change the name to Berkeley Bionics, a reference to the TV show The Six Million Dollar Man.

Then came the Iron Man movies, bringing a flood of publicity for Ekso, which was portrayed in the press as a real-life analogue to Stark Industries. “Now people are finally starting to see utility in these devices,” Angold says. “The tone has changed from ‘it’s impossible’ to ‘it’s inevitable.’”

That was great for business, but it also led to some outsize expectations. Suffice it to say that Ekso’s suits don’t come equipped with an arc reactor. But the movie did get one thing right, Angold says: “It’s all about the power supply. Without that power supply, Iron Man doesn’t work.”

Power woes have in fact doomed some companies’ ambitious exoskeleton efforts. Raytheon’s ballyhooed Sarcos XOS 2 lost its funding in part because it had to be plugged in to work, rendering it useless in the field. One early Ekso idea assumed a gas-powered engine, Angold says. When he ran it by his brother, a Navy SEAL, his brother laughed at the implausibility. “That pushed us to find a fundamentally different way to power exoskeletons and make them more efficient.”

The solution: mimicking the structure and movement of the human body, which conserves energy remarkably well, especially when at rest. Minimalism is also key—the HULC forgoes luxuries like arms or headgear, which makes it not much of a Hulk by Hollywood standards. But it still has funding.

Ekso and its kin appear likely to succeed as medical devices. The barriers are convenience and cost—the Ekso will start at a hefty $110,000—but those seem surmountable. Argo’s version is under $70,000, and Parker Hannifin is aiming for a similar price point and a weight of just 27 pounds. All of those figures should come down over time. And the inconvenience is a small price to pay for people like Woo to be able to stand up and walk across a room again.

Exoskeletons’ future in war and the workplace is less secure. Lockheed has yet to find a killer military app for the HULC, whose strength enhancements come at the cost of agility. The most plausible use for the time being is to help people carry or unload heavy equipment at a forward operating base where you can’t drive a truck or a forklift.

Likewise, anyone expecting Utah State’s Spider-Man suit to give them powers akin to those of the comic book superhero is in for some sore disappointment. It’s useful for one thing: climbing a wall, which it does loudly and slowly, making it less than ideal for a covert operation. The same goes for suits that let you race a fighter jet, beat up a grizzly bear, or even sense danger from behind. You wouldn’t want to walk around in any of them.

How long will it be, I asked Angold, before we have a more Hollywood-ready exoskeleton, one that lets you run faster, jump higher, and move boulders, all while fitting comfortably under your clothes for daily wear? He thought for a moment. “You know, we could make exoskeletons today to enable people to run faster. We could make ones today that fit under your clothes. All these things, we can do today. But I don’t think you can do all of them at the same time.”

Lead Man Walking (video)

Wednesday, March 20, 2013


What Comes After the Silicon Computer Chip?

From Zocalo public square

Will quantum computers change everything?  Will we see mind-blowing medical breakthroughs?  Check out what some engineering pioneers are predicting for our future. -A.T.

The silicon computer chip is reaching the limits of Moore’s Law, Intel co-founder Gordon E. Moore’s observation that the number of transistors on chips would double every two years. Moore’s Law is one of the reasons why processing speed—and computer capabilities in general—have increased exponentially over the past few decades. But just because silicon is at its outer limits doesn’t mean that advances in computer hardware technology are going to stop; in fact, it might mean a whole new wave of innovation. In advance of former Intel CEO Craig R. Barrett and Arizona State University President Michael M. Crow’s Zócalo event on the future of nanotechnology, we asked engineers and people who think about computing, “What comes after the computer chip?”


SETH LLOYD
Quantum computers will change everything

In 1965, Gordon E. Moore, the co-founder of Intel, noted that the number of components in integrated circuits had doubled every year since their inception in 1958 and predicted that this annual doubling would continue for at least another 10 years. Since that time, the power of computers has doubled every year or year and a half, yielding computers that are millions of times more powerful than their ancestors of a half century ago. The result is the digital revolution that we see around us, including the Internet, iPhones, social networks, and spam.
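That "millions of times more powerful" figure checks out as a back-of-envelope calculation. Here is a minimal sketch in Python, assuming one doubling every 18 months (the slower end of the essay's range) counted from Moore's 1965 observation; the exact start and end years are assumptions for illustration.

```python
# Back-of-envelope: compound doubling of computer power since 1965.
# Assumes one doubling every 1.5 years.
years = 2013 - 1965          # roughly "a half century" of progress
doublings = years / 1.5
growth = 2 ** doublings

print(f"{doublings:.0f} doublings -> about {growth:.1e}x more powerful")
# ~2^32, i.e. on the order of billions; "millions of times" is conservative.
```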

Since Moore’s observation, the primary method of doubling has been to make the wires and transistors that transmit and process information smaller and smaller: The explosion in computing power comes from an implosion in the size of computing components. This implosion can’t go on forever, though, at least given the laws of physics as we know them. If we cram more and more, smaller and smaller, faster and faster components onto computer chips, they generate more and more heat. Eventually, the chip will melt. At the same time, basic semiconductor physics makes it difficult to keep increasing the clock speed of computer chips ever further into the gigahertz region. At some point—maybe even in the next decade or so—it will become hard to make semiconductor computer chips more powerful by further miniaturization.

At that point, the most important socio-economic event that will occur is that software designers will finally have to earn their pay. Not that they are not doing good work now—merely that they will have to use the resources available rather than simply assuming that computer power will have doubled by the time their software comes to market, thereby supporting the additional slop in their design. Enforced computational parsimony might not be a bad thing. The luxury of continual expansion of computer power can lead to design bloat. Is Microsoft Word today really better than Word in 1995? It is certainly more obnoxious about changing whatever word you are trying to write into the word it thinks you want to write.

The inevitable end to Moore’s Law for computer chips does not imply that the exponential increase in information processing power will end with it, however. The laws of physics support much faster and more precise information processing. For a decade and a half, my colleagues and I have been building prototype quantum computers that process information at the scale of atoms and elementary particles. Though tiny and computationally puny when compared with conventional chips, these quantum computers show that it is possible to represent and process information at scales far beyond what can be done in a semiconductor circuit. Moreover, quantum computers process information using weird and counterintuitive features of quantum mechanics that allow even these small, weak machines to perform tasks—such as simulating other quantum systems—that even the most powerful classical supercomputer cannot do.

Computation is not the only kind of societally relevant information processing that is improving exponentially. Dave Wineland of the National Institute of Standards and Technology shared the Nobel Prize in Physics this year in part for his work on quantum computing, but also in part for his use of funky quantum effects such as entanglement to construct the world’s most accurate atomic clocks. Conventional atomic clocks make up the guts of the global positioning system. Wineland’s novel clocks based on quantum information processing techniques have the potential to make GPS thousands of times more precise. Not just atomic clocks, but essentially every technology of precision measurement and control is advancing with its own “personal Moore’s Law.” The result is novel and startling developments in nanotechnology, medical devices and procedures, and personal hardware, including every known way of connecting to the Internet.

Finally, if we look at the ultimate limits to information processing, the laws of quantum mechanics and elementary particles allow much more extreme computation than could ever be found on a computer chip. Atomic scale computation? How about quark-scale computation? The ultimate level of miniaturization allowed by physical law is apparently the Planck scale, a billion billion billion times smaller than the current computational scale. And why just make things smaller—why not build larger computers? Why not enlist planets, stars, and galaxies in a universal computation? At the current rate of progress of Moore’s Law, in 400 years, the entire universe will be one giant quantum computer. Just don’t ask what the operating system will be.
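For a rough sense of scale on that closing claim, here is a hedged back-of-envelope in the same spirit; the 1.5-year doubling period and the ~10^80 atoms figure are standard estimates, not numbers taken from the essay.

```python
# How much growth would 400 more years of Moore's Law represent?
doublings = 400 / 1.5                  # about 267 doublings
growth = 2 ** doublings                # about 2e80

atoms_in_observable_universe = 1e80    # common order-of-magnitude estimate

print(f"growth factor ~ {growth:.1e}")
print(f"atoms in the observable universe ~ {atoms_in_observable_universe:.0e}")
# The growth factor alone is comparable to the number of atoms available,
# which is the sense in which "the entire universe" becomes the computer.
```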


Seth Lloyd is professor of mechanical engineering at MIT. His work focuses on the role of information processing in the universe, including quantum computation and complex systems. He is the author of Programming the Universe.


SETHURAMAN “PANCH” PANCHANATHAN
Better brain-computer interfaces

The evolutionary path of computing will no doubt result in ever-increasing processing capacities through higher-density and lower-power circuits, miniaturization, parallelization, and alternative forms of computing (such as quantum computing). These will address the demands of large-scale and big-data processing as well as the massive adoption of multimedia and multimodal computing in various applications.

However, future computing devices will have to shift from data- and information-level processing to higher levels of cognitive processing. For example, computing devices will be able to understand subtle cues such as intent in human communication rather than explicit cues such as prosody, expressions, and emotions. This will usher in a new era in computing in which the paradigm of humans interacting with computers in an explicit manner at higher levels of sophistication will be augmented by devices that also interact implicitly with humans. This “person-centered” engagement in which man and machine work as collaborative partners will allow for a range of tasks, from simple to complex. Computing devices on-body, in-body, and in the environment, as well as next-generation applications, will require the user to engage in a symbiotic relationship with the devices, termed “coaptive computing.”

Computing devices (like prosthetic devices) working coaptively with the user will assist her in certain tasks that are predetermined for their role and purpose and even learn explicitly through instructions from the user. More importantly, devices need to learn through implicit observations of the interactions between the user and the environment, thereby relieving the user of the usual “mundane” tasks. This will enable users to enhance their capability and function and engage at higher levels of cognition, which, thus far, has not been possible due to the limited capacity for multisensory perception and cognition.

For example, the user may recall only a few encounters with people and things at an event simply because she had a focused engagement with those particular people and objects. However, future computing devices can essentially recall all of the encounters in a “life log,” along with their context. This could prompt or inform the user as appropriate in their subsequent interactions. As coaption becomes more pervasive, the future of brain-computer interfaces will increasingly become a reality.

No longer will we think of a computer chip as just a physical entity, but instead as a ubiquitous device conjoined and operating seamlessly with humans as partners in everyday activities.

Sethuraman “Panch” Panchanathan is the senior vice president of the Office of Knowledge Enterprise Development at Arizona State University. He is also a foundation chair in Computing and Informatics and director of the Center for Cognitive Ubiquitous Computing. Dr. Panchanathan was the founding director of the School of Computing and Informatics and was instrumental in founding the Biomedical Informatics Department at ASU.


KONSTANTIN KAKAES
The end of the “La-Z-Boy era” of sequential programming

The important question to the end-user is not what comes after the chip, but how chips can be designed and integrated with sufficient ingenuity so that processing speed improves even as physics constrains the speed and size of circuits.

Ever since John von Neumann first enunciated the architecture of the modern computer in 1945, processors and memory have improved more quickly than the ability to move data between them, leading to an ever-worsening “von Neumann bottleneck”—the connection between memory and a CPU (or central processing unit).

Because chip features can no longer simply be made smaller, the only way forward is through increasing parallelism—doing many computations at once instead of, as in a classic von Neumann architecture, one computation at a time. (Each computation is essentially a series of logical operations like “AND” and “OR” executed in the correct order by hardware—it’s the basis for how a computer functions.)

Though the first multiprocessor architecture debuted in 1961, the practice didn’t become mainstream until the mid-’00s, when chip companies started placing multiple processing units or “cores” on the same microprocessor. Chips often have two or four cores today. Within a decade, a chip could have hundreds or even thousands of cores. A laptop or mobile device might have one chip with many cores, while supercomputers will be composed (as they are today) of many such chips in parallel, so that a single computer will have as many as a billion processors before the end of the decade, according to Peter Ungaro, the head of supercomputing company Cray.
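To make the sequential-versus-parallel contrast concrete, here is a minimal Python sketch (illustrative only, not tied to any particular chip): the same batch of independent tasks run one at a time and then spread across cores with a process pool.

```python
from concurrent.futures import ProcessPoolExecutor

def count_primes(limit):
    """Count primes below `limit` by deliberately naive trial division."""
    count = 0
    for n in range(2, limit):
        if all(n % d for d in range(2, int(n ** 0.5) + 1)):
            count += 1
    return count

if __name__ == "__main__":
    chunks = [50_000] * 8                                   # independent work items

    sequential = [count_primes(c) for c in chunks]          # one core, one at a time
    with ProcessPoolExecutor() as pool:                     # as many cores as available
        parallel = list(pool.map(count_primes, chunks))

    assert sequential == parallel                           # same answers, less wall time
```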

Figuring out how best to interconnect both many cores on a single chip and many chips to one another is a major challenge. So is how to move a computation forward when it is no longer possible to synchronize all of a chip’s processors with a signal from a central clock, as is done today. New solutions like “transactional memory” will allow different processes to efficiently share memory without introducing errors.
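Real transactional memory is implemented in hardware or in the language runtime, but the underlying idea can be sketched in a few lines of Python: read shared data optimistically, do the work, and commit only if nothing changed underneath, retrying on conflict. The class and method names below are hypothetical, chosen only to illustrate the pattern.

```python
import threading

class VersionedCell:
    """A toy 'transactional' cell: optimistic reads, validated commits."""
    def __init__(self, value=0):
        self.value = value
        self.version = 0
        self._commit_lock = threading.Lock()

    def read(self):
        return self.value, self.version

    def try_commit(self, expected_version, new_value):
        """Commit only if no other thread has committed since our read."""
        with self._commit_lock:
            if self.version != expected_version:
                return False              # conflict: caller retries
            self.value = new_value
            self.version += 1
            return True

def add_one(cell):
    while True:                           # retry loop instead of holding a lock
        value, version = cell.read()
        if cell.try_commit(version, value + 1):
            return

cell = VersionedCell()
threads = [threading.Thread(target=add_one, args=(cell,)) for _ in range(100)]
for t in threads:
    t.start()
for t in threads:
    t.join()
assert cell.value == 100                  # every increment lands exactly once
```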

The overall problem is so difficult because the hardware is only as good as the software, and the software only as good as the hardware. One way around this chicken-and-egg problem will be “autotuning” systems that will replace traditional compilers. Compilers translate a program in a high-level language into a specific set of low-level instructions. Autotuning will instead try out lots of different possible translations of a high-level program to see which works best.
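The autotuning idea can be sketched very simply: generate several equivalent implementations of the same operation, time each one on the machine at hand, and keep the winner. The candidate functions below are hypothetical stand-ins, not output from a real compiler or autotuner.

```python
import timeit

def sum_loop(data):                       # candidate 1: explicit loop
    total = 0
    for x in data:
        total += x
    return total

def sum_builtin(data):                    # candidate 2: built-in sum
    return sum(data)

def sum_chunked(data, chunk=1024):        # candidate 3: summing in blocks
    return sum(sum(data[i:i + chunk]) for i in range(0, len(data), chunk))

candidates = [sum_loop, sum_builtin, sum_chunked]
data = list(range(100_000))

timings = {f.__name__: timeit.timeit(lambda f=f: f(data), number=50)
           for f in candidates}
best = min(timings, key=timings.get)
print(f"autotuner keeps: {best} ({timings[best]:.4f}s for 50 runs)")
```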

Autotuning and transactional memory are just two of many new techniques being developed by computer scientists to take advantage of parallelism. There is no question the new techniques are harder for programmers. One group at Berkeley calls it the end of the “La-Z-Boy era” of sequential programming.

Konstantin Kakaes, a former Economist correspondent in Mexico, is a Schwartz Fellow at The New America Foundation in Washington, D.C.


STEPHEN GOODNICK
Biology-inspired computing

We are rapidly reaching the end of the doubling of transistor density every two years described by Moore’s Law, as we are literally running out of atoms with which to make individual transistors. Recently, nanotechnology has led to many new and exciting materials—such as semiconductor nanowires, graphene, and carbon nanotubes. But as long as computing is based on digital logic (ones or zeros) moving electronic charge around to turn on and off individual transistors, these new materials will only extend Moore’s Law two or three more generations. The fundamental size limits still exist, not to mention limitations due to heat generation. Some new paradigms of non-charge-based computing may emerge that, for example, could theoretically use the spin of an electron or nuclei to store or encode information. However, there are many obstacles to creating a viable, scalable technology based on “spintronics” that can keep us on the path of Moore’s Law.

It’s important to remember, though, that Moore’s Law can be viewed not merely as a doubling of density of transistors every two years, but as a doubling of information processing capability as well. While bare number-crunching operations are most efficiently performed using digital logic, new developments in digital imagery, video, speech recognition, artificial intelligence, etc., require processing vast amounts of data. Nature has much to teach us in terms of how we can efficiently process vast amounts of sensory information in a highly parallel, analog fashion like the brain does, which is fundamentally different than conventional digital computation. Such “neuromorphic” computing systems, which mimic neural-biological functions, may be more efficiently realized with new materials and devices that are not presently on the radar screen.

Similarly, quantum computing may offer a way of addressing specialized problems involving large amounts of parallel information processing. The most likely scenario is that the computer chip of the future will marry a version of our current digital technology to highly parallel, specialized architectures inspired by biological systems, with each performing what it does best. New computational paradigms and architectures together with improved materials and device technologies will likely allow a continued doubling of our information processing capability long after we reach the limits of scaling of conventional transistors.

Stephen Goodnick is a professor of electrical engineering at Arizona State University, the deputy director of ASU Lightworks, and the president of the IEEE Nanotechnology Council.



H.-S. PHILIP WONG
Mind-blowing medical breakthroughs

The 10 fingers, the abacus, mechanical cash registers, vacuum tube-based ENIAC, the transistor, the integrated circuit, the billion-transistor “computer chip” … then what? I suppose that was the line of thinking when this question was posed. Rather than fixating on whether a new “transistor” or a new “integrated circuit” will be invented, it is useful to focus on two key observations: “It will be a long time before we reach the fundamental limits of computing,” and “The technologies we use to build the computer chip will impact many fields outside of computing.”

Advances in computing are reined in by the energy consumption of the computer chip. Today’s transistor consumes in excess of 1,000 times more energy than the kT·ln(2) limit for erasing one bit of information per logical step of computing. Reversible computing, as described by physicist Rolf Landauer and computer scientist Charles Bennett, will reach below the kT·ln(2) limit once a practical implementation is devised. There is plenty of room at the bottom! We will continue to get more computational power for less energy consumed.
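For reference, the kT·ln(2) figure is easy to put a number on. Here is a quick calculation using standard constants; the 300 K room-temperature value is an assumption for illustration, not taken from the essay.

```python
import math

k_B = 1.380649e-23            # Boltzmann constant, J/K (exact SI value)
T = 300.0                     # room temperature, kelvin

landauer_limit = k_B * T * math.log(2)
print(f"kT*ln(2) at 300 K ~ {landauer_limit:.3e} J per bit erased")
# ~2.87e-21 J; the essay notes today's transistors spend over 1,000x this per step.
```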

Now that I have put to rest the inkling that there may be an end to the rapid progress we expect from the computer chip, let’s talk about what else the “computer chip” will bring us in addition to computing and information technology. The semiconductor technology and design methodology that are employed to fabricate the computer chip have already wielded their power in other fields. Tiny cameras in cellphones that allow us to take pictures wherever we go, digitally projected 3-D movies, and LED lighting that is substantially more energy efficient than the incandescent light bulb are all examples of “computer chip” technologies that have already made an impact on society. Enabling technologies that transform the field of biomedical research are in the offing.

The cost of sequencing a genome has dropped faster than Moore’s Law; the technique is based on technologies borrowed from computer chip manufacturing. Nanofabrication techniques developed for the semiconductor industry have enabled massive probing of neural signals, which eventually will lead to a sea change in our understanding of neuroscience. Nanofabricated sensors and actuators, in the style of Fantastic Voyage, are now beginning to be developed and are not completely science fiction. Emulation of the brain, both by brute-force supercomputers and by innovative nanoscale electronic devices, is becoming possible and will reach human scale if the present rate of progress continues.

I am optimistic that what we have experienced in technological progress so far is just the beginning. The societal impact of the “computer chip” and the basic technologies that are the foundations of the “computer chip” will advance knowledge in other fields.

H.-S. Philip Wong is the Willard R. and Inez Kerr Bell Professor in the School of Engineering at Stanford University. He joined Stanford University as a professor of electrical engineering in 2004 after a 16-year research career on the “computer chip” with the IBM T.J. Watson Research Center. He is the co-author (with Deji Akinwande) of the book Carbon Nanotube and Graphene Device Physics.

Tuesday, March 19, 2013


Who says you can't teach an old dog new tricks? -A.T.

Moving Beyond Weapons to Clean Water

Graphic: David Cohen-Tanugi

Interview by Ben Johnson
Marketplace Tech for Monday, March 18, 2013

Defense contractor Lockheed Martin has discovered a way to make desalination 100 times more efficient. And that could have a big impact on bringing clean drinking water to the developing world.

The process is called reverse osmosis, and the material used is graphene -- a lot like the stuff you smudge across paper with your pencil.

"This stuff is so thin and so strong, it's a remarkable compound, it is one atom thick," says Lockheed Martin senior engineer John Stetson. "If you have a piece of paper that represents the thickness of graphene, the closest similar membrane is about the height of a room."

The new material essentially acts as a sieve, allowing water to pass through while salts remain behind. Graphene could make for smaller, cheaper plants that turn salt water into drinking water, but it could also have uses in war zones as a portable water desalinator.

"Lockheed really is concerned with the broadest aspects of global security [and] maintaining safe environments and that includes water," says Stetson.

Monday, March 18, 2013


The technology of the bodhisattva is also available to the wayward.  If you don't think we're in a race against time, read on. -A.T.


SXSW: 'Wiki Weapons' Maker Says 3D Printed Guns 'Are Going To Be Possible Forever'
By Joshua Ostroff. Huffington Post: 03/13/2013 2:45 pm EDT

Cody Wilson of Defense Distributed at SXSW.

The takeaway from SXSW Interactive, the massive annual technology conference in Austin, Texas, is that this year the event (finally) got away from social media and started delving into the physical realm, in particular the coming 3D printer revolution.

But for all the wonderful possibilities that we were told this new technology portends, there was one which was somewhat more ominous — the creation of 3D printed guns.

“What does it mean to have a file that could readily be assembled by a machine into a firearm?” asked Defense Distributed founder Cody Wilson to a surprisingly small crowd (or maybe not that surprising, considering there was a Google Glass demo down the hall). “Is that [file] by itself a firearm? Is it just data? Is it just speech? Where is the offense?” Though he did admit, “it’s not a book; it can become something, so it’s a grey area.”

Now here’s the thing about Wilson. He seems to be an extremely intelligent law student, well-versed in political and philosophical thought (even quoting 19th-century French thinker Pierre-Joseph Proudhon) as well as the intricacies of technology that would make most of us blanch. He is also a self-declared anarchist and a lover of firearms. He will probably be underestimated, and that perhaps will be at our peril.

Wilson’s SXSW talk, which featured just him and some slides, attempted to lay out the relatively short lifespan of the so-called wiki weapon project. A 3D printer takes a digital model of an object and builds a physical replica of it out of resins and polymers. Defense Distributed began as, essentially, a “university project,” but once Wilson got push-back, it went from being a hobby to being a serious undertaking.

He recounts how a major 3D printer company rented him a high-end machine and then, when they found out what he was using it for, not only took it from him but made a criminal referral to the Bureau of Alcohol, Tobacco and Firearms.


“They didn’t just try and put a kibosh on my project, they tried to mortally wound me,” Wilson said, sounding as bitter as you could imagine. In the process, it seems, they turned what was something of a lark into a life’s mission.

Though crowdfunding site Indiegogo took Defense Distributed’s gun-printing project down after complaints, the group still managed to raise $20,000 via PayPal, proving that this is more than the work of a, shall we say, lone gunman.

“Not only can we be successfully defensive with this project, that we can pivot and wrap around the laws quickly, but we realized we can actually go on the offensive with this project,” he said.

Congress has noticed what Wilson is up to, with Rep. Steve Israel sponsoring legislation to ban 3D-printed guns. “He saw an opportunity, ‘3D printed guns are upon us. Let’s legislate them away. Be gone, be gone!’” Wilson mocked, before boasting that his files have already been downloaded 440,000 times. “Does he think law enforcement should have the power to affect your ability to find files on Google?”

Wilson, who doesn’t “view government as a benign institution,” argued these efforts would be ultimately futile. “[Israel] thinks this is how we’re all going to rid ourselves of wiki-weapons, and that’s false,” he said, noting that the industry-friendly law is written to apply to individuals, not gun manufacturers. “We’ve applied for a federal firearms manufacturing license.”

He also said they’ve been working on printing magazines. “It’s a box with a spring, and you can make it. This put us over with the firearms community. They were very ambivalent about us, especially the NRA types, that might seem hard to believe, but when magazines were on the chopping block and we said, ‘look, you could make one of these tonight for $15,’ the point was driven home.”

He noted that his reason for naming the printable magazine software after Dianne Feinstein — the Democratic senator who spearheaded the 1994 assault weapons ban — was that he hoped to associate the two forever. “That’s power,” he said.

After the Sandy Hook massacre, all of Defense Distributed’s gun files got taken down from Makerbot’s “Thingiverse,” a clearinghouse site for the 3D printer community. While this clearly angered Wilson, he also acknowledged that the Newtown elementary school shooting had turned his project into “a political football.”

What was perhaps most disquieting about Wilson’s talk was how much he sounded like the kids who founded Napster, except of course that, despite occasional claims against heavy metal, nobody believes music can kill anyone. Guns, not so much. But Wilson, in his evangelism, refuses to grapple with that, even as he boasts that once his tech is proven and let loose in the wild, the genie won’t go back in its bottle.

“We can pantomime a legislative solution,” he sneered, “but this is going to be possible forever -- and I'm interested in creating that world.”