In labs around the world, researchers are busy creating technologies that will change the way we conduct business and live our lives. These are not the latest crop of gadgets and gizmos: they are completely new technologies that could soon transform computing, medicine, manufacturing, transportation, and our energy infrastructure. Nurturing the people and the culture needed to make the birth of such technological ideas possible is a messy endeavor, as MIT Media Lab cofounder Nicholas Negroponte explains in Creating a Culture of Ideas. But in this special section, Technology Review's editors have identified 10 emerging technologies that we predict will have a tremendous influence in the near future. For each, we've chosen a researcher or research team whose work and vision are driving the field. The profiles, on the following pages, offer a sneak preview of the technology world in the years and decades to come.
Wireless Sensor Networks
Great Duck Island, a 90-hectare expanse of rock and grass off the coast of Maine, is home to one of the world's largest breeding colonies of Leach's storm petrels-and to one of the world's most advanced experiments in wireless networking. Last summer, researchers bugged dozens of the petrels' nesting burrows with small monitoring devices called motes. Each is about the size of its power source-a pair of AA batteries-and is equipped with a processor, a tiny amount of computer memory, and sensors that monitor light, humidity, pressure, and heat. There's also a radio transceiver just powerful enough to broadcast snippets of data to nearby motes and pass on information received from other neighbors, bucket-brigade style.
This is more than the latest in avian intelligence gathering. The motes preview a future pervaded by networks of wireless battery-powered sensors that monitor our environment, our machines, and even us. It's a future that David Culler, a computer scientist at the University of California, Berkeley, has been working toward for the last four years. "It's one of the big opportunities" in information technology, says Culler. "Low-power wireless sensor networks are spearheading what the future of computing is going to look like."
Culler is on partial leave from Berkeley to direct an Intel "lablet" that is perfecting the motes, as well as the hardware and software systems needed to clear the way for wireless networks made up of thousands or even millions of sensors. These networks will observe just about everything, including traffic, weather, seismic activity, the movements of troops on battlefields, and the stresses on buildings and bridges-all on a far finer scale than has been possible before.
Because such networks will be too distributed to have the sensors hard-wired into the electrical or communications grids, the lablet's first challenge was to make its prototype motes communicate wirelessly with minimal battery power. "The devices have to organize themselves in a network by listening to one another and figuring out who can they hear...but it costs power to even listen," says Culler. That meant finding a way to leave the motes' radios off most of the time and still allow data to hop through the network, mote by mote, in much the same way that data on the Internet are broken into packets and routed from node to node.
Until Culler's group attacked the problem, wireless networking had lacked an equivalent to the data-handling protocols that make the Internet work. The lablet's solution: TinyOS, a compact operating system, only a few kilobytes in size, that handles such administrative tasks as encoding data packets for relay and turning on radios only when they're needed. The motes that run TinyOS should cost a few dollars apiece when mass produced and are being field-tested in several locations from Maine to California, where Berkeley seismologists are using them to monitor earthquakes.
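To make the idea concrete, here is a minimal Python sketch, not TinyOS code, of how duty-cycled motes can still move data hop by hop; all names and numbers are illustrative. Each mote keeps its radio asleep most of the time, and a packet advances only when a neighbor closer to the base station happens to be listening.

```python
import random

# Illustrative sketch (not TinyOS): motes keep their radios off most of the
# time and wake briefly to relay readings one hop closer to a base station.
class Mote:
    def __init__(self, mote_id, hops_to_base, duty_cycle=0.01):
        self.mote_id = mote_id
        self.hops_to_base = hops_to_base   # distance, in hops, from the base station
        self.duty_cycle = duty_cycle       # fraction of time the radio is listening
        self.outbox = []                   # packets waiting to be forwarded

    def radio_awake(self):
        # The radio listens only a tiny fraction of the time to save battery.
        return random.random() < self.duty_cycle

    def sense(self):
        # Package a local reading (light, temperature, etc.) for relay.
        self.outbox.append({"src": self.mote_id,
                            "light": random.random(),
                            "temp_c": 20 + random.random()})

    def forward(self, neighbors):
        # Hand packets to an awake neighbor that is closer to the base station.
        closer = [n for n in neighbors
                  if n.hops_to_base < self.hops_to_base and n.radio_awake()]
        if closer and self.outbox:
            next_hop = min(closer, key=lambda n: n.hops_to_base)
            next_hop.outbox.extend(self.outbox)
            self.outbox.clear()

# A toy three-mote chain: a reading from mote 2 hops through mote 1 toward the base (mote 0).
motes = [Mote(i, hops_to_base=i) for i in range(3)]
motes[2].sense()
for _ in range(1000):                      # many wake-up attempts; most find the radios off
    motes[2].forward([motes[1]])
    motes[1].forward([motes[0]])
print(f"Packets that reached the base station's queue: {len(motes[0].outbox)}")
```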
Anyone is free to download and tinker with TinyOS, so researchers outside of Berkeley and Intel can test wireless sensor networks in a range of environments without having to reinvent the underlying technology. Culler's motes have been "a tremendously enabling platform," says Deborah Estrin, director of the Center for Embedded Networked Sensing at the University of California, Los Angeles. Estrin is rigging a nature reserve in the San Jacinto mountains with a dense array of wireless microclimate and imaging sensors.
Others are trying to make motes even smaller. A group led by Berkeley computer scientist Kristofer Pister is aiming for one cubic millimeter-the size of a few dust mites. At that scale, wireless sensors could permeate highway surfaces, building materials, fabrics, and perhaps even our bodies. The resulting data bonanza could vastly increase our understanding of our physical environment-and help us protect our own nests. - Wade Roush
Others in Wireless Sensor Networks:
- Gaetano Borriello (U. Washington; Intel): Small embedded computers and communications protocols
- Deborah Estrin (U. California, Los Angeles): Networking, middleware, data handling, and hardware for distributed sensors and actuators
- Michael Horton (Crossbow Technology): Manufacture of sensors and motes
- Kristofer Pister (U. California, Berkeley): Millimeter-size sensing and communication devices
Injectable Tissue Engineering
Every year, more than 700,000 patients in the United States undergo joint replacement surgery. The procedure-in which a knee or a hip is replaced with an artificial implant-is highly invasive, and many patients delay the surgery for as long as they can. Jennifer Elisseeff, a biomedical engineer at Johns Hopkins University, hopes to change that with a treatment that does away with surgery entirely: injectable tissue engineering. She and her colleagues have developed a way to inject joints with specially designed mixtures of polymers, cells, and growth stimulators that solidify and form healthy tissue. "We're not just trying to improve the current therapy," says Elisseeff. "We're really trying to change it completely."
Elisseeff is part of a growing movement that is pushing the bounds of tissue engineering-a field researchers have long hoped would produce lab-grown alternatives to transplanted organs and tissues. For the last three decades, researchers have focused on growing new tissues on polymer scaffolds in the lab. While this approach has had success producing small amounts of cartilage and skin, researchers have had difficulty keeping cells alive on larger scaffolds. And even if those problems could be worked out, surgeons would still have to implant the lab-grown tissues. Now Elisseeff, along with other academic and industry researchers, is turning to injectable systems that are less invasive and far cheaper. Many of the first tissue-engineering applications to reach the market could be delivered by syringe rather than by implant, and Elisseeff is pushing to make that happen as soon as possible.
Elisseeff and her colleagues have used an injectable system to grow cartilage in mice. The researchers added cartilage cells to a light-sensitive liquid polymer and injected it under the skin on the backs of mice. They then shone ultraviolet light through the skin, causing the polymer to harden and encapsulate the cells. Over time, the cells multiplied and developed into cartilage. To test the feasibility of the technique for minimally invasive surgery, the researchers injected the liquid into the knee joints of cadavers. The surgeons used a fiber-optic tube to view the hardening process on a television monitor. "This has huge implications," says James Wenz, an orthopedic surgeon at Johns Hopkins who is collaborating with Elisseeff.
While most research on injectable systems has focused on cartilage and bone, observers say this technology could be extended to tissues such as those of the liver and heart. The method could be used to replace diseased portions of an organ or to enhance its functioning, says Harvard University pediatric surgeon Anthony Atala. In the case of heart failure, instead of opening the chest and surgically implanting an engineered valve or muscle tissue, he says, simply injecting the right combination of cells and signals might do the trick.
For Elisseeff and the rest of the field, the next frontier lies in a powerful new tool: stem cells. Derived from sources like bone marrow and embryos, stem cells have the ability to differentiate into numerous types of cells. Elisseeff and her colleagues have exploited that ability to grow new cartilage and bone simultaneously-one of the trickiest feats in tissue engineering. They made layers of a polymer-and-stem-cell mixture, infusing each layer with specific chemical signals that triggered the cells to develop into either bone or cartilage. Such hybrid materials would simplify knee replacement surgeries, for instance, that require surgeons to replace the top of the shin bone and the cartilage above it.
Don't expect tissue engineers to grow entire artificial organs anytime soon. Elisseeff, for one, is aiming for smaller advances that will make tissue engineering a reality within the decade. For the thousands of U.S. patients who need new joints every year, such small feats could be huge. - Alexandra M. Goho
Others in Injectable Tissue Engineering:
- Anthony Atala (Harvard Medical School): Cartilage
- Jim Burns (Genzyme): Cartilage
- Antonios Mikos (Rice U.): Bone and cardiovascular tissue
- David Mooney (U. Michigan): Bone and cartilage
Nano Solar Cells
The sun may be the only energy source big enough to wean us off fossil fuels. But harnessing its energy depends on silicon wafers that must be produced by the same exacting process used to make computer chips. The expense of the silicon wafers raises solar-power costs to as much as 10 times the price of fossil fuel generation-keeping it an energy source best suited for satellites and other niche applications.
Paul Alivisatos, a chemist at the University of California, Berkeley, has a better idea: he aims to use nanotechnology to produce a photovoltaic material that can be spread like plastic wrap or paint. Not only could the nano solar cell be integrated with other building materials, it also offers the promise of cheap production costs that could finally make solar power a widely used electricity alternative.
Alivisatos's approach begins with electrically conductive polymers. Other researchers have attempted to concoct solar cells from these plastic materials (see "Solar on the Cheap," TR January/February 2002), but even the best of these devices aren't nearly efficient enough at converting solar energy into electricity. To improve the efficiency, Alivisatos and his coworkers are adding a new ingredient to the polymer: nanorods, bar-shaped semiconducting inorganic crystals measuring just seven nanometers by 60 nanometers. The result is a cheap and flexible material that could provide the same kind of efficiency achieved with silicon solar cells. Indeed, Alivisatos hopes that within three years, Nanosys-a Palo Alto, CA, startup he cofounded-will roll out a nanorod solar cell that can produce energy with the efficiency of silicon-based systems.
The prototype solar cells he has made so far consist of sheets of a nanorod-polymer composite just 200 nanometers thick. Thin layers of an electrode sandwich the composite sheets. When sunlight hits the sheets, they absorb photons, exciting electrons in the polymer and the nanorods, which make up 90 percent of the composite. The result is a useful current that is carried away by the electrodes.
Early results have been encouraging. But several tricks now in the works could further boost performance. First, Alivisatos and his collaborators have switched to a new nanorod material, cadmium telluride, which absorbs more sunlight than cadmium selenide, the material they used initially. The scientists are also aligning the nanorods in branching assemblages that conduct electrons more efficiently than do randomly mixed nanorods. "It's all a matter of processing," Alivisatos explains, adding that he sees "no inherent reason" why the nano solar cells couldn't eventually match the performance of top-end, expensive silicon solar cells.
The nanorod solar cells could be rolled out, ink-jet printed, or even painted onto surfaces, so "a billboard on a bus could be a solar collector," says Nanosys's director of business development, Stephen Empedocles. He predicts that cheaper materials could create a $10 billion annual market for solar cells, dwarfing the growing market for conventional silicon cells.
Alivisatos's nanorods aren't the only technology entrants chasing cheaper solar power. But whether or not his approach eventually revolutionizes solar power, he is bringing novel nanotechnology strategies to bear on the problem. And that alone could be a major contribution to the search for a better solar cell. "There will be other research groups with clever ideas and processes-maybe something we haven't even thought of yet," says Alivisatos. "New ideas and new materials have opened up a period of change. It's a good idea to try many approaches and see what emerges."
Thanks to nanotechnology, those new ideas and new materials could transform the solar cell market from a boutique source to the Wal-Mart of electricity production. - Eric Scigliano
Others in Nano Solar Cells:
- Richard Friend (U. Cambridge): Fullerene-polymer composite solar cells
- Michael Grätzel (Swiss Federal Institute of Technology): Nanocrystalline dye-sensitized solar cells
- Alan Heeger (U. California, Santa Barbara): Fullerene-polymer composite solar cells
- N. Serdar Sariciftci (Johannes Kepler U.): Polymer and fullerene-polymer composite solar cells
Mechatronics
To improve everything from fuel economy to performance, automotive researchers are turning to "mechatronics," the integration of familiar mechanical systems with new electronic components and intelligent-software control. Take brakes. In the next five to 10 years, electromechanical actuators will replace hydraulic cylinders; wires will replace brake fluid lines; and software will mediate between the driver's foot and the action that slows the car. And because lives will depend on such mechatronic systems, Rolf Isermann, an engineer at Darmstadt University of Technology in Germany, is using software that can identify and correct for flaws in real time to make sure the technology functions impeccably. "There is a German word for it: gründlich," he says. "It means you do it really right."
In order to do mechatronic braking right, Isermann's group is developing software that tracks data from three sensors: one detects the flow of electrical current to the brake actuator; a second tracks the actuator's position; and the third measures its clamping force. Isermann's software analyzes those numbers to detect faults-such as an increase in friction-and flashes a dashboard warning light, so the driver can get the car serviced before the fault leads to failure.
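The sketch below illustrates this model-based fault-detection idea in Python; it is not Isermann's actual software, and the nominal model and thresholds are invented for illustration. The measured clamping force is compared against what the actuator current and position predict, and a large residual, such as one caused by rising friction, triggers a dashboard warning.

```python
# Hedged sketch of model-based brake fault detection (illustrative constants only).
def expected_force(current_amps, position_mm, gain=50.0, stiffness=8.0):
    # Simplified nominal model: force grows with drive current and pad engagement.
    # gain and stiffness stand in for real calibration values.
    return gain * current_amps + stiffness * max(position_mm, 0.0)

def check_brake_actuator(current_amps, position_mm, measured_force_n, tolerance_n=40.0):
    residual = measured_force_n - expected_force(current_amps, position_mm)
    if abs(residual) > tolerance_n:
        return f"FAULT: residual {residual:+.1f} N exceeds {tolerance_n} N, warn driver"
    return "OK"

# Normal operation vs. a reading consistent with increased friction in the actuator.
print(check_brake_actuator(current_amps=4.0, position_mm=2.0, measured_force_n=220.0))
print(check_brake_actuator(current_amps=4.0, position_mm=2.0, measured_force_n=120.0))
```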
"Everybody initially was worried about the safety of electronic devices. I think people are now becoming aware they are safer than mechanical ones," says Karl Hedrick, a mechanical engineer at the University of California, Berkeley. "A large part of the reason they are safer is you can build in fault diagnoses and fault tolerance. Isermann is certainly in the forefront of people developing technology to do this."
Isermann is also working to make engines run cleaner. He is developing software that detects combustion misfires, which can damage catalytic converters and add to pollution. Because it's not practical to have a sensor inside a combustion chamber, Isermann's system relies on data from sensors that measure oxygen levels in exhaust and track the speed of the crankshaft (the mechanism that delivers the engine's force to the wheels). Tiny fluctuations in crankshaft speed accompanied by changes in emissions reveal misfires. If a misfire is detected, the software can warn the driver or, in the future, might automatically fix the problem.
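A similarly hedged sketch of the misfire logic, with invented thresholds rather than real engine calibration data: a misfiring cylinder momentarily slows the crankshaft and leaves unburned oxygen in the exhaust, and the software flags the combination.

```python
# Illustrative misfire detection: look for a momentary crankshaft slowdown that
# coincides with an oxygen spike in the exhaust. Thresholds are invented.
def detect_misfire(crank_speeds_rpm, exhaust_o2_pct,
                   rpm_drop_threshold=30.0, o2_rise_threshold=0.5):
    baseline = sum(crank_speeds_rpm) / len(crank_speeds_rpm)
    rpm_dip = baseline - min(crank_speeds_rpm)            # momentary loss of speed
    o2_rise = max(exhaust_o2_pct) - min(exhaust_o2_pct)   # unburned charge in the exhaust
    return rpm_dip > rpm_drop_threshold and o2_rise > o2_rise_threshold

speeds = [3000, 2998, 3002, 2950, 3001, 2999]   # one revolution noticeably slower
oxygen = [0.8, 0.8, 0.9, 1.6, 0.9, 0.8]         # matching oxygen spike
print("Misfire detected" if detect_misfire(speeds, oxygen) else "Engine OK")
```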
Partnerships with manufacturing companies-including DaimlerChrysler and Continental Teves-merge the basic research from Isermann's group with industry's development of such technologies in actual cars. Isermann says that "80 to 90 percent of the innovations in the development of engines and cars these days are due to electronics and mechatronics." Until recent years, mechatronic systems were found mainly in such big-ticket items as aircraft and industrial equipment or in small precision components for products such as cameras and photocopiers. But new applications in cars and trucks have helped prompt a surge in the number of groups working on mechatronics. The trend has been fueled by falling prices for microprocessors and sensors, more stringent vehicle-emissions regulations in Europe and California, and automakers' wanting to enhance their vehicles with additional comfort and performance features.
Although the luxury market looms largest today-new high-end models from BMW contain more than 70 microprocessors that control more than 120 tiny motors-mechatronics will be moving into the wider car market within five years, says Lino Guzzella, codirector of the Institute of Measurement and Control at the Swiss Federal Institute of Technology. And with software like Isermann's on board, the electronic guts of these new driving machines should be as sturdy and reliable as steel. - David Talbot
Others in Mechatronics:
- Lino Guzzella (Swiss Federal Institute of Technology): Engine modeling and control systems
- Karl Hedrick and Masayoshi Tomizuka (U. California, Berkeley): Control systems and theory
- Uwe Kiencke (U. Karlsruhe): Digital signal processing
- Philip Koopman (Carnegie Mellon U.): Fault tolerance in control software
- Lars Nielsen (Linköping U.): Engine control systems
Grid Computing
In the 1980s "internetworking protocols" allowed us to link any two computers, and a vast network of networks called the Internet exploded around the globe. In the 1990s the "hypertext transfer protocol" allowed us to link any two documents, and a vast, online library-cum-shopping mall called the World Wide Web exploded across the Internet. Now, fast-emerging "grid protocols" might allow us to link almost anything else: databases, simulation and visualization tools, even the number-crunching power of the computers themselves. And we might soon find ourselves in the midst of the biggest explosion yet.
"We're moving into a future in which the location of [computational] resources doesn't really matter," says Argonne National Laboratory's Ian Foster. Foster and Carl Kesselman of the University of Southern California's Information Sciences Institute pioneered this concept, which they call grid computing in analogy to the electric grid, and built a community to support it. Foster and Kesselman, along with Argonne's Steven Tuecke, have led development of the Globus Toolkit, an open-source implementation of grid protocols that has become the de facto standard. Such protocols promise to give home and office machines the ability to reach into cyberspace, find resources wherever they may be, and assemble them on the fly into whatever applications are needed.
Imagine, says Kesselman, that you're the head of an emergency response team that's trying to deal with a major chemical spill. "You'll probably want to know things like, What chemicals are involved? What's the weather forecast, and how will that affect the pattern of dispersal? What's the current traffic situation, and how will that affect the evacuation routes?" If you tried to find answers on today's Internet, says Kesselman, you'd get bogged down in arcane log-in procedures and incompatible software. But with grid computing it would be easy: the grid protocols provide standard mechanisms for discovering, accessing, and invoking just about any online resource, simultaneously building in all the requisite safeguards for security and authentication.
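A toy Python sketch of that discover-then-invoke pattern follows; it is a conceptual illustration, not the Globus Toolkit API, and the service names are invented. Resources register their capabilities once, and a client can find and call them without knowing where they run.

```python
# Conceptual sketch of grid-style resource discovery and invocation (not Globus code).
class GridRegistry:
    def __init__(self):
        self.services = {}

    def register(self, name, capabilities, handler):
        self.services[name] = {"capabilities": set(capabilities), "handler": handler}

    def discover(self, capability):
        # Standard discovery step: find every resource offering the needed capability.
        return [name for name, s in self.services.items() if capability in s["capabilities"]]

    def invoke(self, name, **kwargs):
        # Standard access step: the caller need not know where the service runs.
        return self.services[name]["handler"](**kwargs)

registry = GridRegistry()
registry.register("weather-model", ["dispersal-forecast"],
                  lambda site: f"plume drifting northeast from {site}")
registry.register("traffic-db", ["evacuation-routes"],
                  lambda site: f"route 9 clear, route 12 congested near {site}")

# The emergency-response client assembles the answers on the fly.
for capability in ["dispersal-forecast", "evacuation-routes"]:
    for service in registry.discover(capability):
        print(service, "->", registry.invoke(service, site="chemical spill"))
```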
Construction is under way on dozens of distributed grid computers around the world-virtually all of them employing Globus Toolkit. They'll have unprecedented computing power and applications ranging from genetics to particle physics to earthquake engineering. The $88 million TeraGrid of the U.S. National Science Foundation will be one of the largest. When it's completed later this year, the general-purpose, distributed supercomputer will be capable of some 21 trillion floating-point operations per second, making it one of the fastest computational systems on Earth. And grid computing is experiencing an upsurge of support from industry heavyweights such as IBM, Sun Microsystems, and Microsoft. IBM, which is a primary partner in the TeraGrid and several other grid projects, is beginning to market an enhanced commercial version of the Globus Toolkit.
Out of Foster and Kesselman's work on protocols and standards, which began in 1995, "this entire grid movement emerged," says Larry Smarr, director of the California Institute for Telecommunications and Information Technology. What's more, Smarr and others say, Foster and Kesselman have been instrumental in building a community around grid computing and in advocating its integration with two related approaches: peer-to-peer computing, which brings to bear the power of idle desktop computers on big problems in the manner made famous by SETI@home, and Web services, in which access to far-flung computational resources is provided through enhancements to the Web's hypertext protocol. By helping to merge these three powerful movements, Foster and Kesselman are bringing the grid revolution much closer to reality. And that could mean seamless and ubiquitous access to unfathomable computer power. - M. Mitchell Waldrop
Others in Grid Computing:
- Andrew Chien (Entropia): Peer-to-Peer Working Group
- Andrew Grimshaw (Avaki; U. Virginia): Commercial grid software
- Miron Livny (U. Wisconsin, Madison): Open-source system to harness idle workstations
- Steven Tuecke (Argonne National Laboratory): Globus Toolkit
Molecular Imaging
At Massachusetts General Hospital's Center for Molecular Imaging Research-a bustling facility nestled next to an old Navy shipyard-Umar Mahmood uses a digital camera to peer through the skin of a living mouse into a growing tumor. Using fluorescent tags and calibrated filters, the radiologist actually sees the effects of the cancer on a molecular scale: destructive enzymes secreted by the tumor show up on Mahmood's computer screen as splotches of red, yellow, and green. In the future, he says, such "molecular imaging" may lead to earlier detection of human disease, as well as more effective therapies.
Molecular imaging-shorthand for a number of techniques that let researchers watch genes, proteins, and other molecules at work in the body-has exploded, thanks to advances in cell biology, biochemical agents, and computer analysis. Research groups around the world are joining the effort to use magnetic, nuclear, and optical imaging techniques to study the molecular interactions that underlie biological processes. Unlike x-ray, ultrasound, and other conventional techniques that give doctors only such anatomical clues as the size of a tumor, molecular imaging could help track the underlying causes of disease. The appearance of an unusual protein in a cluster of cells, say, might signal the onset of cancer. Mahmood is helping to lead the effort to put the technology into medical practice.
It is challenging, though, to detect a particular molecule in the midst of cellular activity. When researchers inject a tag that binds to the molecule, they face the problem of distinguishing the bound tags from the extra, unbound tags. So Mahmood has worked with chemists to develop "smart probes" that change their brightness or their magnetic properties when they meet their target. "This is a big deal," says David Piwnica-Worms, director of the Molecular Imaging Center at Washington University in St. Louis. The method, he explains, "allows you to see selected proteins and enzymes that you might miss with standard tracer techniques."
In a series of groundbreaking experiments, Mahmood's team treated cancerous mice with a drug meant to block the production of an enzyme that promotes tumor growth. The researchers then injected fluorescent probes designed to light up in the presence of that enzyme. Under an optical scanner, treated tumors showed up as less fluorescent than untreated tumors, demonstrating the potential of molecular imaging to monitor treatments in real time-rather than waiting months to see whether a tumor shrinks. "The big goal is to select the optimum therapy for a patient and then to check that, say, a drug is hitting a particular receptor," says John Hoffman, director of the Molecular Imaging Program at the National Cancer Institute. What's more, molecular imaging could be used to detect cancer signals that precede anatomical changes by months or years, eliminating the need for surgeons to cut out a piece of tissue to make a diagnosis. "At the end of the day, we may replace a number of biopsies with imaging," Mahmood says.
In Mahmood's lab, clinical trials are under way for magnetic resonance imaging of blood vessel growth-an early indicator of tumor growth and other changes. For more advanced techniques such as those used in the mouse cancer study, clinical trials are two years away. The big picture: 10 years down the road, molecular imaging may take the place of mammograms, biopsies, and other diagnostic techniques. Although it won't replace conventional imaging entirely, says Mahmood, molecular imaging will have a profound effect both on basic medical research and on high-end patient care. Indeed, as his work next door to the shipyard makes clear, an important new field of biotechnology has set sail. - Gregory T. Huang
Others in Molecular Imaging:
- Ronald Blasberg (Memorial Sloan-Kettering Cancer Center): Imaging of gene expression
- Harvey Herschman (U. California, Los Angeles): Tracking of gene therapy, gene activities
- David Piwnica-Worms (Washington U.): Protein interactions, imaging tools
- Patricia Price (U. Manchester): Clinical oncology, imaging drug targets
- Ralph Weissleder (Harvard Medical School): Cell tracking, molecular targets, drug discovery
Nanoimprint Lithography
A world of Lilliputian sensors, transistors, and lasers is in development at nanotechnology labs worldwide. These devices point to a future of ultrafast and cheap electronics and communications. But making nanotechnology relevant beyond the lab is difficult because of the lack of suitable manufacturing techniques. The tools used to mass-produce silicon microchips are far too blunt for nanofabrication, and specialized lab methods are far too expensive and time-consuming to be practical. "Right now everybody is talking about nanotechnology, but the commercialization of nanotechnology critically depends upon our ability to manufacture," says Princeton University electrical engineer Stephen Chou.
A mechanism just slightly more sophisticated than a printing press could be the answer, Chou believes. Simply by stamping a hard mold into a soft material, he can faithfully imprint features smaller than 10 nanometers across. Last summer, in a dramatic demonstration of the potential of the technique, Chou showed that he could make nano features directly in silicon and metal. By flashing the solid with a powerful laser, he melted the surface just long enough to press in the mold and imprint the desired features.
Although Chou was not the first researcher to employ the imprinting technique, which some call soft lithography, his demonstrations have set the bar for nanofabrication, says John Rogers, a chemist at Lucent Technologies' Bell Labs. "The kind of revolution that he has achieved is quite remarkable in terms of speed, area of patterning, and the smallest-size features that are possible. It's leading edge," says Rogers. Ultimately, nanoimprinting could become the method of choice for cheap and easy fabrication of nano features in such products as optical components for communications and gene chips for diagnostic screening. Indeed, NanoOpto, Chou's startup in Somerset, NJ, is already shipping nanoimprinted optical-networking components. And Chou has fashioned gene chips that rely on nano channels imprinted in glass to straighten flowing DNA molecules, thereby speeding genetic tests.
Chou is also working to show that nanoimprinting can tackle lithography's grand challenge: how to etch nano patterns into silicon for future generations of high-performance microchips. Chou says he can already squeeze at least 36 times as many transistors onto a silicon wafer as the most advanced commercial lithography tools. But to make complex chips, which have many layers, perfect alignment must be maintained through as many as 30 stamping steps. For Chou's process, in which heat could distort the mold and the wafer, that means each round of heating and imprinting must be quick. With his recent laser-heating innovations, Chou has cut imprinting time from 10 seconds to less than a microsecond. As a result, he has demonstrated the ability to make basic multilayered chips, and he says complex processors and memory chips are next. Chou's other startup, Nanonex in Princeton, NJ, is busy negotiating alliances with lithography tool manufacturers.
Chou's results come at a time when the chipmaking industry has been spending billions of dollars developing exotic fabrication techniques that use everything from extreme ultraviolet light to electron beams. But, says Stanford University nanofabrication expert R. Fabian Pease, "If you look at what the extreme ultraviolet and the electron projection lithography techniques have actually accomplished, [imprint lithography], which has had a tiny fraction of the investment, is looking awfully good." This is sweet vindication for Chou, who began working on nanofabrication in the 1980s, before most of his colleagues recognized that nano devices would be worth manufacturing. "Nobody questions the manufacturing ability of nanoimprint anymore," says Chou. "Suddenly the doubt is gone." - Peter Fairley
Others in Nanoimprint Lithography:
- Yong Chen (Hewlett-Packard): High-density molecular electronic memory
- John Rogers (Bell Labs): Patterning polymer electronics
- George Whitesides (Harvard U.): Contact printing on flexible substrates
- Grant Willson (U. Texas; Molecular Imprints): High-density microchip fabrication
Software Assurance
Computers crash. That's a fact of life. And when they do, it's usually because of a software bug. Generally, the consequences are minimal-a muttered curse and a reboot. But when the software is running complex distributed systems such as those that support air traffic control or medical equipment, a bug can be very expensive, and even cost lives. To help avoid such disasters, Nancy Lynch and Stephen Garland are creating tools they hope will yield nearly error-free software.
Working together at MIT's Laboratory for Computer Science, Lynch and Garland have developed a computer language and programming tools for making software development more rigorous, or as Garland puts it, to "make software engineering more like an engineering discipline." Civil engineers, Lynch points out, build and test a model of a bridge before anyone constructs the bridge itself. Programmers, however, often start with a goal and, perhaps after some discussion, simply sit down to write the software code. Lynch and Garland's tools allow programmers to model, test, and reason about software before they write it. It's an approach that's unique among efforts launched recently by the likes of Microsoft, IBM, and Sun Microsystems to improve software quality and even to simplify and improve the programming process itself.
Like many of these other efforts, Lynch and Garland's approach starts with a concept called abstraction. The idea is to begin with a high-level summary of the goals of the program and then write a series of progressively more specific statements that describe both steps the program can take to reach its goals and how it should perform those steps. For example, a high-level abstraction for an aircraft collision avoidance system might specify that corrective action take place whenever two planes are flying too close. A lower-level design might have the aircraft exchange messages to determine which should ascend and which should descend.
Lynch and Garland have taken the idea of abstraction further. A dozen years ago, Lynch developed a mathematical model that made it easier for programmers to tell if a set of abstractions would make a distributed system behave correctly. With this model, she and Garland created a computer language programmers can use to write "pseudocode" that describes what a program should do. With his students, Garland has also built tools to prove that lower levels of abstractions relate correctly to higher levels and to simulate a program's behavior before it is translated into an actual programming language like Java. By directing programmers' attention to many more possible bug-revealing circumstances than might be checked in typical software tests, the tools help assure that the software will always work properly. Once software has been thus tested, a human can easily translate the pseudocode into a standard programming language.
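As a rough illustration of refinement between abstraction levels, in Python rather than the specification language Lynch and Garland actually use, and with invented altitudes and thresholds: the high-level spec below demands only that corrective action occur whenever two aircraft are too close, the lower-level design decides which plane climbs and which descends, and a simple simulation checks that the design never violates the spec.

```python
import random

# Toy refinement check (not the IOA language): does the concrete design always
# satisfy the abstract requirement "act whenever separation is lost"?
TOO_CLOSE_FT = 1000

def spec_requires_action(alt_a_ft, alt_b_ft):
    # High-level abstraction: the only obligation is to act when planes are too close.
    return abs(alt_a_ft - alt_b_ft) < TOO_CLOSE_FT

def design_step(alt_a_ft, alt_b_ft, climb_ft=500):
    # Lower-level design: the higher aircraft climbs, the lower one descends.
    if abs(alt_a_ft - alt_b_ft) < TOO_CLOSE_FT:
        if alt_a_ft >= alt_b_ft:
            return alt_a_ft + climb_ft, alt_b_ft - climb_ft
        return alt_a_ft - climb_ft, alt_b_ft + climb_ft
    return alt_a_ft, alt_b_ft

# Simulate many random encounters and check the design against the spec -- the kind
# of bug hunting a model-based tool automates far more exhaustively.
for _ in range(10000):
    a, b = random.randint(30000, 31000), random.randint(30000, 31000)
    new_a, new_b = design_step(a, b)
    if spec_requires_action(a, b):
        assert abs(new_a - new_b) >= TOO_CLOSE_FT, "design violates the high-level spec"
print("Design satisfied the separation requirement in every simulated encounter")
```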
Not all computer scientists agree that it is possible to prove software error free. Still, says Shari Pfleeger, a computer scientist for Rand in Washington, DC, mathematical methods like Lynch and Garland's have a place in software design. "Certainly using it for the most critical parts of a large system would be important, whether or not you believe you're getting 100 percent of the problems out," Pfleeger says.
While some groups have started working with Lynch and Garland's software, the duo is pursuing a system for automatically generating Java programs from highly specified pseudocode. The aim, says Garland, is to "cut human interaction to near zero" and eliminate transcription errors. Collaborator Alex Shvartsman, a University of Connecticut computer scientist, says, "A tool like this will take us slowly but surely to a place where systems are much more dependable than they are today." And whether we're boarding planes or going to the hospital, we can all appreciate that goal. - Erika Jonietz
Others in Software Assurance:
- Gerard Holzmann (Bell Labs): Software to detect bugs in networked computers
- Charles Howell (Mitre): Benchmarks for software assurance
- Charles Simonyi (Intentional Software): Programming tools to improve software
- Douglas Smith (Kestrel Institute): Mechanized software development
Glycomics
James Paulson, a researcher at the Scripps Research Institute in La Jolla, CA, lifts a one-liter, orange-capped bottle from his desk. The bottle is filled with sugar, and Paulson estimates that, had the substance been purchased from a chemical supply house, it would have cost about $15 million. "If I could only sell it," Paulson jokes, admiring what looks like the chunky, raw sugar served at health food restaurants.
In fact, Cytel, a biotech company Paulson once helped run, synthesized the sugar-one of thousands made by the human body-with hopes it could be sold to truly boost health. Cytel's aim was to turn the sugar into a drug that could tame the immune system to minimize damage following heart attacks and surgery. That ambition failed, but the effort to understand and ultimately harness sugars-a field called glycomics-is thriving. And Paulson, who has gone on to cofound Abaron Biosciences in La Jolla, CA, is leading the way, developing new glycomic drugs that could have an impact on health problems ranging from rheumatoid arthritis to the spread of cancer cells.
The reason for the excitement around glycomics is that sugars have a vital, albeit often overlooked, function in the body. In particular, sugars play a critical role in stabilizing and determining the function of proteins through a process called glycosylation, in which sugar units are attached to other molecules including newly made proteins. "If you don't have any glycosylation, you don't have life," says Paulson.
By manipulating glycosylation or sugars themselves, researchers hope to shut down disease processes, create new drugs, and improve existing ones. Biotech giant Amgen, for instance, made a more potent version of its best-selling drug (a protein called erythropoietin, which boosts red-blood-cell production) by attaching two extra sugars to the molecule. Other companies such as GlycoGenesys, Progenics Pharmaceuticals, and Oxford Glycoscience have glycomic drugs in human tests for ailments ranging from Gaucher's disease to colorectal cancer. "The medical potential...is absolutely enormous," says Abaron cofounder Jamey Marth, a geneticist at the University of California, San Diego.
Despite the importance of sugars, efforts to unravel their secrets long remained in the shadows of research into genes and proteins-in part because there is no simple "code" that determines sugars' structures. But over the last few decades, researchers have slowly uncovered clues to sugars' functions. In the late 1980s, Paulson and his team isolated a gene for one of the enzymes responsible for glycosylation. Since that watershed event, scientists have been piecing together an ever more detailed understanding of the ways sugars can in some instances ensure healthy functioning and in others make us susceptible to disease.
It's a gargantuan task. Researchers estimate that the human genome contains as many as 40,000 genes, and each gene can code for several proteins. Sugars modify many of those proteins, and various cell types attach the same sugars in different ways, forming a variety of branching structures, each with a unique function. "It's a nightmare" to figure all this out, says Paulson. "In order for the field to progress rapidly, we need to bring together the experts in the various subfields to think about the problems of bridging the technologies and beginning to move toward a true glycomics approach." In an attempt to do just that, Paulson heads the Consortium for Functional Glycomics. The group, comprising more than 40 academics from a number of disciplines, has a five-year, $34 million grant from the National Institutes of Health.
Despite this large-scale effort and healthy dose of federal funding, however, Paulson stresses that the consortium cannot detail every sugar in the body. "We're just taking a bite out of the apple." But what a sweet, large apple it is. - Jon Cohen
Others in Glycomics:
- Carolyn Bertozzi (U. California, Berkeley; Thios Pharmaceuticals): Glycosylation and receptor binding in disease
- Richard Cummings (U. Oklahoma): Sugars in cell adhesion
- Stuart Kornfeld (Washington U. School of Medicine): Pathways of glycosylation and genetic disorders
- John Lowe (U. Michigan): Sugars in immunity and cancer
- Jamey Marth (U. California, San Diego; Abaron Biosciences): Sugars in physiology and disease
Quantum Cryptography
The world runs on secrets. Governments, corporations, and individuals-to say nothing of Internet-based businesses-could scarcely function without secrecy. Nicolas Gisin of the University of Geneva is in the vanguard of a technological movement that could fortify the security of electronic communications. Gisin's tool, called quantum cryptography, can transmit information in such a way that any effort to eavesdrop will be detectable.
The technology relies on quantum physics, which applies at atomic dimensions: any attempt to observe a quantum system inevitably alters it. After a decade of lab experiments, quantum cryptography is approaching feasibility. "We can now think about using it for practical purposes," says Richard Hughes, a quantum cryptography pioneer at the Los Alamos National Laboratory in New Mexico. Gisin-a physicist and entrepreneur-is leading the charge to bring the technology to market.
The company that Gisin spun off from his University of Geneva laboratory in 2001, id Quantique, makes the first commercially available quantum-cryptography system, he says. The PC-size prototype system includes a random-number generator (essential for creating a decryption key) and devices that emit and detect the individual photons of light that make up the quantum signal.
Conventional cryptographers concentrate on developing strong digital locks to keep information from falling into the wrong hands. But even the strongest lock is useless if someone steals the key. With quantum cryptography, "you can be certain that the key is secure," says Nabil Amer, manager of the physics of information group at IBM Research. Key transmission takes the form of photons whose direction of polarization varies randomly. The sender and the intended recipient compare polarizations, photon by photon. Any attempt to tap this signal alters the polarizations in a way that the sender and intended recipient can detect. They then transmit new keys until one gets through without disturbance.
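The following simplified simulation, in the spirit of the BB84 protocol with illustrative parameters (real systems must also handle noise and authentication), shows why eavesdropping is detectable: the sender encodes bits in randomly chosen polarization bases, the receiver measures in random bases, the two keep only the matching-basis bits, and an interceptor's wrong-basis measurements show up as a sharply elevated error rate in the sampled key.

```python
import random

# Stripped-down quantum key exchange sketch: eavesdropping disturbs the photons,
# so a sample of the sifted key reveals an intruder as an elevated error rate.
def send_and_measure(n_photons, eavesdropper=False):
    sender_bits = [random.randint(0, 1) for _ in range(n_photons)]
    sender_bases = [random.choice("+x") for _ in range(n_photons)]
    receiver_bases = [random.choice("+x") for _ in range(n_photons)]
    received = []
    for bit, s_basis, r_basis in zip(sender_bits, sender_bases, receiver_bases):
        if eavesdropper and random.choice("+x") != s_basis:
            bit = random.randint(0, 1)      # Eve's wrong-basis measurement randomizes the photon
        received.append(bit if r_basis == s_basis else random.randint(0, 1))
    # Keep only positions where sender and receiver used the same basis (the sifted key).
    kept = [(sb, rb) for sb, rb, a, b
            in zip(sender_bits, received, sender_bases, receiver_bases) if a == b]
    errors = sum(1 for sb, rb in kept if sb != rb)
    return errors / len(kept)

print(f"Error rate without eavesdropping: {send_and_measure(20000):.2%}")
print(f"Error rate with eavesdropping:    {send_and_measure(20000, eavesdropper=True):.2%}")
```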
Quantum cryptography is still ahead of its time. Nonquantum encryption schemes such as the public-key systems now commonly used in business have yet to be cracked. But the security of public-key systems relies on the inability of today's computers to work fast enough to break the code. Ultimately, as computers get faster, this defense will wear thin. Public-key encryption, Gisin says, "may be good enough today, but someone, someday, will find a way to crack it. Only through quantum cryptography is there a guarantee that the coded messages sent today will remain secret forever."
Gisin has no illusions about the challenges he faces. For one thing, quantum cryptography works only over the distance a light pulse can travel through the air or an optical fiber without a boost; the process of amplification destroys the quantum-encoded information. Gisin's team holds the world's distance record, having transmitted a quantum key over a 67-kilometer length of fiber connecting Geneva and Lausanne, Switzerland.
The work of Gisin and others could usher in a new epoch of quantum information technology. Ironically, it is in part the prospect that superfast quantum computers will someday supply fantastic code-breaking power that drives Gisin and others to perfect their method of sheltering secret information. In the coming decades, Gisin contends, "e-commerce and e-government will be possible only if quantum communication widely exists." Much of the technological future, in other words, depends on the science of secrecy. - Herb Brody
Others in Quantum Cryptography:
- Nabil Amer (IBM): Quantum key exchange through optical fiber
- Richard Hughes (Los Alamos National Laboratory): Ground-to-satellite optical communications
- John Preskill (Caltech): Quantum information theory
- John Rarity (QinetiQ): Through-air quantum-key transmission
- Alexei Trifonov and Hoi-Kwong Lo (MagiQ Technologies): Quantum-cryptography hardware