Growing money from algorithms
- 03 December 1994
- From New Scientist Print Edition.
- Vincent Kiernan
MENTION the term "genetic algorithm" to a select band of Wall Street pension fund managers or some of the top brass in the US military, and you are not likely to get much of a response. But this is not because of ignorance. It is the same uneasy silence you might get if you had asked people in a similar line of business to spill the beans on insider dealing or some top secret project.
So what is it about this method of designing computer programs - which loosely follows the biological rules of natural selection, sexual reproduction and mutation - that prompts such a response? Money is one answer.
First Quadrant, an investment firm in Pasadena, California, relies on genetic algorithms to help it manage $5 billion worth of investments. Since 1993, when it began to use the technique for commercial systems, the programs have earned the company $25 million. The software "allows us to refine ideas we know are sensible in a general sense," says David Leinweber, the company's director of research.
The experience of First Quadrant is just one instance that shows how genetic algorithms have made it out of the laboratory and into the commercial world. Volker Nissen of the University of Göttingen in Germany catalogued 34 different genetic algorithm projects in corporate management between 1986 and 1993. They included such practical applications as planning production in the chemicals industry, evaluating applicants for credit cards, and predicting bankruptcies. Genetic algorithms are "the tools, the mental swords, used by those who want to do business in today's financial markets", writes Richard Bauer, a business professor at St Mary's University of San Antonio in Texas, in his recent book Genetic Algorithms and Investment Strategies.
The principle behind genetic algorithms is that populations of relatively simple programs should be allowed to interact and alter each other in order to produce programs that are particularly good at achieving some predetermined goal. The way that these "well adapted" populations are formed is through a process similar to natural selection: chance matings allow sections of code to be exchanged, and mutations subtly alter the details of the code. Some of these recombinations and mutations may produce fragments of code that are particularly good at carrying out certain tasks.
These individuals are the most likely to go on and reproduce. They may also compete with weaker individuals for tasks, eventually squeezing out these weaker entities.
From a programming standpoint, genetic algorithms are attractive because they do away with the need to tease out complex rules for deciding when to buy or sell shares, for example, or when to fire a missile. Instead, the software embodies only general rules to define the overall aim of the program. It is left to "decide" for itself which fragments of code are best at achieving this goal. The genetic algorithms used by First Quadrant, for example, must be good at identifying money-making opportunities in the US. This takes much of the human judgment out of the programming process. As Bauer puts it: "You're not constrained by your own preconceptions."
The founding father of genetic algorithm research is John Holland, who realised in the 1960s, while carrying out research at the University of Michigan, that similar techniques to those observed in Darwinian evolution could be used to solve problems very different from surviving on planet Earth. Holland likened the kind of strings of 1s and 0s that you get in computer programs to a simple genetic code. The idea is that genetic algorithms shape this code just as environmental forces and natural selection shape the genetic make-up of living organisms.
Right combination
Software designed around the principles of genetic algorithms comprises a large number of fragments of computer code and a number of logical operators that encourage these code strings to combine, change or be destroyed. The process may start with many thousands of strings, which are often generated randomly rather than being written to perform a specific task.
These strings are first subjected to a fitness test that determines how well a particular string or rule solves the desired problem. A rule that states "buy low, sell high", for example, will make more money than one that says "buy low, sell low". Stock traders using genetic algorithms can decide from the results of a fitness test how much money each string of code made. The more successful the code is at making money, the "fitter" it is. These fit solutions are kept, while the strings of code that are least good at making money are eliminated from memory. This is rather like the extinction of a species.
The successful strings of code can go on to combine - the software equivalent of reproduction. If two strings in a population were 111111 and 000000, for example, the reproduction process of those two parents might produce offspring of 110000 and 000111. The fitness test can then be applied to this new population to see if any of the new combinations are even better at the desired task. There is also a process that simulates mutation, in which external factors lead to random changes in the code. Thus, a string of 111111 might mutate to 110111. Once again the fitness test will identify the fragment best suited to the task in hand.
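In a modern language such as Python, the reproduction and mutation steps might be sketched roughly as follows. The six-bit strings come from the example above; the crossover point and the mutation rate are arbitrary choices for illustration.

```python
import random

def crossover(parent_a, parent_b):
    """Single-point crossover: cut both parents at one point and swap their tails."""
    point = random.randint(1, len(parent_a) - 1)
    return (parent_a[:point] + parent_b[point:],
            parent_b[:point] + parent_a[point:])

def mutate(individual, rate=0.05):
    """Flip each bit with a small probability, e.g. 111111 -> 110111."""
    return "".join(("1" if bit == "0" else "0") if random.random() < rate else bit
                   for bit in individual)

print(crossover("111111", "000000"))   # e.g. ('110000', '001111')
print(mutate("111111"))                # e.g. '110111'
```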
As these processes are repeated for each generation, the best strings in each population become better and better solutions. Eventually, after many generations, the algorithm produces a solution that is deemed sufficiently close to the ideal.
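Putting the pieces together, a toy version of the whole cycle - random starting population, fitness test, selection, reproduction and mutation - might look like the sketch below. The target string, population size and mutation rate are invented for illustration and have nothing to do with the commercial systems described here.

```python
import random

TARGET = "1111100000"      # an arbitrary "ideal" string standing in for the goal
POP_SIZE = 100
GENERATIONS = 50
MUTATION_RATE = 0.02

def fitness(individual):
    """Count matching bits: the closer to the target, the 'fitter' the string."""
    return sum(a == b for a, b in zip(individual, TARGET))

def mutate(individual):
    """Occasionally flip a bit, simulating random mutation."""
    return "".join(("1" if bit == "0" else "0")
                   if random.random() < MUTATION_RATE else bit
                   for bit in individual)

# Start, as described above, from a randomly generated population.
population = ["".join(random.choice("01") for _ in TARGET) for _ in range(POP_SIZE)]

for generation in range(GENERATIONS):
    # Selection: keep the fittest half and eliminate the rest ("extinction").
    population.sort(key=fitness, reverse=True)
    survivors = population[:POP_SIZE // 2]

    # Reproduction: pairs of survivors exchange sections of code, and the
    # offspring are mutated before joining the next generation.
    offspring = []
    while len(survivors) + len(offspring) < POP_SIZE:
        parent_a, parent_b = random.sample(survivors, 2)
        point = random.randint(1, len(TARGET) - 1)
        offspring.append(mutate(parent_a[:point] + parent_b[point:]))
    population = survivors + offspring

best = max(population, key=fitness)
print(best, fitness(best))   # after enough generations, close to the target
```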
At first, this programming technique was treated as a research novelty, to be tested on computer science departments' chess games and suchlike. Biologists also took a keen interest in the technique, and some used it to simulate populations of single-cell organisms. Over the past ten to twenty years, people have tried to apply the technique to a broader variety of problems. But it is only in the past five years that computing power has become cheap and plentiful enough for genetic algorithms to be considered commercially viable.
"The genetic algorithm provided a subtle but valuable boost to our models," says Rob Arnott, First Quadrant's president. His company manages pension funds using an investment strategy called tactical asset allocation, which exploits the fact that the same asset - a particular currency, for example - may be available more cheaply in one market than in another somewhere else in the world. There is money to be made for anyone who can predict when and where to buy and sell a currency.
The starting point for First Quadrant's genetic algorithms was the company's computerised model based on a set of rules for finding such money-making opportunities in the US economy, known in the trade as "market inefficiencies". First Quadrant then used genetic algorithms to refine the model, as it worked its way through historical economic data. This boosted its prediction capabilities by 50 percent. The company was so impressed that it used genetic algorithms to construct a model governing tactical asset allocation in 17 different countries.
Bauer has used similar techniques for investing money in the stock markets, which have been taken up by some Wall Street firms. The aim is to buy when the price of a particular security is at a minimum and sell when its price peaks. In the financial world, this is known as "market timing", and even the best market analysts find it horribly difficult to gauge because so many factors can affect the movement of a stock's price.
Mixed results
Bauer's original population of algorithms was made up of text strings stating investment rules, such as: "IF (the three-month change in industrial production is in the lower two-thirds of its historical range and the three-month change in labour costs is in the upper two-thirds of its range) OR (the three-month change in the price of lead scrap is in the bottom third of its range) THEN invest in stocks. If not, then invest in Treasury bills." He tested the "fitness" of the rules by applying each of them to stock market data from the period 1984 to 1988. The rules were allowed to reproduce and mutate, and the final rules were tested against data from 1989 to 1991.
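Rules of this kind are easy to encode and score mechanically. The sketch below shows one hypothetical way such a rule and its fitness test could be written; the thresholds, indicator readings and returns are invented for illustration and are not taken from Bauer's study.

```python
# A hypothetical encoding of one Bauer-style timing rule and its fitness test.
# The thresholds, indicator readings and returns are invented for illustration.

def rule(month):
    """Return which asset to hold, following an IF ... THEN ... ELSE rule.

    Each indicator is expressed as its position (0 to 1) within its
    historical range, so 'lower two-thirds' means a value below 0.67."""
    if ((month["industrial_production_3m"] <= 0.67      # lower two-thirds
         and month["labour_costs_3m"] >= 0.33)           # upper two-thirds
            or month["lead_scrap_3m"] <= 0.33):          # bottom third
        return "stocks"
    return "t-bills"

def fitness(rule, history):
    """Fitness = wealth accumulated by following the rule month by month."""
    wealth = 1.0
    for month in history:
        wealth *= 1.0 + month["returns"][rule(month)]
    return wealth

history = [
    {"industrial_production_3m": 0.40, "labour_costs_3m": 0.70, "lead_scrap_3m": 0.20,
     "returns": {"stocks": 0.03, "t-bills": 0.005}},
    {"industrial_production_3m": 0.90, "labour_costs_3m": 0.10, "lead_scrap_3m": 0.80,
     "returns": {"stocks": -0.02, "t-bills": 0.004}},
]

print(fitness(rule, history))   # higher is fitter
```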
Bauer's results were mixed. In one case, for example, an investor would have made more money during the three-year test period by simply holding onto the original portfolio than by following the trading rules developed by the genetic algorithm. And the rules did particularly badly in the third year. But in the real world, Bauer argues, this would not be a problem: professional money managers would be responsible for updating the genetic rules.
Obviously, applying genetic algorithms in this way requires sophisticated computer hardware and software. Sensing a potential market, computer companies are beginning to offer such products. Earlier this year, for example, the Massachusetts company Thinking Machines, which specialises in computers designed for parallel processing, unveiled a package of four computer programs collectively known as Darwin, which are designed to make it easier to predict trends from information extracted from databases. This could be used, for example, to identify consumers who are likely to buy from a catalogue sent to them by post, or to see whether doctors are influenced by marketing campaigns when they come to decide which drugs to prescribe.
Genetic algorithms are making inroads into military technology as well. Take, for example, the problem of locating missiles accurately enough to shoot them down. To track the incoming enemy missiles, the Pentagon plans to use infrared sensors that detect the heat of a missile body or its hot plume of exhaust gas. Passive sensors of this kind have big advantages over active systems such as radar, which broadcast a radar beam and interpret the echoes that get reflected back: they can't be jammed, and they use a lot less power. But passive sensors have serious drawbacks, too: a passive infrared sensor can tell the direction in which a target lies, but not how far away it is.
This is not a problem if there is just one missile. To work out exactly where it is, you can use two or more infrared sensors placed some distance apart. Each will see the incoming missile at a different angle, and using simple trigonometry you can work out how far away it is.
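The arithmetic for a single target is straightforward. In the sketch below, two sensors sit on a common baseline and each reports only a bearing; intersecting the two lines of sight recovers the target's position, and hence its range. The sensor positions and angles are made up for illustration.

```python
import math

def locate(ax, bx, bearing_a, bearing_b):
    """Intersect two lines of sight from sensors at (ax, 0) and (bx, 0).

    Bearings are angles, in radians, measured from the baseline joining
    the sensors; the crossing point gives the target's position."""
    ta, tb = math.tan(bearing_a), math.tan(bearing_b)
    x = (bx * tb - ax * ta) / (tb - ta)
    y = ta * (x - ax)
    return x, y

# Invented example: sensors 10 km apart, target actually at (4 km, 3 km).
bearing_a = math.atan2(3.0, 4.0 - 0.0)     # what the first sensor measures
bearing_b = math.atan2(3.0, 4.0 - 10.0)    # what the second sensor measures
print(locate(0.0, 10.0, bearing_a, bearing_b))   # roughly (4.0, 3.0)
```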
But what if there are several incoming targets, and several sensors? Figuring out which images in the tracking system correspond to which incoming missiles can be a daunting task. If one of the images is assigned to the wrong target, the system will report a "ghost" missile where none exists. And the number of possibilities is huge: with 10 incoming missiles, for example, there are 10 factorial - more than 3.6 million - ways to pair one sensor's tracking reports with another's, far too many to be examined before the missiles strike.
"You need a way to narrow down your search," says John Angus of the Claremont Graduate School in Claremont, California. Hughes Aircraft Company asked Angus for help, and he turned to genetic algorithms. As usual, Angus's system starts out with a large population of algorithms - in this case, thousands of different combinations of sensor reports and missiles. Each of these combinations is tested to see whether they cross 10 distinct points in the sky. The combinations that come closest to crossing in 10 points are rated as the fittest and become the basis for the next generation. In tests of the technique on a desktop workstation, Angus's students found that the algorithm could locate 10 missiles within 15 seconds. A powerful military computer could probably work it out within 1 or 2 seconds, more than enough time to aim defensive missiles in the right direction.
Charles Darwin's ideas of evolutionary fitness shook the intellectual and social establishment of his day. But even at his most visionary, could Darwin ever have imagined that his theories would evolve so far?