Wednesday, January 12, 2011

Algorithms Take Control of Wall Street

Today Wall Street is ruled by thousands of little algorithms, and they've created a new market—volatile, unpredictable, and impossible for humans to comprehend.
Photo: Mauricio Alejo


Last spring, Dow Jones launched a new service called Lexicon, which sends real-time financial news to professional investors. This in itself is not surprising. The company behind The Wall Street Journal and Dow Jones Newswires made its name by publishing the kind of news that moves the stock market. But many of the professional investors subscribing to Lexicon aren’t human—they’re algorithms, the lines of code that govern an increasing amount of global trading activity—and they don’t read news the way humans do. They don’t need their information delivered in the form of a story or even in sentences. They just want data—the hard, actionable information that those words represent.

Lexicon packages the news in a way that its robo-clients can understand. It scans every Dow Jones story in real time, looking for textual clues that might indicate how investors should feel about a stock. It then sends that information in machine-readable form to its algorithmic subscribers, which can parse it further, using the resulting data to inform their own investing decisions. Lexicon has helped automate the process of reading the news, drawing insight from it, and using that information to buy or sell a stock. The machines aren’t there just to crunch numbers anymore; they’re now making the decisions.
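Dow Jones has not published Lexicon's internals, but a toy sketch suggests the general shape of such a pipeline: scan a story for sentiment-bearing words and emit a machine-readable signal that a trading algorithm could consume. The word lists and output fields below are invented for illustration.

    import json

    # Hypothetical sentiment tagger: the real Lexicon is far more
    # sophisticated; these word lists and fields are placeholders.
    POSITIVE = {"beats", "surges", "upgrade", "record", "growth"}
    NEGATIVE = {"misses", "plunges", "downgrade", "lawsuit", "recall"}

    def score_story(ticker, headline, body):
        words = (headline + " " + body).lower().split()
        score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
        return json.dumps({
            "ticker": ticker,
            "sentiment": score,                  # >0 bullish, <0 bearish
            "confidence": min(1.0, abs(score) / 5),
        })

    print(score_story("AAPL", "Apple beats estimates", "Record iPhone growth..."))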

Sidebar: Music

An app that jams with you.

A good session player is hard to find, but ujam is always ready to rock. The Web app doubles as a studio band and a recording studio. It analyzes a melody and then produces sophisticated harmonies, bass lines, drum tracks, horn parts, and more.

Before ujam’s AI can lay down accompaniment, it must figure out which notes the user is singing or playing. Once it recognizes them, the algorithm searches for chords to match the tune, using a mix of statistical techniques and hardwired musical rules. The stats are part of the software’s AI and can generate myriad chord progressions. The rules-based module then uses its knowledge of Western musical tropes to narrow the chord options to a single selection.
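ujam's engine is proprietary, but the two-stage idea described above can be sketched in a few lines: a statistical table proposes candidate chords for each melody note, and a hardwired rule (here, favoring harmonic continuity) narrows them to one. The note names, probabilities, and rule are simplified stand-ins.

    # Stage 1 (statistical): chords that commonly harmonize each melody
    # note, with rough probabilities. All values invented for illustration.
    CANDIDATES = {
        "C": [("C", 0.5), ("Am", 0.3), ("F", 0.2)],
        "E": [("C", 0.6), ("Em", 0.4)],
        "G": [("G", 0.5), ("C", 0.3), ("Em", 0.2)],
    }

    # Stage 2 (rules): one hardwired Western-harmony rule as a stand-in.
    def pick_chord(note, previous_chord):
        options = CANDIDATES.get(note, [("C", 1.0)])
        for chord, _ in options:
            if chord == previous_chord:   # rule: favor harmonic continuity
                return chord
        return options[0][0]              # else take the statistical favorite

    melody = ["C", "E", "G", "E"]
    chords, prev = [], None
    for note in melody:
        prev = pick_chord(note, prev)
        chords.append(prev)
    print(chords)  # ['C', 'C', 'C', 'C']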

The service is still in alpha, but it has attracted 2,500 testers who want to use the AI to explore their musical creativity—and they have the recordings to prove it. As ujam gathers more data on users’ preferences and musical tastes, programmers feed this info back into the system, improving its on-the-fly performance. In this respect at least, ujam is like a human: It gets better with practice.
—Jon Stokes

That increasingly describes the entire financial system. Over the past decade, algorithmic trading has overtaken the industry. From the single desk of a startup hedge fund to the gilded halls of Goldman Sachs, computer code is now responsible for most of the activity on Wall Street. (By some estimates, computer-aided high-frequency trading now accounts for about 70 percent of total trade volume.) Increasingly, the market’s ups and downs are determined not by traders competing to see who has the best information or sharpest business mind but by algorithms feverishly scanning for faint signals of potential profit.

Algorithms have become so ingrained in our financial system that the markets could not operate without them. At the most basic level, computers help prospective buyers and sellers of stocks find one another—without the bother of screaming middlemen or their commissions. High-frequency traders, sometimes called flash traders, buy and sell thousands of shares every second, executing deals so quickly, and on such a massive scale, that they can win or lose a fortune if the price of a stock fluctuates by even a few cents. Other algorithms are slower but more sophisticated, analyzing earnings statements, stock performance, and newsfeeds to find attractive investments that others may have missed. The result is a system that is more efficient, faster, and smarter than any human.
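That most basic function, matching buyers with sellers, reduces to a simple data structure: a limit order book that pairs the highest bid with the lowest ask. Below is a minimal sketch; real matching engines also handle time priority, partial fills, and many order types.

    import heapq

    # Toy limit order book: pair the best bid with the best ask.
    class OrderBook:
        def __init__(self):
            self.bids = []   # max-heap via negated price: highest bid first
            self.asks = []   # min-heap: lowest ask first

        def add(self, side, price, shares):
            if side == "buy":
                heapq.heappush(self.bids, (-price, shares))
            else:
                heapq.heappush(self.asks, (price, shares))
            self.match()

        def match(self):
            while self.bids and self.asks and -self.bids[0][0] >= self.asks[0][0]:
                bid, ask = heapq.heappop(self.bids), heapq.heappop(self.asks)
                traded = min(bid[1], ask[1])
                print(f"trade: {traded} shares @ {ask[0]:.2f}")
                if bid[1] > traded:   # push back any unfilled remainder
                    heapq.heappush(self.bids, (bid[0], bid[1] - traded))
                if ask[1] > traded:
                    heapq.heappush(self.asks, (ask[0], ask[1] - traded))

    book = OrderBook()
    book.add("sell", 10.01, 300)
    book.add("buy", 10.02, 200)   # crosses the spread: 200 shares trade at 10.01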

It is also harder to understand, predict, and regulate. Algorithms, like most human traders, tend to follow a fairly simple set of rules. But they also respond instantly to ever-shifting market conditions, taking into account thousands or millions of data points every second. And each trade produces new data points, creating a kind of conversation in which machines respond in rapid-fire succession to one another’s actions. At its best, this system represents an efficient and intelligent capital allocation machine, a market ruled by precision and mathematics rather than emotion and fallible judgment.

But at its worst, it is an inscrutable and uncontrollable feedback loop. Individually, these algorithms may be easy to control, but when they interact they can create unexpected behaviors—a conversation that can overwhelm the very system the algorithms were built to navigate. On May 6, 2010, the Dow Jones Industrial Average inexplicably experienced a series of drops that came to be known as the flash crash, at one point shedding some 573 points in five minutes. Less than five months later, Progress Energy, a North Carolina utility, watched helplessly as its share price fell 90 percent. Also in late September, Apple shares dropped nearly 4 percent in just 30 seconds, before recovering a few minutes later.

These sudden drops are now routine, and it’s often impossible to determine what caused them. But most observers pin the blame on the legions of powerful, superfast trading algorithms—simple instructions that interact to create a market that is incomprehensible to the human mind and impossible to predict.

For better or worse, the computers are now in control.

Ironically enough, the notion of using algorithms as trading tools was born as a way of empowering traders. Before the age of electronic trading, large institutional investors used their size and connections to wrangle better terms from the human middlemen that executed buy and sell orders. “We were not getting the same access to capital,” says Harold Bradley, former head of American Century Ventures, a division of a midsize Kansas City investment firm. “So I had to change the rules.”

Bradley was among the first traders to explore the power of algorithms in the late ’90s, creating approaches to investing that favored brains over access. It took him nearly three years to build his stock-scoring program. First he created a neural network, painstakingly training it to emulate his thinking—to recognize the combination of factors that his instincts and experience told him were indicative of a significant move in a stock’s price.
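The article does not describe Bradley's network itself, but the general idea, training a small neural net to reproduce an expert's judgment about which factor combinations precede a big price move, can be sketched with a modern library. Everything below, from the seven inputs to the labels, is placeholder data, not his actual system.

    import numpy as np
    from sklearn.neural_network import MLPClassifier

    # Placeholder data: 7 factor readings per stock, with labels standing in
    # for a human expert's "big move coming" judgments.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 7))
    y = (X @ rng.normal(size=7) > 1).astype(int)

    # A small feedforward net learns to imitate those judgments.
    net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
    net.fit(X, y)
    print(net.predict(X[:5]))   # the net's call on five stocks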

But Bradley didn’t just want to build a machine that would think the same way he did. He wanted his algorithmically derived system to look at stocks in a fundamentally different—and smarter—way than humans ever could. So in 2000, Bradley assembled a team of engineers to determine which characteristics were most predictive of a stock’s performance. They identified a number of variables—traditional measurements like earnings growth as well as more technical factors. Altogether, Bradley came up with seven key factors, including the judgment of his neural network, that he thought might be useful in predicting a portfolio’s performance.

He then tried to determine the proper weighting of each characteristic, using a publicly available program from UC Berkeley called the differential evolution optimizer. Bradley started with random weightings—perhaps earnings growth would be given twice the weight of revenue growth, for example. Then the program looked at the best-performing stocks at a given point in time. It then picked 10 of those stocks at random and looked at historical data to see how well the weights predicted their actual performance. Next the computer would go back and do the same thing all over again—with a slightly different starting date or a different starting group of stocks. For each weighting, the test would be run thousands of times to get a thorough sense of how those stocks performed. Then the weighting would be changed and the whole process would run all over again. Eventually, Bradley’s team collected performance data for thousands of weightings.
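The backtest step lends itself to a short sketch: pick a random historical date, rank stocks by the weighted factors, and record how the top picks actually performed. The array shapes and factor data below are assumptions for illustration, not Bradley's real inputs.

    import numpy as np

    # Loose sketch of scoring one candidate weighting against history.
    def fitness(weights, factors, returns, n_trials=1000, seed=0):
        """factors: (dates, stocks, n_factors); returns: (dates, stocks)."""
        rng = np.random.default_rng(seed)
        total = 0.0
        for _ in range(n_trials):
            t = rng.integers(len(factors))     # random starting date
            scores = factors[t] @ weights      # composite score per stock
            picks = np.argsort(scores)[-10:]   # ten highest-ranked stocks
            total += returns[t, picks].mean()  # their subsequent real return
        return total / n_trials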

Once this process was complete, Bradley collected the 10 best-performing weightings and ran them once again through the differential evolution optimizer. The optimizer then mated those weightings—combining them to create 100 or so offspring weightings. Those weightings were tested, and the 10 best were mated again to produce another 100 third-generation offspring. (The program also introduced occasional mutations and randomness, on the off chance that one of them might produce an accidental genius.) After dozens of generations, Bradley’s team discovered ideal weightings. (In 2007, Bradley left to manage the Kauffman Foundation’s $1.8 billion investment fund and says he can no longer discuss his program’s performance.)
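That generational loop, keep the ten best, mate them into roughly 100 offspring, mutate, and repeat, is the heart of differential evolution. A loose sketch follows; classic DE also includes a crossover step, and SciPy ships a full implementation as scipy.optimize.differential_evolution.

    import numpy as np

    # Minimal sketch of the evolutionary loop described above.
    def evolve(population, fitness_fn, generations=50, mutation=0.5, seed=0):
        rng = np.random.default_rng(seed)
        for _ in range(generations):
            parents = sorted(population, key=fitness_fn, reverse=True)[:10]
            offspring = []
            for _ in range(100):
                a, b, c = rng.choice(10, size=3, replace=False)
                # child = one parent nudged by the difference of two others
                offspring.append(parents[a] + mutation * (parents[b] - parents[c]))
            population = parents + offspring
        return max(population, key=fitness_fn)

    # Toy usage with a made-up fitness that favors the first factor.
    start = [np.random.default_rng(i).random(7) for i in range(30)]
    best = evolve(start, fitness_fn=lambda w: w[0] - np.abs(w[1:]).sum())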

Bradley’s effort was just the beginning. Before long, investors and portfolio managers began to tap the world’s premier math, science, and engineering schools for talent. These academics brought to trading desks sophisticated knowledge of AI methods from computer science and statistics.

And they started applying those methods to every aspect of the financial industry. Some built algorithms to perform the familiar function of discovering, buying, and selling individual stocks (a practice known as proprietary, or “prop,” trading). Others devised algorithms to help brokers execute large trades—massive buy or sell orders that take a while to go through and that become vulnerable to price manipulation if other traders sniff them out before they’re completed. These algorithms break up and optimize those orders to conceal them from the rest of the market. (This, confusingly enough, is known as algorithmic trading.) Still others are used to crack those codes, to discover the massive orders that other quants are trying to conceal. (This is called predatory trading.)
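Order-slicing logic of this kind can be suggested in a few lines: break a large parent order into small, randomly sized and randomly timed child orders so the full size never shows. The sizes and delays below are invented; real execution algorithms key off volume curves, venue liquidity, and much more.

    import random

    # Hypothetical sketch: split one large parent order into randomized
    # child orders so the full size is never visible to the market.
    def slice_order(total_shares, min_child=100, max_child=500,
                    min_delay_s=1.0, max_delay_s=30.0):
        """Yield (shares, delay_seconds) child orders summing to total_shares."""
        remaining = total_shares
        while remaining > 0:
            shares = min(remaining, random.randint(min_child, max_child))
            delay = random.uniform(min_delay_s, max_delay_s)
            yield shares, delay
            remaining -= shares

    # Example: a 100,000-share buy becomes hundreds of small, irregular orders.
    children = list(slice_order(100_000))
    print(len(children), "child orders")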
