What I'm Up To

Since 2008, Nate Silver has achieved a degree of fame for his predictions of U.S. presidential elections.  After he correctly called the 2008 results in 49 of 50 states, his blog FiveThirtyEight was picked up by The New York Times.  In 2012, his model successfully predicted the outcome in all fifty states.  Now Mr. Silver is the best-selling author of The Signal and the Noise, and you can read all about him virtually anywhere on the web.

Unfortunately (from my point of view), most of the media coverage focused on the controversy pitting Silver's projections against those of political pundits, whose predictions were based on no methodology whatsoever.  There was little discussion of how his models really work.  All he's doing is interpreting the data, making an attempt to correct for readily identifiable biases in the way polls are conducted, and doing the math.  It should be noted that there are a few other web-published authors using methods similar to Mr. Silver's, but none with the widespread name recognition he has achieved outside the Beltway.

So, what am I working on that has to do with FiveThirtyEight's presidential election models?  Silver's predictions are the most widely known use of Monte Carlo models that I can think of, even if he doesn't explicitly refer to them as such.

In order to explain what is meant by a "Monte Carlo" model, we need to do some time traveling.  Set the Wayback Machine for the middle of World War II, in Los Alamos, New Mexico.  The Manhattan Project had encountered a major difficulty: the "Thin Man" gun-type bomb design worked for uranium, but not for plutonium, which was becoming available in much larger quantities thanks to major investments at the Hanford plutonium production facility in southeastern Washington state, on the Columbia River.  Uranium enrichment remained a slow, expensive process.

The project therefore took a sharp turn, with a new design that used shaped charges to create an implosive blast wave around a plutonium core, compressing it to the supercritical density needed to set off the fission chain reaction.  As it happens, one of the foremost experts on the mathematics of shaped charges was John von Neumann, one of the original faculty members of the Institute for Advanced Study in Princeton, New Jersey, alongside Albert Einstein.

Von Neumann's implosion design worked in the "Trinity" test; the resulting "Fat Man" fission bomb was dropped by the United States on Nagasaki, Japan, three days after the uranium-based "Little Boy" was dropped on Hiroshima, and World War II ended in August 1945.

As Bill Cosby once said, "I told you that story so I could tell you this one."


Photo Credit: Paul R. Halmos Collection, Mathematical Association of America

Stanislaw Ulam, another scientist on the Manhattan Project and a close friend of von Neumann, developed a case of acute encephalitis after the war.  While recovering from the illness in early 1946, he spent a great deal of time playing solitaire.  Being a mathematician, he wanted to estimate the probability of a successful outcome.  It occurred to him that he could closely approximate that probability just by playing a sufficiently large number of games; eventually, the percentage of successful outcomes would converge on the correct probability.  When von Neumann came to visit during his recuperation, Ulam shared this idea with his friend.
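
To make Ulam's insight concrete, here is a minimal sketch in Python of the same idea applied to a toy, solitaire-like game of my own invention (not the solitaire Ulam actually played): shuffle a deck and call it a win if no card lands back in its original position.  Play enough games and the observed win rate converges on the true probability, which for this particular game is about 0.368.

```python
import random

def play_once(deck_size=52):
    """Play one round of the toy game: shuffle and check for a 'win'."""
    deck = list(range(deck_size))
    random.shuffle(deck)
    # A win means no card landed back in the slot it started in.
    return all(card != slot for slot, card in enumerate(deck))

def estimate_win_probability(num_games=100_000):
    """Estimate the win probability as the fraction of games won."""
    wins = sum(play_once() for _ in range(num_games))
    return wins / num_games

if __name__ == "__main__":
    print(f"Estimated win probability: {estimate_win_probability():.4f}")
```

Nothing in that loop cares what the game is; any procedure with a yes-or-no outcome could be dropped in.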

Von Neumann was not just an expert in shaped charges; he was a genius, an expert in a wide variety of fields, known as a polymath.  And since before the war ended, he had been involved in the EDVAC project, a computer to be built by the University of Pennsylvania's Moore School of Electrical Engineering for the U.S. Army's Ballistic Research Laboratory.  The EDVAC was planned as a successor to ENIAC, the first electronic general-purpose computer.  While ENIAC had to be physically reconfigured with switches and cables for each new kind of task, EDVAC was a significant step toward modern computers in that it could be reprogrammed without modifying the hardware: it was the first computer designed from the start to run a stored software program.

Since von Neumann wrote the paper describing the stored-program concept, it came to be known as "the von Neumann architecture," despite the fact that it was a team effort among von Neumann, Herman Goldstine, and several others.  Alan Turing should also be given some credit, since he had conceived of the idea as a hypothetical several years earlier.  Controversy aside, however, this design still forms the basis of today's computers.

Ulam realized that his method for approximating the probability of a solitaire outcome -- running a large number of randomized trials and aggregating the results -- could be generalized to a wide variety of problems, including some of the most vexing problems in the design of thermonuclear (fusion) weapons, if it were practical to do the calculations in a reasonable amount of time.  With the emergence of digital computers, running those computations at high speed had become possible.
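
Here is what that generalization looks like in code, again as a minimal sketch with names and a deliberately trivial model of my own choosing: the driver runs any model function a large number of times and summarizes the distribution of outcomes.

```python
import random
import statistics

def monte_carlo(model, trials=100_000):
    """Run `model` many times and summarize the resulting distribution."""
    outcomes = sorted(model() for _ in range(trials))
    return {
        "mean": statistics.mean(outcomes),
        "10th percentile": outcomes[int(0.10 * trials)],
        "90th percentile": outcomes[int(0.90 * trials)],
    }

def example_model():
    # A stand-in for a real problem: combine two uncertain quantities,
    # each drawn from its own distribution.
    setup_time = random.uniform(2.0, 6.0)
    run_time = random.gauss(10.0, 2.0)
    return setup_time + run_time

if __name__ == "__main__":
    print(monte_carlo(example_model))
```

Swap in a model of neutron behavior, a weather system, or an election, and the driver and the summary stay exactly the same.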

Because the method deliberately introduces elements of randomness into an otherwise well-defined problem, the Los Alamos team dubbed it the "Monte Carlo" method, after the famous Monaco casino where Ulam's uncle liked to gamble.

The method has since become widespread in physics, weather forecasting, and a variety of other scientific disciplines.

I am working on a (brace yourself for the buzzwords, here they come) generalized, reusable, open-source tool for running arbitrarily large, stochastic, dynamic simulations.  In simpler terms, a tool for running big Monte Carlo models.  Hopefully, one that is relatively easy to use and that helps regular human beings interpret the results.

I believe that this method is underused, partly because people think that it requires advanced mathematics, and partly because it requires tools that aren't readily available to most people.

I don't think the first part is true; if used for problems that don't involve nuclear physics, the math is more tedious than difficult.  Numeric function libraries and exceedingly fast CPUs do most of the work for you.  All you have to do is understand your own problem domain.

As for the second part: if Monte Carlo made sense in 1946, it makes far more sense in 2012, because the iPhone 4 in my pocket has about 800,000 times the computational power that ENIAC had.  Inexpensive desktop computers easily outperform what used to be called a "supercomputer."  There is simply no excuse for not spending a few minutes of computing power on a probabilistic analysis as you plan your next project or decide on a major investment.  All we lack is a tool to make it easy.
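
To give a sense of what those few minutes buy you, here is a hedged sketch of the sort of quick analysis I have in mind; the project plan and every number in it are invented for illustration, and it assumes the NumPy library is installed.  Two tasks, A and B, run in parallel, a third task, C, starts when both are done, and each duration is a three-point guess (optimistic, most likely, pessimistic), so the finish date comes out as a distribution rather than a single number.

```python
import numpy as np

rng = np.random.default_rng()
trials = 100_000

# Optimistic / most likely / pessimistic duration guesses, in days
# (all invented for the sake of the example).
task_a = rng.triangular(3, 5, 10, trials)
task_b = rng.triangular(4, 6, 14, trials)
task_c = rng.triangular(2, 3, 6, trials)

# Tasks A and B run in parallel; C starts only when both have finished.
finish = np.maximum(task_a, task_b) + task_c

print(f"Median finish time:         {np.median(finish):.1f} days")
print(f"90% chance of finishing by: {np.percentile(finish, 90):.1f} days")
print(f"Odds of beating 10 days:    {np.mean(finish < 10):.0%}")
```

It runs in a fraction of a second, and the percentile figures answer questions that a single-point estimate never could.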

In my next post, I'll dive into a fuller example that uses the Monte Carlo method to solve a simple problem, so you can get a feel for it, and then I'll explain in rough terms how Silver's presidential election model worked.