Part I The Weatherman Is Not a Moron
Answer the following questions with 3-4 sentences each, and submit on Canvas:
Part II Forecasting Methods
Take a look at this page that reviews some of the different forecasting methods for weather:
http://www.ux1.eiu.edu/~cfjps/1400/forecasting.html
Answer the following questions:
Take a look at this page that shows NWP information from the North American Mesoscale (NAM) model:
http://weather.utah.edu/
The Weatherman Is Not a Moron by Nate Silver
From the inside, the National Centers for Environmental Prediction looked like a cross between a submarine command center and a Goldman Sachs trading floor. Twenty minutes outside Washington, it consisted mainly of sleek workstations manned by meteorologists working an armada of flat-screen monitors with maps of every conceivable type of weather data for every corner of the country.
The center is part of the National Weather Service, which Ulysses S. Grant created under the War Department. Even now, it remains true to those roots.
Many of its meteorologists have a background in the armed services, and virtually all speak with the precision of former officers.
They also seem to possess a high-frequency trader’s skill for managing risk. Expert meteorologists are forced to arbitrage a torrent of information to make their predictions as accurate as possible.
After receiving weather forecasts generated by supercomputers, they interpret and parse them by, among other things, comparing them with various conflicting models or what their colleagues are seeing in the field or what they already know about certain weather patterns — or, often, all of the above.
From station to station, I watched as meteorologists sifted through numbers and called other forecasters to compare notes, while trading instant messages about matters like whether the chance of rain in Tucson should be 10 or 20 percent.
As the information continued to flow in, I watched them draw on their maps with light pens, painstakingly adjusting the contours of temperature gradients produced by the computers — 15 miles westward over the Mississippi Delta or 30 miles northward into Lake Erie — in order to bring them one step closer to accuracy.
These meteorologists are dealing with a small fraction of the 2.5 quintillion bytes of information that, I.B.M. estimates, we generate each day. That’s the equivalent of the entire printed collection of the Library of Congress about three times per second.
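(A rough check of that arithmetic, assuming the commonly cited estimate that the Library of Congress's print collection comes to roughly 10 terabytes: 2.5 quintillion bytes per day divided by the 86,400 seconds in a day is about 2.9 × 10^13 bytes, or some 29 terabytes, per second, which is indeed about three such collections each second.)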
Google now accesses more than 20 billion Web pages a day; the processing speed of an iPad rivals that of last generation’s most powerful supercomputers. All that information ought to help us plan our lives and profitably predict the world’s course.
In 2008, Chris Anderson, the editor of Wired magazine, wrote optimistically of the era of Big Data. So voluminous were our databases and so powerful were our computers, he claimed, that there was no longer much need for theory, or even the scientific method. At the time, it was hard to disagree.
But if prediction is the truest way to put our information to the test, we have not scored well. In November 2007, economists in the Survey of Professional Forecasters — examining some 45,000 economic-data series — foresaw less than a 1-in-500 chance of an economic meltdown as severe as the one that would begin one month later.
Attempts to predict earthquakes have continued to envisage disasters that never happened and failed to prepare us for those, like the 2011 disaster in Japan, that did.
The one area in which our predictions are making extraordinary progress, however, is perhaps the most unlikely field. Jim Hoke, a director with 32 years’ experience at the National Weather Service, has heard all the jokes about weather forecasting, like Larry David’s jab on “Curb Your Enthusiasm” that weathermen merely forecast rain to keep everyone else off the golf course.
And to be sure, these slick-haired and/or short-skirted local weather forecasters are sometimes wrong. A study of TV meteorologists in Kansas City found that when they said there was a 100 percent chance of rain, it failed to rain at all one-third of the time.
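A study like that is, in effect, a calibration check: archive the forecasts, group them by the stated probability, and see how often it actually rained in each group. Here is a minimal sketch in Python of that bookkeeping, using made-up forecast/outcome pairs rather than the study's actual data:

    from collections import defaultdict

    # Hypothetical (stated probability of rain, did it rain?) records,
    # invented for illustration; chosen so the "100%" forecasts verify
    # only about two-thirds of the time, as in the Kansas City study.
    records = [(1.0, True), (1.0, True), (1.0, False),
               (0.2, False), (0.2, True), (0.2, False),
               (0.2, False), (0.2, False)]

    # Group outcomes by the probability the forecaster stated.
    buckets = defaultdict(list)
    for stated, rained in records:
        buckets[stated].append(rained)

    # A well-calibrated forecaster's "20%" bucket should rain about
    # 20 percent of the time, the "100%" bucket every time.
    for stated, outcomes in sorted(buckets.items()):
        observed = sum(outcomes) / len(outcomes)
        print(f"stated {stated:.0%}: rained {observed:.0%} of {len(outcomes)} times")

Run on real archives of forecasts and observations, this comparison of stated probability against observed frequency is exactly how the gap Silver describes would show up.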
But watching the local news is not the best way to assess the growing accuracy of forecasting (more on this later). It’s better to take the long view. In 1972, the service’s high-temperature forecast missed by an average of six degrees when made three days in advance. Now it’s down to three degrees. More stunning, in 1940, the chance of an American being killed by lightning was about 1 in 400,000. Today it’s 1 in 11 million.
This is partly because of changes in living patterns (more of our work is done indoors), but it’s also because better weather forecasts have helped us prepare.
Perhaps the most impressive gains have been in hurricane forecasting. Just 25 years ago, when the National Hurricane Center tried to predict where a hurricane would hit three days in advance of landfall, it missed by an average of 350 miles.
If Hurricane Isaac, which made its unpredictable path through the Gulf of Mexico last month, had occurred in the late 1980s, the center might have projected landfall anywhere from Houston to Tallahassee, canceling untold thousands of business deals, flights and picnics in between — and damaging its reputation when the hurricane zeroed in hundreds of miles away. Now the average miss is only about 100 miles.
Why are weather forecasters succeeding when other predictors fail? It’s because long ago they came to accept the imperfections in their knowledge. That helped them understand that even the most sophisticated computers, combing through seemingly limitless data, are painfully ill equipped to predict something as dynamic as weather all by themselves. So as fields like economics began relying more on Big Data, meteorologists recognized that data on its own isn’t enough.
The I.B.M. Bluefire supercomputer in the basement of the National Center for Atmospheric Research in Boulder, Colo., is so large that it essentially creates its own weather. The 77 trillion calculations that Bluefire makes each second, in its mass of blinking lights and coaxial cable, generate so much radiant energy that it requires a liquid cooling system.
The room where Bluefire resides is as drafty as a minor-league hockey rink, and it’s loud enough that hearing protection is suggested.
The 11 cabinets that hold the supercomputer are long and narrow and look like space-age port-a-potties. When I mentioned this to Rich Loft, the director of technology development for NCAR, he was not amused. To him, this computer represents the front line in an age-old struggle to predict our environment.
“You go back to Chaco Canyon or Stonehenge,” Loft said, “and people realized they could predict the shortest day of the year and the longest day — that the moon moved in predictable ways. But there are things an ancient man couldn’t predict: ambush from some kind of animal, a flash flood or a thunderstorm.”
For centuries, meteorologists relied on statistical tables based on historical averages — it rains about 45 percent of the time in London in March, for instance — to predict the weather.
But these statistics are useless on a day-to-day level. Jan. 12, 1888, was a relatively warm day on the Great Plains until the temperature dropped almost 30 degrees in a matter of hours and a blinding snowstorm hit. More than a hundred children died of hypothermia on their way home from school that day. Knowing the average temperature for a January day in Topeka wouldn’t have helped much in a case like that.
The holy grail of meteorology, scientists realized, was dynamic weather prediction — programs that simulate the physical systems that produce clouds and cold fronts, windy days in Chicago and the morning fog over San Francisco as they occur. Theoretically, the laws that govern the physics of the weather are fairly simple.
In 1814, the French mathematician Pierre-Simon Laplace postulated that the movement of every particle in the universe should be predictable as long as meteorologists could know the position of all those particles and how fast they were moving.
Unfortunately, the number of molecules in the earth’s atmosphere is perhaps on the order of 100 tredecillion, which is a 1 followed by 44 zeros. To make perfect weather predictions, we would not only have to account for all of those molecules, but we would also need to solve equations for all 100 tredecillion of them at once.
The most intuitive way to simplify the problem was to break the atmosphere down into a finite series of boxes, or what meteorologists variously refer to as a matrix, a lattice or a grid.
The earliest credible attempt at this, according to Loft, was made in 1916 by an English physicist named Lewis Fry Richardson, who wanted to determine the weather over northern Germany on May 20, 1910. This was not technically a prediction, because the date was some six years in the past, but Richardson treated it that way, and he had a lot of data: a series of observations of temperature, barometric pressures and wind speeds that had been gathered by the German government.
And as a pacifist serving in a volunteer ambulance unit in northern France, he also had a lot of time on his hands. So Richardson broke Germany down into a series of two-dimensional boxes, each measuring three degrees of latitude by three degrees of longitude. Then he went to work trying to solve the equations that governed the weather in each square and how they might affect weather in the adjacent ones.
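To make the grid idea concrete, here is a minimal Python sketch of the style of computation Richardson attempted by hand. Everything in it is an illustrative assumption: the grid size, the pressure values, and the update rule, a toy diffusion-style mixing step that stands in for the actual equations of atmospheric motion he was solving.

    import numpy as np

    def step(pressure):
        """Advance the grid one time step: each cell keeps half its
        value and takes the other half from the average of its four
        neighbors (edges wrap around). A toy stand-in for the physics."""
        up    = np.roll(pressure,  1, axis=0)
        down  = np.roll(pressure, -1, axis=0)
        left  = np.roll(pressure,  1, axis=1)
        right = np.roll(pressure, -1, axis=1)
        return 0.5 * pressure + 0.125 * (up + down + left + right)

    # A toy 5-by-5 grid standing in for Richardson's boxes of three
    # degrees of latitude by three degrees of longitude.
    grid = np.full((5, 5), 1013.0)   # uniform sea-level pressure, in hPa
    grid[2, 2] = 990.0               # a low-pressure anomaly in one box

    for _ in range(10):              # ten successive time steps
        grid = step(grid)

    print(grid.round(1))             # the anomaly spreads to neighboring boxes

Even this toy version shows why the job overwhelmed one man with pencil and paper: every box must be recomputed at every time step, and whatever happens in one box immediately feeds into its neighbors.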
Richardson’s experiment failed miserably. It “predicted” a dramatic rise in barometric pressure that hadn’t occurred and produced strange-looking weather patterns that didn’t resemble any seen in Germany before or since. Had he made a computational error?
Were his equations buggy? It was hard to say. Even the most devoted weather nerds weren’t eager to solve differential equations for months on end to double-check his work for one day in one country six years in the past.