Two water molecules are at the bottom of a waterfall.
What does their position at the bottom tell you about where they were at the top? In classical physics, the physics of Newton and Laplace, the idea was that if you knew where something started out and what direction it was going, you could say with near certainty where it would end up. Comets and planets are like that. If you want to know the position of Halley's Comet 25 years from now, that's a solvable problem.
Comets, however, don't spend their lives jostling with trillions of other comets, getting pushed this way and sucked that way the way water molecules do. A scientist banging out simulator code in a dark room can forget about the gazillions of minuscule pushes and pulls from, say, galaxies far, far away, and only worry about the few objects Halley passes close by: the sun, the planets, and so on.
The life of a water molecule, apparently, is so much more complicated than that of your average comet that predicting where it will end up milliseconds from now might be impossible with all the computing power in the world, let alone predicting where it will finally arrive at the bottom of the waterfall.
Atmospheric science bumps up against the same conundrum. In the 1950s and '60s, the scientific community, led by John von Neumann (of Manhattan Project fame), thought it was hot on the trail of predicting weather days, months, or even years ahead of time. The optimism stemmed from recent success tracking celestial bodies combined with rapidly advancing computing power. That is, until a guy named Edward Lorenz came along.
Lorenz had the impertinence to blow up everyone's plans for accurate weather simulation by, you guessed it, building a simulation. At the time, he was working with a computer with roughly the processing horsepower of a modern-day garage door opener, so his workaround was to simplify, simplify, simplify. He would give it a starting point and it would compute and spit out things like the temperature and wind direction for every so many minutes of simulated time. Lorenz World's weather never quite repeated, but it did rhyme, kind of like real weather!
The surprise came one day when Lorenz wanted to restart the simulation partway through a run, at, say, the beginning of day 3. To save time, he simply typed in the weather conditions for the start of day 3 that the computer had printed during the last run. Days 4, 5, 6, and so on should then have looked identical to what they had before, right? Wrong. What actually happened was that day 3 looked normal, day 4 looked a little off, and by day 5 things had gone completely haywire, so that the weather in Lorenz World diverged entirely from the previous run.
What was going on here? After a while spent scratching his head, Lorenz realized that when entering the conditions for the beginning of day 3, he had typed in only 3 decimal places, whereas the computer had been keeping track of 6! Those last three decimals, believe it or not, made all the difference. There's a phrase everybody has heard that sums it up: a butterfly flaps its wings in Brazil and eventually causes a storm in Chicago.
That discovery has put a stop to the hopes and dreams of long-term weather forecasters ever since. No matter how detailed your data is, it's not enough to get an accurate forecast more than a few days out. Suppose you had a temperature, pressure, and velocity measurement for each and every cubic foot of the atmosphere at precisely the same instant and fed that data into a perfect simulation (one with far more equations than the 12 Lorenz used). Even that wouldn't work, because those measurements are only accurate at those exact points in space; you'd be blind to the spaces in between. Move one inch to the right or left and the temperature changes by a tenth of a degree, or the wind velocity is 12.2 mph instead of 12 mph. What Lorenz showed is that these seemingly minuscule variations don't cancel each other out; they are amplified over time and lead to major differences in the world. Some called it the butterfly effect; Lorenz called it sensitive dependence on initial conditions.
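Lorenz's accident is easy to reproduce today. The sketch below uses the famous three-equation system he published in 1963 rather than his original twelve-equation weather model, with the standard textbook constants; the starting values are made up for illustration, not taken from Lorenz's actual runs. The punchline is the same, though: round the starting point to three decimals, and before long the two runs have nothing to do with each other.

```python
# A toy re-run of Lorenz's accident. This is the three-equation system
# Lorenz published in 1963 (his weather model had twelve equations);
# sigma, rho, and beta are the standard chaotic settings, and the
# starting values below are invented for illustration.

def lorenz_step(x, y, z, dt=0.005, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Advance the Lorenz system by one Euler step of size dt."""
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return x + dx * dt, y + dy * dt, z + dz * dt

def max_separation(s1, s2, steps):
    """March two starting points forward in lockstep and report the
    largest Euclidean distance between them along the way."""
    biggest = 0.0
    for _ in range(steps):
        s1 = lorenz_step(*s1)
        s2 = lorenz_step(*s2)
        dist = sum((a - b) ** 2 for a, b in zip(s1, s2)) ** 0.5
        biggest = max(biggest, dist)
    return biggest

# Full-precision start vs. the same start rounded to three decimals:
# the same kind of tiny truncation Lorenz typed in by hand.
full = (0.506127, 1.0, 1.0)
rounded = (round(full[0], 3), full[1], full[2])  # off by 0.000127

print(max_separation(full, rounded, 100))   # still tiny: the runs agree
print(max_separation(full, rounded, 6000))  # huge: the runs have diverged
```

Over the first handful of steps the two trajectories track each other almost perfectly; given enough simulated time, that 0.000127 discrepancy is stretched until the separation spans the whole attractor, which is exactly what Lorenz watched happen to his days 4 and 5.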