Friday, November 23, 2012

teaching backward, calculus, part 1

Some (admittedly limited) experience teaching math and physics in high school has led me to believe that the standard approach to teaching calculus is misguided. The way we typically teach math, the solutions all come first, and then we teach students why those solutions exist. But problems always precede solutions in real life. Why not in the classroom?

Calculus was developed as a solution to a very specific problem: the motion of objects through space. Though its applications range far beyond that problem, that original problem remains by far the best way to approach calculus since everybody already has intuition and experience with moving objects.

In a better world, then, calculus would always be approached from a physical perspective, since everyone already (sort of, at least) understands how things move around. Here's how. [This is, incidentally, what I did on the first day of AP physics class, but as you'll see, it's not terribly complicated, and (hopefully) most anyone can follow it.]

Imagine you're standing around with a stopwatch on a road that conveniently has length measurements posted all along it, and a car drives past you. Your task is to measure how fast the car is going the instant it passes the mark that's right at your feet. How do you do it?

Well, speed is just a measure of how far the car goes in some unit of time, say, a second, so you can just start your watch as the front wheels pass the mark by your feet, and then mark off where the front wheels of the car are when the watch reads exactly one second. (We can ignore the fact that, perceptually, this might actually be a difficult task...imagine you have some helpers or something.) Let's say it's gone 10 meters, as marked on the road. Then its speed is just 10 meters / 1 second = 10 meters per second. Right?

Almost. What you've measured is the car's average speed over one whole second. But remember we want to find the speed of the car the instant it passes by your feet. Let's say it passed you by quite slowly but then managed to speed up incredibly quickly and travel 100 m by the time your stopwatch reached the one second mark. You wouldn't conclude that it was going 100 meters per second when it passed you.

So, you say, okay, let's not measure the distance it travels in a whole second after it passes me, as it can speed up, slow down, and do all sorts of crazy things in that time! Let's measure the distance it goes in just a tenth of a second!

This approach will have the same problem, but it's definitely getting us closer to what we want. The car can speed up or slow down in a tenth of a second just as it can in a whole second, but it can't speed up as much! What you'll end up measuring, though, is the average speed of the car over one tenth of a second. That's probably closer to the speed we're looking for.

Okay, so make it a hundredth of a second, or a thousandth! Well, you're getting the idea. No matter how small you make the time interval over which you're measuring, the car will always move some finite distance over that time interval. You can basically think of the speed as the distance you travel in some tiny time interval, divided by the time interval. If the car goes 10 millionths of a meter in 1 millionth of a second, then its speed is very well approximated by .00001 meters / .000001 seconds = 10 meters per second.

[Now, if you want to be more precise, the above definition of speed doesn't quite cut it (but it's close enough, so you can probably skip this paragraph). Really, you take all these different tiny time intervals, say a thousandth, a millionth, a billionth, and a trillionth of a second, and mark off where the car is after each time interval. You find the average speed associated with each time interval as we did above for one second and one tenth of a second. If they're the same, great! You're done. That's your speed. But even if they're different, you'll notice that as you make the time interval smaller and smaller and smaller, the speed you calculate will get closer and closer to some value. That's the speed.]
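
If it helps, here's a minimal sketch of that procedure in Python. The position function is made up for illustration (a car that passes you at 10 meters per second and then speeds up); it's not anything from the argument above. The point is just to watch the average speed settle down toward a single value as the time interval shrinks.

```python
# Made-up example: the car's position (in meters) t seconds after its
# front wheels pass the mark at your feet.
def position(t):
    return 10 * t + 20 * t ** 2

# Average speed over a time interval dt starting at t = 0:
# (change in position) / (change in time).
for dt in [1, 0.1, 0.01, 0.001, 0.000001]:
    avg_speed = (position(dt) - position(0)) / dt
    print(f"dt = {dt:>8} s   average speed = {avg_speed:.5f} m/s")

# The printed speeds approach 10 m/s as dt shrinks; that limiting value is
# the car's speed at the instant it passes you.
```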

Congratulations, you now more or less understand the idea behind the derivative—one of calculus's two essential ideas! In this case, what we were looking for was the speed. But here's how we found it: we took the change in position (how far the car moves) and divided by the time interval, meanwhile shrinking the time interval so that it was arbitrarily small. In math jargon, this looks like

\[
\text{speed} = \lim_{\Delta t \to 0} \frac{\Delta x}{\Delta t}
\]

where x stands for position, t stands for time, and the Greek letter delta means "change in." This, then, we re-define as the derivative of position with respect to time. We solved our problem, and we generalized our solution to a definition, which will be very useful later on!
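
To see the definition do some work, here's the same made-up position function from the sketch above, x(t) = 10t + 20t², run through the limit by hand:

\[
\frac{dx}{dt}
  = \lim_{\Delta t \to 0} \frac{x(t+\Delta t) - x(t)}{\Delta t}
  = \lim_{\Delta t \to 0} \frac{10\,\Delta t + 40\,t\,\Delta t + 20\,(\Delta t)^2}{\Delta t}
  = \lim_{\Delta t \to 0} \left( 10 + 40\,t + 20\,\Delta t \right)
  = 10 + 40\,t .
\]

At t = 0 this gives 10 meters per second, which is exactly the value the shrinking time intervals were converging to.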

In the next post, I'll discuss the standard approach to calculus a little more thoroughly.




Tuesday, November 6, 2012

No, the electoral college is not a good system

Oh come on, obviously you can't think about anything but the election today anyway! You might as well keep reading, even though you probably already agree. Do not be tempted to check nytimes or cnn, as the election results are still not in. And don't worry, FiveThirtyEight still has Obama above 90%.

Anyway, yesterday, courtesy of Sarah (hi Sarah!) I was pointed to this interesting argument in favor of the electoral college (update! see this one from today in Slate, especially point 1, which is basically the same as the previous link). At first it seemed persuasive. But then I realized the entire argument rests on a basic misunderstanding of sampling and statistics!

Weingarten says that a close election in 1 or 2 states is a manageable disaster, but a close election nationally would be an unmanageable disaster because every vote would be contested, not just every vote in FL, or every vote in OH, or whatever. This is an appealing point—it would be a nightmare if the campaigns were suing for votes all over the country—but it ignores the fact that the likelihood of a close and contestable election in the statistical sense (explained below) decreases sharply with the number of votes cast. A 0.5% margin of victory nationally is just as likely as, but much more robust than, a 0.5% margin of victory in any one state. A more precise formulation of this same idea: if a candidate wins by 0.5% in a single state, it's much more likely that his victory in that state is a result of random vote-counting errors than if the candidate wins the national popular vote by 0.5%.

How much more likely? It depends on the relative size of the state vs. the national population, but the general relationship is that the statistical robustness of a given margin of victory grows like the square root of the sample size. So if a state has 1/100th the voting population of the country as a whole (like, say, CT), then a given margin of victory is equivalent to a national margin of victory that's only 1/10th as large (since 10 is the square root of 100).
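
Here's a minimal sketch of that square-root relationship in Python. The numbers are round illustrative figures, not real vote counts, and the assumption baked in is the one above: counting noise grows like the square root of the number of ballots.

```python
import math

def equivalent_national_margin(margin_fraction, state_votes, national_votes):
    """Return the national margin (as a fraction of votes cast) that is as
    statistically fragile as margin_fraction in a state with state_votes,
    assuming counting errors scale like the square root of the vote total."""
    return margin_fraction / math.sqrt(national_votes / state_votes)

# A state with roughly 1/100th of the national electorate (CT-ish, round numbers):
print(equivalent_national_margin(0.005, 1_400_000, 140_000_000))
# -> 0.0005: a 0.5% state margin is about as fragile as a 0.05% national margin.
```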

The margin of victory in Florida in the 2000 election was about 500 votes out of over 5 million, or less than 0.01% of the total votes cast. In order for a national victory in the popular vote to be as narrow statistically, it would have to be a margin of less than 0.002%, or just 3000 votes out of about 140 million. Although one Presidential election has been this close (1880), it was way back when the population was much smaller, and that election was dubious for lots of other reasons. And no other popular vote result before or since has been anywhere near as questionable. In general, a close election in one or more decisive electoral states is much more likely than a similarly questionable national popular vote. Therefore a national popular vote is a much more reliable way to arrive at a clear, decisive winner.
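
As a rough check on those figures (the vote totals are the approximate numbers quoted above, not official counts), the same square-root rule gives:

```python
florida_votes = 5_000_000            # approximate ballots cast in FL, 2000
national_votes = 140_000_000         # approximate ballots cast nationally
florida_margin = 500 / florida_votes                      # about 0.01%

# An equally fragile national margin is smaller by sqrt(national/state):
shrink_factor = (national_votes / florida_votes) ** 0.5   # about 5.3
national_margin = florida_margin / shrink_factor
print(f"{national_margin:.4%}")                 # about 0.002%
print(round(national_margin * national_votes))  # roughly 2,600 votes
```

which lands in the same ballpark as the 3000-vote figure above.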

Weingarten's other argument is that the electoral college ultimately legitimizes the electoral process by amplifying the margin of victory, since the winner typically wins a much larger fraction of the 538 electoral votes than of the total votes cast. But this contention seems neither desirable nor true for any election that's close enough for it to really be an issue. Again, think back to the election of 2000. In that year, the election went to Bush by a mere 537 votes! Does that really legitimize the electoral process? No, it makes it seem incredibly arbitrary, because a national popular vote victory will simply never be that close!

Of course, as far as I know, no state has ever been decided that narrowly either, so the 537-vote margin was probably a one-time fluke as well. But the basic point remains: a margin narrow enough to be dubious in a decisive electoral state is more likely than a similarly dubious margin nationally, because the national vote sample is so much bigger.

And then there's all those other traditional reasons to dislike the electoral college. But I won't get into that.

Time to call some Ohioans!