Friday, December 19, 2014

Serial's fallacies and how we fell for them

At the beginning of Serial, Sarah Koenig and Ira Glass convinced me I was in for a wild ride. I was supposed to live Koenig's back-and-forth experience of doubt and conviction. Adnan is innocent—no, he's guilty! Every episode, a new telling clue, a new revelation!

Little did I know Koenig and her producers had no idea how to incorporate new evidence with their prior beliefs. They fell prey—repeatedly—to the prosecutor's fallacy, giving undue weight to new discoveries, trying to pull us along as they changed sides again and again. Unfortunately for them, the basic story never changed. At the beginning, there was no physical evidence against Adnan, and the whole case against him came down to Jay's credibility. In the end, there's no physical evidence against Adnan, and the whole case against him comes down to Jay's credibility.

Whenever Koenig was convinced of his innocence, something new—the Nisha call!—would change her mind. When she was almost convinced of his guilt, something different—the Aisha call!—would change it back. In the last episode, my frustration reached its peak when Dana—the "logical" one—summed up her understanding of the case with one telling instance of the prosecutor's fallacy. She argued that, despite the utter lack of evidence incriminating him, Adnan most likely killed Hae because the string of events that took place on the day of Hae's disappearance just makes more sense that way, and is so unlikely if Adnan is innocent. I mean, if there's only a 10% chance that all of those unlucky coincidences would happen in the case of Adnan's innocence (the Nisha call, lending Jay his car and phone, asking Hae for a ride), then that means there's a 90% chance he did it, right??

Wrong. Turn that same argument on someone who wins the lottery. It's immensely unlikely, after all, for any given person to win the lottery. But maybe she cheated. She's much more likely to win by cheating! There's only a 1 in 1,000,000 chance of winning without cheating, so clearly, cheating is the more "logical" explanation.

The problem is with treating new pieces of evidence—winning the lottery, potentially unlucky coincidences—in isolation, rather than weighting them by prior likelihood. Yes, one is more likely to win the lottery by cheating, and yes, the Nisha call makes more sense if Adnan is guilty, but the prior probabilities in this case—of cheating at the lottery, of Adnan committing murder—are low. And that's what Koenig was missing throughout Serial, at every turn, in weighing evidence for or against Adnan.
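To see just how much the prior matters, here's a quick sketch of the lottery version in Python. All the numbers are made up for illustration (the odds of winning, the rate of attempted cheating, the odds that cheating succeeds are placeholders, not real figures):

```python
# Hypothetical numbers, purely for illustration:
p_cheat = 1e-8          # prior: almost nobody attempts to cheat the lottery
p_win_if_cheat = 0.5    # cheating makes winning vastly more likely...
p_win_if_honest = 1e-6  # ...than playing honestly (1 in 1,000,000)

# Bayes' rule: P(cheat | won)
posterior = (p_win_if_cheat * p_cheat) / (
    p_win_if_cheat * p_cheat + p_win_if_honest * (1 - p_cheat)
)
print(posterior)  # roughly 0.005
```

Even though cheating makes a win 500,000 times more likely, the posterior probability of cheating is only about half a percent, because almost nobody cheats in the first place. The likelihood ratio cannot overcome a tiny prior.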

And of course, there's a foolproof way to incorporate new beliefs with old ones: Bayes' rule, the most misunderstood of mathematical truths.

In symbols: P(A|B) = P(B|A) × P(A) / P(B), or equivalently, P(A|B) = P(B|A) × P(A) / [P(B|A) × P(A) + P(B|¬A) × P(¬A)].

For instance, consider Dana's mental calculation at the end. She's looking at these coincidences more or less in isolation, as if they make up most of the case against Adnan (which, aside from Jay's questionable testimony, they kinda do). That is, she's considering the probability of Adnan's guilt (A) given the coincidences (B). Dana incorrectly equates this probability with P(B|¬A), the probability of B given not-A—that is, the probability of those coincidences given that Adnan is not guilty—which is of course low. Let's say 10% for the sake of argument. But the two quantities are just not equivalent (this is the prosecutor's fallacy, more or less).

Let's estimate the other quantities reasonably. Let's imagine P(B|A), the probability of all of these unlucky things happening in the case that Adnan is guilty, is high, say, 70%. (I wouldn't say 100%, even for the sake of argument, because why would Adnan call Nisha in the course of a murder?? All of these things happening together is unlikely even if Adnan is guilty.)

Then there's P(A), the prior estimation of probability that Adnan murdered Hae, before we consider the evidence at hand. This has to be low, because Dana is using these coincidences to weigh Adnan's story against Jay's. And aside from that story, what reason do we have to believe Adnan committed the murder? Not much. So let's say this is 20% (still an overestimation, as well, I would say).

This leaves P(¬A) = 1 − 20% = 80%, and our calculation is: P(A|B) = (.7 × .2) / [(.7 × .2) + (.1 × .8)] ≈ .64
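That arithmetic is just Bayes' rule applied once; here's a minimal Python sketch with the same illustrative numbers (again, these are guesses for the sake of argument, not real probabilities):

```python
# Illustrative numbers from the discussion above (not real probabilities):
p_guilty = 0.20            # P(A): prior probability of guilt
p_evid_if_guilty = 0.70    # P(B|A): chance of the coincidences if guilty
p_evid_if_innocent = 0.10  # P(B|not A): chance of the coincidences if innocent

# Bayes' rule: P(A|B) = P(B|A)P(A) / [P(B|A)P(A) + P(B|not A)P(not A)]
posterior = (p_evid_if_guilty * p_guilty) / (
    p_evid_if_guilty * p_guilty + p_evid_if_innocent * (1 - p_guilty)
)
print(round(posterior, 2))  # 0.64
```

The handy thing about writing it out this way is that you can wiggle each input and watch how the conclusion moves: bump the prior down to 10% and the posterior falls to about 44%.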

After weighing the evidence of these coincidences, our subjective probability that Adnan is guilty goes up, as we would expect. But not by as much as we might think. Definitely not to 90%, as Dana kinda-sorta-implies but without any numbers. And I would argue that the 10% we chose for P(B|¬A) was too low anyway, since we know Adnan had other potential reasons for lending his phone and car to Jay, and we know there are other possibilities for how the Nisha call could have happened. (And I don't know why they kept considering the cell tower evidence, since my understanding is that the tower just isn't that closely related to the location of the call.)

This same process seemed to take place over and over again in Serial. Something new would come up, and Sarah Koenig would overstate its significance. Adnan stole money from the mosque: maybe he has the personality to commit murder. Aisha called Adnan when he was high at Jen's house: Adnan had a reason to be acting scared of the police. These moments may have made for compelling drama early on, but ultimately they turned Serial into a frustrating experience, because after the first few episodes, nothing of any substance happened or changed. The case against Adnan was comically thin all along, and the producers of Serial failed to break the story down much further than that.

In the end, it all came down to your P(A), your estimation of the prior likelihood of Adnan committing the murder after considering the basic evidence, and so everyone's opinion pretty much lined up with whom they believed more, Adnan or Jay. And in that sense, the whole series was a massive exercise in confirmation bias.

Tuesday, December 9, 2014

Dick Cheney to Senate: Nobody Out-Evils Dick Cheney



In scathing comments regarding the recently released Senate Intelligence report on the CIA's torture of detainees, Dick Cheney lambasted the committee for ignoring his "singular role" in creating "the policies, culture and dubious legal cover" for the CIA's interrogation techniques.

While the report contends that CIA interrogators acted without authorization and outside the legal bounds of the programs approved by the Bush administration, Cheney insists that he was behind the torture of the terrorist suspects all along. "This idea that detainees were waterboarded, deprived of sleep or subjected to 'rectal hydration' without my knowledge is absurd, since everyone knows I practically invented rectal hydration," said Cheney.

Insisting that he would have "personally waterboarded every detainee in US custody, guilty or innocent," Mr. Cheney dismissed the claim that the CIA acted as a rogue agent in the aftermath of the September 11th attacks.

"Look, I think it's become pretty clear the most outrageously terrible ideas from the last Presidential administration were my idea. Nobody at the CIA could have come up with anything this ill-conceived and ineffective," said the former Vice President, adding, "Nobody out-evils Dick Cheney."


Friday, November 7, 2014

Newly Discovered Fossil—and Science—Could Prove Problem for Creationists



As the Washington Post reports, researchers have discovered a fossil that they claim—along with the entirety of the scientific literature of the past 150 years—could be a headache for Biblical Creationists.

The new fossil may, along with multiple lines of converging evidence from fields as diverse as paleontology, biology, astronomy, genetics, physics and geology, finally provide strong evidence against the hypothesis that the world was created in its present form as little as 10,000 years ago.

"Sure, we have plenty of evidence from carbon dating that the earth is billions of years old," said lead researcher, Ryosuke Motani. "The theory of continental drift makes it pretty clear that the earth has changed quite a bit over that time. Once you get into the biology, you've got evidence from molecular genetics, comparative anatomy of homologous traits, not to mention the distribution of various branches of the tree of life around the globe, to name just a few."

Added Motani, "But until today, we didn't really have a smoking gun."

Multiple scientific observers have expressed hope that, along with many cases of observed evolution in nature and in laboratories, and in addition to numerous already-discovered fossils that appear to be the intermediate stages between marine and land-dwelling animals, the newly discovered fossil will be the straw that breaks the camel's back in the Creationists' argument.

"We truly believe that Creationist fundamentalists the world over will finally yield on this," an assistant researcher said. "...especially after they consider that nothing in the last century of scientific investigation has produced any shred of support for their hypothesis."

A spokesman at the Creation Museum in Petersburg, Kentucky said they will take a "very close look" at the group's forthcoming research paper before deciding their next move.

Monday, October 20, 2014

Podcast! The Music Post is here

The Music Post is here! It's a podcast dedicated to discovering what makes great music great.



Listen to the first episode above, or right click here and select "save link as" to download directly.

You can also find episode two and all future episodes here, and subscribe on iTunes here.

Should be available for subscription on iTunes within a few days!

Okay, now why would I do such a thing?

In the past five years, I've played a lot of performances, and received a lot of feedback from audiences. A fair amount of this feedback comes in the form of praise for the feat of performing, things like "That looked so difficult" or "I can't believe you memorized that whole piece." Of course, I appreciate this type of praise, but as a musician, it's definitely not what I'm going for, because music, after all, isn't a sport. It's art. 

What I'm really shooting for in a performance is to have the audience fall in love with whatever I'm playing, to seek it out. I know I've really, truly connected with an audience when people say something more like "That piece was so great! Where can I listen to it again?" 

I've found that walking people through a piece—pointing out features, motives, beautiful moments—is even more effective than simply playing it: it's the surest way to get them to connect with the music and seek it out later. People appreciate these mini-explanations even more than my playing (or at least, that's what they claim!), telling me they've never been able to listen more closely or more attentively. 

So I figure why restrict that to the recital hall? It scales perfectly well to a podcast, where you can listen while you drive, cook, or get ready for bed. 

Hope you enjoy!





Monday, October 6, 2014

Art of Fugue, cont'd: which instruments?

Of the many mysteries surrounding the Art of Fugue, perhaps the most practical regards the instrumentation: Bach leaves no indication of what instrument(s) he had in mind in composing the piece. Perhaps it was just an oversight, or perhaps he thought it was so obvious, he didn't need to even write down the instruments. More likely, though, Bach was purposefully evasive and ambiguous, leaving the door open for numerous readings.

There's evidence to support multiple sides of the debate. The piece is written entirely in "open score," instead of on a grand staff as was most typical for keyboard music. On the other hand, various details suggest that Bach heard it on a keyboard (for one, the fact that it's possible to play on a keyboard at all). For many people, Bach’s failure to explicitly commit to an instrumentation—along with his failure/reluctance to indicate tempos/dynamics/phrasings in the vast majority of his compositions—is a weakness or an unfortunate omission, opening this work, and all the works of Bach, to “wrong” interpretations and gross manipulations.

But for me, it’s a strength, and maybe even a sign of Bach’s foresight and growing wisdom in his old age. Why commit his final masterpiece to any single instrument or ensemble when the world of music is always changing, adapting to new trends? If a piece is to be timeless, it should adapt too, and this is exactly why the Art of Fugue, and all of Bach’s music, has aged so magnificently well. It’s the difference between “No state shall discriminate on the basis of race” and “No state shall deny its citizens equal protection of the laws.” The former may have been essentially what the drafters of the 14th Amendment had in mind, but it would have been overly rigid, with no room for an evolving standard of equality.

Questions of history and constitutional law aside, though, the practical question remains: what instrumentation best suits this piece? Few works have been the subject of more variety of interpretation. You can hear the Art of Fugue played on piano, harpsichord, organ, harmonium, string quartet, brass quartet, and recorder quartet, guitar and trombone duo (seriously), among others. 


The advantage of the quartet versions is obvious: each player can give his full attention to a single voice, giving them each an independence that should—in theory at least—be impossible for a single keyboard player to match.

On top of that, compare moments like 0:38 in the video below to 0:34 in the video above. On the piano (though not the organ, to be discussed next time), you can't sustain notes. Once you play a note, it immediately starts to disappear. This is a huge disadvantage, in general, but even more so in the Art of Fugue, in places like this one. This moment brings back the main theme for the entire piece in a soaring soprano line, but on piano, it can be a little, well, disappointing.




Then again, what we gain in four instruments' individuality and attention, we can easily lose in unity of vision and overall coherence. It's mighty difficult for one person to keep track of four voices, but at least that person is in full control and able to present one single vision of the music in question.

And then there’s the inescapable fact (for me, at least) that this music just doesn’t quite sound right played on strings or brass instruments. The main difference is in the quality of articulation on these instruments vs. the piano, harpsichord, or organ. On the keyboard instruments, it’s impossible to play with a true legato, or smoothness between the notes: every note is marked by a clear and definite beginning, unlike on string instruments (where the bow can continue moving between notes) or winds and brass (where the air keeps flowing). We pianists work incredibly hard to overcome or compensate for this, but it’s this in-between place, the illusion of legato that’s not quite there, that makes our instrument perfect for contrapuntal writing. The quick-moving notes in the string parts have to be separated to be heard at all, and as such, they're too articulated, whereas on piano they can be played with a more singing quality.


There are lots of other considerations, of course, and it depends a lot on which number of the Art of Fugue we're talking about, as well as who exactly is playing. But perhaps not surprisingly, as a pianist, I think this music sounds best on a keyboard (and apologies to all my brass player friends, but the brass quintet just doesn't work, at least not the one I linked above). But while I’m firmly convinced that much of Bach’s keyboard music sounds best on the piano, specifically, I can’t say I’ve come to the same conclusion for the Art of Fugue. More on that next time.

For the record, this is my favorite non-keyboard version yet:

Thursday, September 25, 2014

The Hypnosis of Bach's Art of Fugue

A little over a year ago, I was driving along unsuspectingly when I switched on the car radio. Immediately I was taken, entranced by the incessant, interwoven lines of a four-part fugue played on the organ. I knew it could only be Bach, and moments later, when the original theme emerged in sustained and soaring whole notes in the soprano line, I knew it was from the Art of Fugue. Somehow, I had never heard any except the first, second, and fourth movements, or "Contrapunctuses," from one of Bach's last, culminating masterpieces. Sitting in city traffic is not usually the time or place we associate with profound experiences, but such was the power of Contrapunctus 9—and of Glenn Gould's playing—that I'll never forget those few minutes.

I went home and listened to more of the same recording, and was shocked at what I'd been missing. I know a lot of Bach's music, and have been lucky enough to perform a good chunk of it as well: the Well-Tempered Clavier, many of the dance suites, the Italian Concerto. When I played the Goldberg Variations, I kind of thought I had conquered the most difficult, the most complex, the pinnacle of them all. Little did I know the Art of Fugue is like the Goldberg Variations on steroids. In the words of Angela Hewitt, "it makes the Goldberg Variations sound like child's play."

So when a fellow pianist proposed last spring that, together, we learn the whole thing, I enthusiastically-and-somewhat-naively accepted the challenge. Over the next month, I'll attempt to describe the piece and my experience grappling with, learning, and playing it.

These pieces are different from anything Bach wrote. They are beautiful, yes, and they can entertain, certainly. But at their best, they overwhelm, they awe, and they mesmerize. 

Of course, if you don't know this music, what are you waiting for? Go listen!

The whole project consists of eleven fugues, plus four canons, plus six more "mirror" fugues, plus the final, colossal-yet-tragically-unfinished quadruple fugue, all labeled with the somewhat more generic Latin "Contrapunctus," for "counterpoint."

Why the obsession with fugues? Bach was the unquestionable master of counterpoint. No one has been (or will for all eternity be) able to combine music of multiple independent parts to better entice and challenge the human brain. That's why listening to Bach's music can sometimes be "difficult": there's usually no one thing going on (no "melody") that draws your ear to the exclusion of other parts of the music; there is, rather, a bunch of melodies all at once, constantly vying for your (and the poor keyboard player's) attention.

But the payoff is enormous. To the extent that our enjoyment of art comes from subtly recognizing patterns, contrapuntal music provides a whole extra layer or dimension on which to build those patterns. And fugues—pieces defined by one (or a few) recurring musical ideas, twisted and transformed to varying degrees of recognizability—provide the perfect medium for a true craftsman to demonstrate his mastery of counterpoint. And Bach was the greatest craftsman of all.

The first fugue is smooth, filled with a feeling of emptiness and desolation. It never departs from the key of d minor yet never stands still, continuously in flux, with no clear structural boundaries or moments of repose. It is in one sense the simplest piece of the whole set, yet it is mysteriously elusive. Whereas the other fugues will draw heavily on increasingly elaborate technical feats of fugal style—stretto, inversion, augmentation and diminution—as well as an elaborate chromaticism that was a hundred years ahead of its time, this fugue is propelled by nothing more than its simple subject, which serves as the inspiration for all that follows.

After that, the music grows in technical and harmonic complexity, as well as sheer density, culminating—at least temporarily—in the obsessive, relentless, and truly manic Contrapunctus 11.

The four canons and six mirror fugues are even more esoteric in style, but underscore Bach's incredible ability to mold his music to his thematic ideas. (The mirror fugues, by the way, are kind of exactly what they sound like: each has a right-side-up version, and an upside-down version, to be presented separately in their completion...pretty incredible!).

But none of them matches the dignity, solemnity and profundity of the final fugue (Glenn Gould's all-time favorite, by the way)....

Perhaps it is not the easiest piece to become acquainted with, and doesn't exactly make for easy listening. But be careful: the Art of Fugue is apt to consume you as it has done for me over these past couple of months. And the best is yet to come!



Tuesday, September 9, 2014

Alex Ross Missed the Point


Having just read Alex Ross's typically eloquent essay from last week's New Yorker (hoping my opinion is still relevant a week later), I can't help but wonder whether he's missed the point.

Ross talks about the joys of scanning his collection of CDs, of reading the old liner notes, of the personal connection he feels to each recording by virtue of its real, physical existence. He laments the economics of online streaming, where royalty payments are so pathetic, only the artists who have no need for them can hope to earn anything from them. (Aside: I was legitimately excited last month when my Spotify income surged to $1.95.) Certainly, he realizes that not everyone agrees: "If I were a music-obsessed teen-ager today, I would probably be revelling in this endless feast, and dismissing the complaints of curmudgeons."

But the problem of the cloud reaches deeper than Mr. Ross realizes. All nostalgia and ethics aside, listening to music from the cloud changes the perceptual experience of listening. Technology has shortened and divided our attention in many ways, and listening to music is no exception. And that's bad news for classical music in particular.

CDs, LPs, and cassette tapes: music in these forms is (or was) a commitment. You made a decision to buy a recording and spent your hard-earned money on it. Any time you wanted to listen, you had to decide that you would listen, what you would listen to, and physically go through the motions of starting some sort of listening device, while remaining in the same place for the duration of the recording. Listening was expensive, not just because it cost money, but in the cost of setting it up and parking yourself somewhere to hear it (or carrying around a massive portable player).

One of my very first recordings was a cassette of the Bach inventions. Cassettes were terrible, of course, but they were great because I couldn't skip ahead in the tracks: they forced me to listen all the way through.

At that point I probably only had three or four classical recordings to choose from (along with a handful of Madonna and Alanis Morissette tapes). Every time I bought or received a new CD, it was an event! I always listened religiously until I knew the new recording inside out.

Sometime in middle school, I jumped on board the technological bandwagon (at the time, this consisted of acquiring something called a "MiniDisc") that allowed me to splice different parts of different CD tracks together. I thought this was great! I could finally take my favorite moments of every piece of music and listen to them next to each other, basking in the continuous ecstasy of musical climax after climax. But of course, things didn't work out that way, because a musical moment's power comes with context. Much of the power of the end of Beethoven's 7th Symphony, or Mahler's 2nd, or any piece, comes from all the buildup before it, and trying to cheat my way to that kind of musical epiphany was shortsighted and naive. 

In college, I swung back in the other direction. I bought a record player, inherited a bunch of my parents' old records, and it was like elementary school all over again: I could listen for long stretches of time. Maybe listening to LPs was a waste of time at a university (sorry, parents!), but once I went through the tiresome routine of removing a record, cleaning it, and putting it on, I was certainly going to try my best to enjoy it and absorb everything it had to teach.

Listening today takes neither time nor money. MiniDiscs never rose to popularity, but what replaced them is more efficient and much easier to use (and much, much cheaper). On its face, this seems great! I doubt I ever would have bought six different recordings of the Marriage of Figaro, but I do have them all in my Spotify. But every time I want to listen to something new, the number of choices is mind-boggling. And humans don't always deal with choices intelligently. If I don't like something new right away, I don't usually follow through and listen to it again, the way I always would with a new CD, until I was absolutely sure I didn't like it.

Streaming makes it easy, too easy, to listen to music. We can listen any time, anywhere, for whatever tiny duration we choose, without taking our attention from whatever other task we might be engaged in.  If you're listening to Ke$ha and Taylor Swift, that's probably okay. Pop music can survive a five minute attention span. But "classical" or "art" music requires sustained engagement and attention in order to be fully appreciated and understood.

Cloud listening has the potential for good, but it doesn't lend itself to true listening. Unfortunately, that's what classical music depends on. The only solution: fight back against our habits!

Friday, November 23, 2012

teaching backward, calculus, part 1

Some (admittedly limited) experience teaching math and physics in high school has led me to believe that the standard approach to teaching calculus is misguided. The way we typically teach math, the solutions all come first, and then we teach students why those solutions exist. But problems always precede solutions in real life. Why not in the classroom?

Calculus was developed as a solution to a very specific problem: the motion of objects through space. Though its applications range far beyond that problem, that original problem remains by far the best way to approach calculus since everybody already has intuition and experience with moving objects.

In a better world, then, calculus would always be approached from a physical perspective, since everyone already (sort of, at least) understands how things move around. Here's how. [This is, incidentally, what I did on the first day of AP physics class, but as you'll see, it's not terribly complicated, and (hopefully) most anyone can follow it.]

Imagine you're standing around with a stopwatch on a road that conveniently has length measurements posted all along it, and a car drives past you. Your task is to measure how fast the car is going the instant it passes the mark that's right at your feet. How do you do it?

Well, speed is just a measure of how far the car goes in some unit of time, say, a second. So you can start your watch as the front wheels pass the mark by your feet, and then mark off where the front wheels are when the watch reads exactly one second. (We can ignore the fact that, perceptually, this might actually be a difficult task... imagine you have some helpers or something.) Let's say the car has gone 10 meters, as marked on the road. Then its speed is just 10 meters / 1 second = 10 meters per second. Right?

Almost. What you've measured is the car's average speed over one whole second. But remember we want to find the speed of the car the instant it passes by your feet. Let's say it passed you by quite slowly but then managed to speed up incredibly quickly and travel 100 m by the time your stopwatch reached the one second mark. You wouldn't conclude that it was going 100 meters per second when it passed you.

So, you say, okay, let's not measure the distance it travels in a whole second after it passes me, as it can speed up, slow down, and do all sorts of crazy things in that time! Let's measure the distance it goes in just a tenth of a second!

This approach will have the same problem, but it's definitely getting us closer to what we want. The car can speed up or slow down in a tenth of a second just as it can in a whole second, but it can't speed up as much! What you'll end up measuring, though, is the average speed of the car over one tenth of a second. That's probably closer to the speed we're looking for.

Okay, so make it a hundredth of a second, or a thousandth! Well, you're getting the idea. No matter how small you make the time interval over which you're measuring, the car will always move some finite distance over that interval. You can basically think of the speed as the distance traveled in some tiny time interval, divided by the time interval. If the car goes 10 millionths of a meter in 1 millionth of a second, then its speed is very well approximated by .00001 meters / .000001 seconds = 10 meters per second.

[Now, if you want to be more precise, the above definition of speed doesn't quite cut it (but it's close enough, so you can probably skip this paragraph). Really, you take all these different tiny time intervals, say a thousandth, a millionth, a billionth, and a trillionth of a second, and mark off where the car is after each time interval. You find the average speed associated with each time interval as we did above for one second and one tenth of a second. If they're the same, great! You're done. That's your speed. But even if they're different, you'll notice that as you make the time interval smaller and smaller and smaller, the speed you calculate will get closer and closer to some value. That's the speed.]
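Here's that shrinking-interval process sketched in Python, for a hypothetical car whose position is x(t) = 5t + 2t² meters (a made-up function; its true instantaneous speed at t = 0 works out to 5 m/s):

```python
# Hypothetical position function: x(t) = 5t + 2t^2 meters
# (the car passes us at 5 m/s and speeds up from there)
def x(t):
    return 5 * t + 2 * t ** 2

# Average speed over ever-smaller time intervals starting at t = 0
for dt in [1.0, 0.1, 0.01, 0.001]:
    avg_speed = (x(dt) - x(0)) / dt
    print(dt, avg_speed)
# The averages (7, then ~5.2, ~5.02, ~5.002) close in on the true
# instantaneous speed of 5 m/s as the interval shrinks.
```

Each measurement overshoots, because the car speeds up during the interval; but the overshoot itself shrinks toward zero along with the interval, which is exactly the limiting process described above.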

Congratulations, you now more or less understand the idea behind the derivative—one of calculus's two essential ideas! In this case, what we were looking for was the speed. But here's how we found it: we took the change in position (how far the car moves) and divided by the time interval, meanwhile shrinking the time interval so that it was arbitrarily small. In math jargon, this looks like

speed = lim (Δt → 0) Δx / Δt

where x stands for position, t stands for time, and the Greek letter delta means "change in." This, then, we re-define as the derivative of position with respect to time. We solved our problem, and we generalized our solution to a definition, which will be very useful later on!

In the next post, I'll discuss the standard approach to calculus a little more thoroughly.




Tuesday, November 6, 2012

No, the electoral college is not a good system

Oh come on, obviously you can't think about anything but the election today anyway! You might as well keep reading, even though you probably already agree. Do not be tempted to check nytimes or cnn, as the election results are still not in. And don't worry, Fivethirtyeight still has Obama above 90%.

Anyway, yesterday, courtesy of Sarah (hi Sarah!) I was pointed to this interesting argument in favor of the electoral college (update! see this one from today in Slate, especially point 1 which is basically the same as the previous link). At first it seemed persuasive. But then I realized the entire argument rests upon a basic flaw of sampling and statistics!

Weingarten says that a close election in 1 or 2 states is a manageable disaster, but a close election nationally would be an unmanageable disaster, because every vote would be contested, not just every vote in FL, or every vote in OH, or whatever. This is an appealing point—it would be a nightmare if the campaigns were suing for votes all over the country—but it ignores the fact that the likelihood of a close and contestable election in the statistical sense (explained below) decreases sharply with the number of votes cast. A 0.5% margin of victory nationally is just as likely as, but much more robust than, a 0.5% margin of victory in any one state. A more precise formulation of the same idea: if a candidate wins by 0.5% in a single state, it's much more likely that his victory in that state is a result of random vote-counting errors than if the candidate wins the national popular vote by 0.5%.

How much more likely? It depends on the relative size of the state vs. the national population, but the general relationship is that the statistical robustness of a given margin of victory grows like the square root of the sample size. So if a state has 1/100th the voting population of the country as a whole (like, say, CT), then a given margin of victory is equivalent to a national margin of victory that's only 1/10th as large (since 10 is the square root of 100).

The margin of victory in Florida in the 2000 election was about 500 votes out of over 5 million cast, or less than 0.01% of the total. For a national popular vote to be as narrow statistically, the margin would have to be less than 0.002%, or just 3000 votes out of about 140 million. Although one Presidential election has been this close (1880), that was way back when the population was much smaller, and that election was dubious for lots of other reasons. No other popular vote result before or since has been anywhere near as questionable. In general, a too-close-to-call election in one or more decisive electoral states is much more likely than a similarly questionable national popular vote. A national popular vote is therefore a much more reliable way to arrive at a clear, decisive winner.
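The square-root scaling is easy to check numerically. A quick sketch, using the approximate vote totals quoted above (the function name is mine, not a standard one):

```python
import math

def equivalent_national_margin(margin, voters, national_voters):
    """Scale a margin of victory (as a fraction of votes cast) to the
    national margin with the same statistical robustness, using the
    square-root-of-sample-size rule."""
    return margin * math.sqrt(voters / national_voters)

# A state with 1/100th of the national electorate: a 0.5% state margin
# is as fragile as a national margin only 1/10th the size.
print(equivalent_national_margin(0.005, 1.4e6, 1.4e8))  # about 0.0005, i.e. 0.05%

# Florida 2000: roughly 500 votes out of 5 million, vs. ~140 million nationally.
fl_margin = 500 / 5e6
national_equiv = equivalent_national_margin(fl_margin, 5e6, 1.4e8)
print(national_equiv * 1.4e8)  # a national margin of under 3000 votes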

Weingarten's other argument is that the electoral college ultimately legitimizes the electoral process by amplifying the margin of victory, since the winner typically wins a much larger fraction of the 538 electoral votes than of the total votes cast. But this contention seems neither desirable nor true for any election close enough for it to really be an issue. Again, think back to the election of 2000. That year, the whole election turned on Florida, which went to Bush by a mere 537 votes! Does that really legitimize the electoral process? No, it makes it seem incredibly arbitrary, because a national popular vote victory will simply never be that close!

Of course, as far as I know, no other state has ever been decided that narrowly, so Florida 2000 was probably a one-time fluke as well. But the basic point remains: a dubiously narrow margin in a decisive electoral state is more likely than a dubiously narrow margin nationally, because of the much bigger vote sample nationally.

And then there's all those other traditional reasons to dislike the electoral college. But I won't get into that.

Time to call some Ohioans!

Monday, October 29, 2012

Teaching Backwards

One of the most difficult things about teaching math and science is avoiding the temptation to teach everything backwards.

The backward approach roughly follows the same standard scheme, as exemplified in every textbook ever (and far too many classrooms): introduce an idea, term, or a definition, and then explain what it means, how it's relevant, and how to do "problems" or answer "questions" using it.

For example, most high school math textbooks come to a chapter titled "trigonometry" or something like that, give definitions of the trigonometric functions (sine, cosine, tangent, etc.), and then proceed to show how useful they are in solving problems involving triangles.

Later in the chapter, in the section titled "the sum and difference formulas," the book gives the sum and difference formulas for sine and cosine, before giving the proofs, and before even stating a relevant question that would require the sum and difference formulas.

Unfortunately, this approach completely eliminates any and all creative insight into the very real problems at hand, and leaves the student with no reason or desire to acquire that insight. It's as if the definitions of the trigonometric functions were handed down by God, followed by a set of problems to solve that require them.

In the real world, of course, everything happens in the opposite order: definitions don't lead to problems. Problems and questions lead to insights, which lead to generalizations, which eventually lead to definitions. Someone tries to find the distance between two points knowing their respective distances to a third point and one of the angles. After studying geometry (or even before), most students have an intuitive sense of how to approach this problem, and with some care and coaxing, you can get them to "discover" the law of sines. But if they learn trigonometry from a standard textbook, they'll never even get the chance, because there it is, stated before the problem is ever posed!

Most students can learn to use and manipulate trigonometric functions fine by the backwards method, but its flaws aren't merely aesthetic. If a student has no sense for the scope of a problem he or she is trying to solve, what reason does he or she have to remember the trigonometric functions beyond the next test, or the SATs? For most students, the sum and difference formulas are something to memorize and then forget, rather than a beautiful solution to a seemingly intractable problem.

On the first day of my 4th grade science class, I handed the students a sheet with just three questions: What is science?, Why do we care about/study science?, and How do we study science? Their answers were revealing and kind of depressing (also somewhat hilarious). How do we study science? Why, from science books of course! Why do we study science? So that if we need something to fall back on, we can be science teachers!


These students, like so many, have completely missed the point (so far at least!). Maybe some of them will miss the point either way. But it seems more likely that the things they learn will leave enduring memories if they have to confront the same problems that the people who actually discovered and developed them had to confront. At least that way, they get some sense of what math and science are really about!

Not-Romney for President

All the media back & forths of this Presidential campaign have somewhat obscured a few basic truths about the Republicans and their ideas for governing the country. Yes, we all know Romney has changed his mind on almost every issue, and nobody knows how he would govern. But the narrative and motivation of his candidacy still rest on a few paradoxical ideas about government and the economy.

The first is the "government can't fix your problems, so elect me to fix all your problems" fallacy. It's worth stepping back every once in a while to realize that this makes no sense. The Republicans have been hammering Obama for four years for directing his focus away from "job-creation," while simultaneously arguing that the government should do less to create jobs. Less government is certainly a coherent ideological position, but not a good way to create jobs in a recession.

The next contradiction: lower taxes will encourage more people to work, which will bring down unemployment. Huh? This wouldn't make sense even if the US economy were somehow short of people looking for work, and the problem is precisely the opposite. To the extent that lower taxes would encourage more people to look for work (which may not even be the case), they would obviously increase unemployment, since there already aren't enough job openings for the people seeking them.

And lastly, the deficit. People seem to forget that there's a very specific reason to fear government deficits and debt: higher interest rates. Government debt isn't some vague but inherently evil entity that will erode Your Children's Future if not tackled Right Away. Your children will be richer than you! They'll pay back your debt fine!

On the other hand, if people fear the government will become insolvent and therefore unable to back its debts, then they demand higher interest rates, and these higher rates make private investment a less attractive alternative by comparison. That would be bad. But with interest rates on treasury bonds lower than ever, there's just no reason to make an issue of short term deficits.

And don't even get me started on foreign or social policy.

Wednesday, June 27, 2012

soccer, chance, and attribution, continued


As I argued in an earlier post, football is an inherently probabilistic game. Here I’d like to expand a bit on what I mean, and look at some (preliminary) evidence.

Clearly, everyone realizes that chance and luck play some role in soccer (indeed, in any sport). I'd like to argue that they play a rather larger, and more specific, role than we might think. In particular, my hypothesis is that we can predict the distribution of goals in a soccer match, and over a number of matches, with a fixed-probability model. Imagine a soccer game is like a series of coin tosses of a very, very unfair coin. In each minute of a soccer match, we toss a coin that has about a 1/35 probability of landing on heads. How many times will it land on heads?

My hunch is that the number of heads you get in this experiment has the same distribution as the number of goals in any given soccer match, which (if true) means that in every minute of a soccer match there's a more or less fixed probability that one or the other team will score.
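That coin-toss model is easy to simulate. Here's a quick sketch using the hypothesized 1/35-per-minute figure (the function and constant names are mine):

```python
import random
from collections import Counter

P_GOAL = 1 / 35  # hypothesized chance that some goal is scored in any given minute

def simulate_match(minutes=90, p=P_GOAL):
    """Toss the very unfair coin once per minute; count the heads (goals)."""
    return sum(1 for _ in range(minutes) if random.random() < p)

random.seed(0)  # reproducible runs
matches = [simulate_match() for _ in range(100_000)]
mean_goals = sum(matches) / len(matches)
dist = Counter(matches)

print(f"mean goals per match: {mean_goals:.2f}")  # expectation is 90/35, about 2.57
for goals in range(6):
    print(f"{goals} goals: {dist[goals] / len(matches):.1%}")
```

With these numbers, the model produces around 2.6 goals per match on average, and 0-0 scorelines in roughly 7-8% of matches purely by chance—at least in the right ballpark for real top-level soccer, which is the comparison I'd want to make with actual data.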

This isn’t what we expect, or what conventional wisdom would predict. We like to think that in a 0-0 soccer game, the teams just didn’t attack very well, or defended very well, or both, and that in a 4-3 game, the opposite is true. Those teams really came out with attack-minded tactics and didn’t play defensively at all! And they did brilliantly, too! Right?

more thoughts on euro 2012

I have to admit I succumbed to the hype of Euro 2012. I was excited to watch soccer every day, to watch some of the best teams and players on earth. But the tournament has, overall, been a disappointment. Let's admit it: too many of the games that were, on paper, decent match-ups have been colossal bores. Starting with Germany-Portugal and continuing through to Portugal-Spain today, the games have been low-scoring (the lowest overall since Euro 96), defensive-minded, and lacking in general excitement. Even a game like Italy-England, which, to be fair, actually had quite a bit of attacking play and lots of chances (mostly for Italy), ended up with no goals at all. For me it's another sign that football, as a game, simply cries out for more goals. It's becoming a game of who can hold on to slim leads, rather than a game of who can attack the most and create the most chances. Of all the rule changes I've suggested for soccer over the years, I've never written about the most obvious, most consequential, and least likely to change in the near future: make the goals bigger. But that's for another time.

As for yesterday's semifinal, a few comments. First, Portugal did, in truth, defend brilliantly through the 90 minutes. Spain certainly were not at their best, and seemed to lack a lot of energy, but even a weak Spain team usually dominates possession and creates a lot more than they did. And people tend to look back (as I've mentioned before) when a team gets a clean sheet and claim that they defended well even if they just, in fact, got lucky, but in this case it was no meager stroke of luck. Portugal did what no team in the tournament had done thus far: they defended high up the pitch and denied Spain's defenders time to play the ball out. Teams have tried this against Barcelona, most notably Man. Utd in the 2009 champions league final, or Madrid in various Clasicos, but it usually doesn't work because if you apply high pressure to such a skilled team, you're vulnerable to quick attacks when the team breaks that pressure. But Spain were unable to do that, lacking, most notably, someone quick to run at defenders through midfield. They don't have a Messi, and until late in the game, they didn't even have a Pedro. Iniesta can usually take up this role, but he was unusually subdued.

Tuesday, June 19, 2012

An offside conundrum

If you've read anything I've written about soccer before, you probably already know that perhaps nothing annoys me more than when a team is falsely penalized for offside. So in a surprising turn of events, I'll be writing today about the opposite problem: the offside rule as currently interpreted allows certain plays that should be sanctioned for offside. As you might expect, two recent examples from Euro 2012 motivate this post: Bendtner's first goal for Denmark against Portugal, and Jesus Navas' goal for Spain against Croatia. The offside rule should be clarified so that these types of goals don't count.

The "Laws of the Game" state that a player is guilty of an offside violation if two conditions are met: 1) he is in an offside position when the ball is last touched by a player on his team, and 2) "He is involved in active play by [either] interfering with play, interfering with an opponent, or gaining an advantage by being in that position."

Now back to the two goals I linked above. In the first, Bendtner is in an offside position when the ball is initially crossed to Krohn-Deli. He is clearly not in an offside position when Krohn-Deli heads the ball back across to him. On the initial cross, he is neither interfering with play nor interfering with an opponent. But surely he gains an advantage by being in an offside position at that moment. If he weren't in an offside position, he would be closer to both Pepe and Bruno Alves, who could more easily track and mark him on the following play. Now, it turns out that both Bruno Alves and Pepe are giant ball-watchers, and in this case simply turned their heads and watched as the play unfolded. But even if they were decent defenders, they wouldn't have been able to get back and mark Bendtner to prevent the goal, precisely because Bendtner was already closer to the goal. Thus, according to the clear and obvious meaning of the words in the offside rule, Bendtner is guilty of offside.

But wait! Since the offside rule is so complicated, FIFA appends a whole section to the "Laws of the Game" clarifying its interpretation. If you care to look at pages 109-110 (that's right) of this PDF, you'll see that precisely this type of play is deemed "not an offside offence." In fact, the phrase "gaining an advantage by being in that position" is defined to encompass only two specific situations: being in an offside position when a teammate's effort on goal rebounds off the goalpost, or off the goalkeeper.


That's it! Of course, FIFA can define the rules as it wants. But is it really "fair" in some more objective sense to allow these types of plays to proceed and not be offside? Well, by now you probably know what my answer is! To see why, take the situation to its logical extreme. Imagine one striker on the attacking team is camped out in the opposing team's penalty area (cherry-picking, as we say). Surely the defending team shouldn't have to mind him while the ball is in the other half, since he's so far offside. That's the whole point of the offside rule: to essentially eliminate that player from relevance. But as the rule is currently interpreted, the defending team does have to mind the cherry-picker, because of the following possibility (illustrated in the awesome image below). Imagine a long, well-timed through pass is played toward the defending team's corner flag. One attacking player, who was already running toward the corner when the ball was played, chases it, tracked by a single defender. The rest of the defenders, though, weren't running back when the ball was played, because no one else was making a penetrating run for them to track. So a situation that would normally be completely under control (the player running toward the ball is under pressure even if he reaches it first) becomes a very worrying one: the attacker can make a simple pass across to his teammate, who is now waiting, onside, inside the penalty area, without a defender anywhere in sight! This is analogous to the goals linked above: according to the rules, the play is not offside, but clearly it should be, because the cherry-picking player in fact compels the defending team to defend him, in a manner completely contrary to the spirit of the offside rule (and, indeed, to the most reasonable interpretation of its language).