Friday, December 04, 2009

Fake surveys?

Today was not supposed to be "Beat up on the Goldwater Institute Day", but here's the second in a row about their lax intellectual standards.

David Safier, of Blog for Arizona, outlines good reasons to believe that the survey of students' knowledge of their government, conducted by Strategic Vision LLC for the Goldwater Institute, was deeply flawed if not outright bogus. See:


It's impossible to say whether the Institute colluded with Strategic Vision or was merely the "mark" in a scam. Either way, donors to the Goldwater Institute--and supporters of the free-market cause--should be very concerned. We depend on the Institute to be, at minimum, a sound and credible source of information. (It would also be nice for it to be intellectually sound in its arguments!) If the surveys turn out to be fake, the loss isn't merely a talking point; it's the Institute's credibility.

Nate Silver has more. He uses the word "random" to mean "uniformly distributed"--sort of like a signal that he drags his knuckles when walking, since the two concepts differ in no subtle way--but once the right word is substituted in, his point is valid.

FWIW, a real test of randomness would be interesting as well. In a large table of survey responses, the results should be "spatially" uncorrelated: the response in row 198 should not depend on the responses in rows 197 or 199. Equivalently, if responses fall on an integer scale from 1 to 10, the probability that respondent N+1 reports a 7 given that respondent N reported a 2 should equal the overall probability of a 7 in the data set. If the behavior of science-lab students is any indication, data fabricators favor zigzag patterns, avoiding repetition and even avoiding lingering above or below the mean.
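A minimal sketch of one such check, on synthetic data (the function name and the alternating "fabricator" model are illustrative assumptions, not anything from the actual survey): count how often consecutive responses cross the sample mean. For serially independent responses this rate should sit near 0.5; a fabricator who zigzags around the midpoint will push it much higher.

```python
import random

def zigzag_rate(responses):
    """Fraction of consecutive response pairs that cross the sample mean.

    Serially independent data should score near 0.5; data faked by
    alternating above and below the mean scores close to 1.0.
    """
    mean = sum(responses) / len(responses)
    above = [r > mean for r in responses]
    crossings = sum(a != b for a, b in zip(above, above[1:]))
    return crossings / (len(responses) - 1)

random.seed(0)

# Honest data: independent uniform draws on a 1-10 scale.
honest = [random.randint(1, 10) for _ in range(5000)]

# Hypothetical fabricator: always jumps to the other side of the midpoint.
faked, prev = [], 5
for _ in range(5000):
    prev = random.randint(6, 10) if prev <= 5 else random.randint(1, 5)
    faked.append(prev)

print(zigzag_rate(honest))  # near 0.5
print(zigzag_rate(faked))   # near 1.0
```

The same idea generalizes: tabulate the lag-1 transition counts (what respondent N+1 answers given respondent N's answer) and compare them against the marginal distribution with a chi-square test.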
