Peffley
Political Behavior, PS 271-002
Short Written Assignment I:
Critiquing a News Article’s Presentation of a Political Opinion
Poll
Use the survey checklist below and the
discussion in class to evaluate and critique the reporting of results from a
public opinion poll you find in a newspaper, magazine or some other source
(excluding textbooks and periodicals like the Gallup Report). Your critique is due two weeks from the day
we cover the survey checklist in class. It will be due at the beginning of class
and should be about 3 double-spaced typed pages with standard one-inch
margins. The purpose of the assignment
is to demonstrate your knowledge of the pitfalls of public opinion polls as
well as to apply that knowledge in an astute way to critique the particular
poll you find. No additional outside
reading is expected. Try to find a poll
that is discussed extensively so that you can pick and choose what aspects of
the poll to critique. Be sure to staple
your poll article to your paper when you hand it in. Your grade will, of course, reflect not only the substance of your
arguments but also their clarity and organization.
Checklist
of Potential Problems with Surveys
A. Sampling
Procedures
1. Is the sample a haphazard (nonprobability) sample or some variant of a probability (random) sample?
Examples of haphazard samples include "person-on-the-street" interviews,
letters to the editor, call-in polls, "straw" polls, the Literary Digest poll, etc. The problems with such samples are bias and nonrepresentativeness.
Probability samples give each individual in
the population a known (and, in a simple random sample, equal) chance of being selected. They allow
generalization to the population within a calculable degree of sampling error.
2. What is the size of the sample? What is the "sampling error," or
the "accuracy level" of the survey and how does this affect the
interpretation of the survey findings?
Smaller samples (especially those with fewer than about
600 respondents) begin to yield intolerably high levels of sampling error (4% and higher), that is, inaccuracy
in generalizing from sample results to the population. For example, with a sample size of 600 and a
sampling error of ±4%, if we find that 50% of the respondents in the
sample prefer candidate X to candidate Y, this actually means that we are
relatively certain (there is a 95% probability) that between 46% and 54% of the
American public prefer candidate X to Y.
Also, sampling errors are larger for smaller subgroups of the survey (e.g., women
analyzed separately from men). Of course, if
accuracy isn't all that important, higher levels of sampling error may be
tolerable.
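The ±4% figure above follows from the standard margin-of-error formula for a simple random sample, z·sqrt(p(1-p)/n), with z ≈ 1.96 for 95% confidence and p = 0.5 as the worst case. A minimal sketch in Python (the function name is illustrative, not part of the assignment):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for a simple random sample of n respondents.

    p = 0.5 is the worst case (it maximizes p * (1 - p)), which is why
    pollsters usually report that conservative figure.
    """
    return z * math.sqrt(p * (1 - p) / n)

# Full sample of 600: about 0.04, i.e., the +/- 4% cited above.
print(round(margin_of_error(600), 3))   # 0.04

# A subgroup of 300 (e.g., just the women): error grows to about 5.7%.
print(round(margin_of_error(300), 3))   # 0.057
```

Halving the sample multiplies the error by roughly sqrt(2), which is why results for subgroups (women vs. men) are noticeably less precise than full-sample results.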
3. Was the interviewing done face-to-face or over
the telephone? If a telephone
interview, was random digit dialing used to select respondents?
4. What was the "response rate" of the
survey--i.e., the percentage of those selected who actually participated (as opposed to refusing or being unreachable)?
5. Note:
Sampling errors are just the "tip of the iceberg" of the problems
and errors afflicting public opinion polls, and reporting response rates, sampling
errors, and the like tends to give the reader a false sense of the accuracy of polling
results, as if such errors were the only ones we need to know about and as if
most of the error in a survey could be estimated with precision. If the poll is done by a reputable firm, the
sampling procedure is probably the least important aspect of the survey to know
about.
B. Question Wording
1. Is the question "loaded" or biased in
some way? Does it "lead"
respondents to answer in a particular manner?
Does it present different sides of an issue fairly?
2. Is the question susceptible to social
desirability biases so that some answers might appear more socially acceptable
or "politically correct?"
3. Is the question clear and unambiguous, simple
and straightforward? Or are several
issues at stake in an unnecessarily complicated question? And does the question require knowledge that
many people may not have, or use terms that some people might not
understand? If so, the question may be
"testing" familiarity and measuring "nonattitudes" rather
than soliciting real opinions.
4. Are responses affected by the context of the
question--i.e., previous questions, question order, and the like?
C. Interpreting Survey
Results
1. Is
there any reason to think that the polling organization or sponsor is
distorting the results of the poll for its own benefit?
2. Are
there alternative interpretations or explanations for the results, besides
those being reported or intimated?
Could differences in responses across groups, over time, etc. be due to
some other reason than those suggested in the article?
3. What
are the goals of the analyst? Mere
description, explanation, or prediction?
4. What
"model" of polling and public opinion do pollsters and reporters seem
to have in mind in describing and interpreting the results of a poll?
Public opinion as elections: Is the public opinion poll being interpreted
as a sort of "interim election"
or a "mandate from the people" that should be followed by the
nation's leaders (George Gallup's position)?
Are the results being used to predict political behavior or support
weeks and months from now? If so,
political attitudes on the issue should be salient, stable, and
"strong," so that the picture provided by the public opinion poll is
not a distortion of that future behavior.
Public opinion is complex and we need to
understand these complexities: Or
is the poll being used to understand
the sources and dynamics of public opinion, which is acknowledged to be complex
and ever-changing? If so, is it
acknowledged that much of public opinion is often subject to change, and is
sometimes amorphous, somewhat weak and passive, with only a minority mobilized
pro or con? Is there an attempt to
understand how public opinion changes in response to events and how those
changes produce trends in the "climate" of public opinion? Is there an attempt to document trends in
public opinion over time, to understand the origins of public opinion, or
to document and explain differences in public opinion across different social,
political, and information groups in the population?
5. Would
using other methods besides surveys help us to:
· delve
beneath the surface of superficial survey responses and understand how people arrive at their opinions in
the first place or what their responses mean?
Examples: depth interviews or
focus groups.
· disentangle
causes from effects in public opinion? Example: experiments.
· understand
the different meanings and subjective frames of reference that people use to
interpret terms and questions? Example:
Q-methodology.