
Poll of local planners was perfectly legit


In Paul Clinton’s recent article on our survey of local coastal planning administrators (“Survey reveals mixed opinion of commission,” Saturday), the California Coastal Commission’s legislative director, Sarah Christie, tried to dismiss our findings as a “push poll” that she characterized as biased and inaccurate, with no credibility.

Nothing could be further from the truth.

A “push poll” isn’t really a poll at all, but rather a large-scale telemarketing campaign posing as a legitimate survey (in the industry, we refer to it as “mugging” -- Marketing Under the Guise of research). The intent is not to actually measure the opinion of survey respondents, but to change it. To describe our survey as a “push poll” is at best disingenuous, if not a downright misleading effort to divert attention away from our legitimate survey findings.

Ours was a highly targeted survey of an extremely small sample base -- not a large-scale telemarketing campaign. Had we wished to “push” a position on our respondents, we would have asked something like: “A large number of local coastal planning administrators around the rest of the state have told us about a number of negative experiences they’ve had with the California Coastal Commission. Have you had any negative experiences as well?” Instead, we asked “whether your professional opinion of the coastal commission generally tends toward the positive or negative?” If that’s supposed to be a “push poll” question, then we’ve failed miserably.

In the example just given, you’ll notice that we actually led with “positive” as the first option presented to respondents. In fact, we did that consistently throughout the entire survey. This type of consistency improves the respondent’s ability to get into the “rhythm” of the survey; had we wished to boost a negative response to a particular question, we would have occasionally tried to “trick” respondents by flipping the positive and negative responses once that rhythm had been established.

We did not do that and, in fact, we went to great lengths not to influence any of our respondents’ opinions through careful wording of our questions (to be as balanced as possible), extensive use of skip patterns (to limit specific probes of negative responses to only those who actually had a preexisting strong negative position) and careful placement of questions exploring specific concerns at the end of the survey (so as not to influence any of the answers already given). Had we wished to bias the results, we would have done just the opposite.
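For readers unfamiliar with the term, a “skip pattern” simply routes a respondent past questions that do not apply to them. The short Python sketch below is a hypothetical illustration of that kind of routing; the question wording, variable names and rating categories are the sketch’s own, not the actual questionnaire.

    def ask(question):
        # Stub standing in for a live interviewer; real data collection would go here.
        return input(question + " ")

    def interview(rating):
        # 'rating' is the respondent's overall opinion, e.g. "strongly negative",
        # "somewhat negative", "neutral", "somewhat positive", "strongly positive".
        answers = {"overall_opinion": rating}
        # Skip pattern: the negative probe is asked only of respondents who have
        # already volunteered a strong negative position on their own.
        if rating == "strongly negative":
            answers["negative_probe"] = ask("What specific experiences led to that opinion?")
        # The open-ended question asked of everyone, placed at the end so it
        # cannot color the answers already given.
        answers["suggestions"] = ask("How could the commission better serve its constituency?")
        return answers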

A large-scale public opinion poll of several hundred to 1,000 or more respondents may be able to generate a margin of error of 3% to 5%, but our survey was not designed for a large population. By limiting our sample to local coastal planning administrators in each of the 73 coastal commission jurisdictions, we fielded what we in the industry refer to as a “professional survey.”

Typically, the level of cooperation on such surveys is just 10% to 15%; on ours, we enjoyed an extraordinarily high cooperation rate of 67% from an active sample list of 55 names (apparently, contact information in the other 18 jurisdictions was either out of date or nonexistent). In fact, our cooperation rate would have been even higher had six of the respondents not indicated that they couldn’t participate because they’d never had any personal dealings with the commission. We only experienced four active refusals, while the remaining eight respondents did not return our calls.
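For readers who want to check the arithmetic, the sketch below simply restates the counts given in this letter; every figure comes from the paragraph above.

    # Arithmetic check of the cooperation rate (all counts from this letter).
    active_sample = 55      # names with usable contact information
    ineligible = 6          # had never dealt personally with the commission
    refusals = 4            # active refusals
    no_callback = 8         # never returned our calls

    completes = active_sample - ineligible - refusals - no_callback
    cooperation_rate = completes / active_sample
    print(completes, f"{cooperation_rate:.0%}")   # 37 completes, 67%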

More to the point: our margin of error of just 11% reflects a remarkable degree of precision given the very small active sample we were working with.
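As a point of reference, an 11% figure is consistent with a textbook margin-of-error calculation for roughly 37 completed interviews drawn from 73 jurisdictions. The sketch below assumes the conventional 95%-confidence formula with a finite-population correction; it is an illustration, not necessarily the exact method used for the survey.

    import math

    # Assumed method: textbook 95% margin of error for a proportion, with a
    # finite-population correction for a small fixed population.
    def margin_of_error(n, N, z=1.96, p=0.5):
        fpc = math.sqrt((N - n) / (N - 1))          # finite-population correction
        return z * math.sqrt(p * (1 - p) / n) * fpc

    # Roughly 37 completed interviews out of 73 coastal commission jurisdictions
    print(f"{margin_of_error(37, 73):.1%}")         # about 11.4%

    # For comparison, a 1,000-respondent public opinion poll
    print(f"{1.96 * math.sqrt(0.25 / 1000):.1%}")   # about 3.1%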

Given the widespread criticism the commission received from survey respondents, we can understand why it might want to dismiss our margin of error as “three times higher than most surveys,” but again, ours was not a public opinion survey. In point of fact, no matter how positive the other prospective respondents might have been, we still would have wound up with an astonishingly high number of negatives.

While this survey did probe some of the negative opinions stated by respondents, it did not probe any of the positives. This was not done to bias the survey, but to cut down on the expense of this project (which was paid for directly out of the pockets of our underwriters). As you may know, legitimate survey research is extraordinarily expensive, so all surveys limit the number of questions asked.

Our clients’ stated objective in fielding this survey was to identify ways in which the commission could better serve its constituency (a question which was in fact asked of all respondents). That’s precisely what our questionnaire was designed to do, as quickly and efficiently as possible. If we had wanted to bias the survey, we would have asked negative probes of all respondents in an effort to influence their train of thought. Instead, those negative probes never once came up in the survey -- unless the respondent had already taken a strong negative position.

Instead of addressing the legitimate concerns raised by an extraordinarily large number of the local coastal planners under its jurisdiction, the commission seems to be doing everything it can to dismiss them. If anything, this only serves to validate one of the most common complaints identified by our survey: According to the vast majority of local coastal planners, the commission is simply not responsive to local input.

SCOTT TALLAL

Malibu

* EDITOR’S NOTE: Scott Tallal is president of Insite Research in Malibu.
