The final step in preparing the survey is developing the data collection instrument. The most common means of collecting data are the interview and the self- or group-administered questionnaire. In the past, the interview was the most popular data-collecting instrument, but the questionnaire has recently surpassed it in popularity, especially in the military. Due to this popularity, this chapter concentrates on the development of the questionnaire.
It is important to understand the advantages and disadvantages of the questionnaire as opposed to the personal interview. This knowledge will allow you to maximize the strengths of the questionnaire while minimizing its weaknesses. The advantages of administering a questionnaire instead of conducting an interview are:
The primary advantage is lower cost, in time as well as money. Not having to train interviewers eliminates a lengthy and expensive requirement of the interview method. The questionnaire can be administered simultaneously to large groups, whereas an interview requires each individual to be questioned separately. This allows the questions to reach a given number of respondents more efficiently than is possible with the interview. Finally, the cost of postage should be less than that of travel or telephone expenses.
Recent developments in the science of surveying have led to incorporating computers into the interview process, yielding what is commonly known as the computer-assisted telephone interview (or CATI) survey. Advances in using this survey technique have dramatically reshaped traditional views on the time-intensive nature and inherent unreliability of the interview technique. Yet, despite this resurgence in the viability of survey interviews, instruction in the development and use of the CATI technique is well beyond the scope of this handbook.
Many surveys are constrained by a limited budget. Since a typical questionnaire usually has a lower cost per respondent, it can reach more people within a given budget (or time) limit. This makes it possible to draw a larger and more representative sample.
The questionnaire provides a standardized data-gathering procedure. The effects of potential human errors (for example, altering the pattern of question asking, calling at inconvenient times, and biasing by "explaining") can be minimized by using a well-constructed questionnaire. The use of a questionnaire also eliminates any bias introduced by the feelings of the respondents towards the interviewer (or vice versa).
Although the point is debatable, most surveyors believe the respondent will answer a questionnaire more frankly than he would answer an interviewer, because of a greater feeling of anonymity. The respondent has no one to impress with his/her answers and need have no fear of anyone hearing them. To maximize this feeling of privacy, it is important to guard, and emphasize, the respondent's anonymity.
The primary disadvantages of the questionnaire are nonreturns, misinterpretation, and validity problems. Nonreturns are questionnaires or individual questions that are not answered by the people to whom they were sent. Oppenheim (1966) emphasizes that "the important point about these low response rates is not the reduced size of the sample, which could easily be overcome by sending out more questionnaires, but the possibility of bias. Nonresponse is not a random process; it has its own determinants, which vary from survey to survey" (p 34).
For example, you may be surveying to determine the attitude of a group about a new policy. Some of those opposed to it might be afraid to speak out, and they might comprise the majority of the nonreturns. This would introduce non-random (or systematic) bias into your survey results, especially if you found only a small number of the returns were in favor of the policy. Nonreturns cannot be overcome entirely. What we can do is try to minimize them. Techniques to accomplish this are covered later in this chapter.
Misinterpretation occurs when the respondent does not understand either the survey instructions or the survey questions. If respondents become confused, they will either give up on the survey (becoming a nonreturn) or answer the questions as they understand them, which is not necessarily the way you meant them. Some view the latter problem as more dangerous than mere nonresponse. The questionnaire instructions and questions must be able to stand on their own and must use terms that have commonly understood meanings throughout the population under study. If novel terms must be used, be sure to define them so all respondents understand your meaning.
The third disadvantage of using a questionnaire is inability to check on the validity of the answer. Did the person you wanted to survey give the questionnaire to a friend or complete it personally? Did the individual respond indiscriminately? Did the respondent deliberately choose answers to mislead the surveyor? Without observing the respondent's reactions (as would be the case with an interview) while completing the questionnaire, you have no way of knowing the true answers to these questions.
The secret in preparing a survey questionnaire is to take advantage of the strengths of questionnaires (lower costs, more representative samples, standardization, privacy) while minimizing the number of nonreturns, misinterpretations, and validity problems. This is not always as easy as it sounds. But an inventive surveyor can very often find legitimate ways of overcoming the disadvantages. We provide some suggestions below to help.
The key to minimizing the disadvantages of the survey questionnaire lies in the construction of the questionnaire itself. A poorly developed questionnaire contains the seeds of its own destruction. Each of the three portions of the questionnaire - the cover letter, the instructions, and the questions - must work together to have a positive impact on the success of the survey.
The cover letter should explain to the respondent the purpose of the survey and motivate him to reply truthfully and quickly. If possible, it should explain why the survey is important to him, how he was chosen to participate, and who is sponsoring the survey (the higher the level of sponsorship the better). The confidentiality of the results should also be strongly stressed. A well-written cover letter can help minimize both nonreturn and validity problems. An example is given in Appendix F. In support of the statement above regarding level of sponsorship, the signature block on the letter should be from as high a level as you can get commensurate with the topic being investigated. For instance, a survey about Air Force medical issues or policy should be signed by the Air Force Surgeon General or higher, a survey on religious issues by the Air Force Chief of Chaplains, and so forth. Another tip that seems to help improve response rates is to identify the survey as official. Even though the letter is on government stationery and is signed by a military official, it may help to mark the survey itself with an OFFICIAL stamp of some sort. In general, the more official the survey appears, the less likely it is to be disregarded.
The cover letter should be followed by a clear set of instructions explaining how to complete the survey and where to return it. If the respondents do not understand the mechanical procedures necessary to respond to the questions, their answers will be meaningless. The instructions substitute for your presence, so you must anticipate any questions or problems that may arise and attempt to prevent them from occurring. If you are using ADP scanner sheets, explain how you want the respondent to fill them in - what portions to use and what portions to leave blank. Remember anonymity! If you do not want respondents to provide their names or SSANs, say so explicitly in the instructions, and tell them to leave the NAME and SSAN portions of the scan sheets blank.
If you need respondents' SSAN and/or name included on the survey for tracking or analysis purposes, you will need to put a Privacy Act Statement somewhere on the survey (refer to Chapter 2). The instructions page is usually a good place for this statement: it puts the statement in a prominent place where all respondents will see it without cluttering the instrument itself or the cover letter.
The third and final part of the questionnaire is the set of questions. Since the questions are the means by which you are going to collect your data, they should be consistent with your survey plan. They should not be ambiguous or encourage feelings of frustration or anger that will lead to nonreturns or validity problems.
Before investigating the art of question writing, it will be useful to examine the various types of questions. Cantelou (1964; p 57) identifies four types of questions used in surveying. The classifier or background question is used to obtain demographic characteristics of the group being studied, such as age, sex, grade, level of assignment, and so forth. This information is used when you are categorizing your results by various subdivisions such as age or grade. Therefore, these questions should be consistent with your data analysis plan. The second and most common type of question is the multiple choice or closed-end question. It is used to determine feelings or opinions on certain issues by allowing the respondent to choose an answer from a list you have provided (see Chapter 3). The intensity question, a special form of the multiple-choice question, is used to measure the intensity of the respondent's feelings on a subject. These questions provide answers that cover a range of feelings.
The intensity question is covered in greater detail later in this chapter. The final type of question is the free response or open-end question. This type requires respondents to answer the question in their own words (see Chapter 3). It can be used to gather opinions or to measure the intensity of feelings. Multiple-choice questions are the most frequently used type of question in surveying today. It is prudent, therefore, to concentrate primarily on factors relating to their application.
The complex art of question writing has been investigated by many researchers. From their experiences, they offer valuable advice. Below are some helpful hints typical of those that appear most often in texts on question construction.
As mentioned previously, the intensity question is used to measure the strength of a respondent's feeling or attitude on a particular topic. Such questions allow you to obtain more quantitative information about the survey subject. Instead of a finding that 80 percent of the respondents favor a particular proposal or issue, you can obtain results that show 5 percent of them are strongly in favor whereas 75 percent are mildly in favor. These findings are similar, but the second type of response supplies more useful information.
The most common and easily used intensity (or scaled) question involves the use of the Likert-type answer scale. It allows the respondent to choose one of several (usually five) degrees of feeling about a statement, from strong approval to strong disapproval. The "questions" are in the form of statements that seem either definitely favorable or definitely unfavorable toward the matter under consideration. The answers are given scores (or weights) ranging from one to the number of available answers, with the highest weight going to the answer showing the most favorable attitude toward the subject of the survey. The following questions from the Minnesota Survey of Opinions, designed to measure the amount of "anti-US law" feeling, illustrate this procedure. The weights (shown by the numbers below the answers) are not shown on the actual questionnaire and, therefore, are not seen by the respondents. A person who feels that US laws are unjust would score lower than one who feels that they are just. The stronger the feeling, the higher (or lower) the score. The scoring is consistent with the attitude being measured. Whether "agree" or "disagree" gets the higher weight actually makes no difference, but for ease in interpreting the results of the questionnaire, the weighting scheme should remain consistent throughout the survey.
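To make the weighting scheme concrete, the following sketch (in Python, using hypothetical statements rather than the actual Minnesota Survey items) shows how answer weights of 1 through 5 might be assigned and how unfavorably worded statements are reverse-scored so that a higher total always indicates a more favorable attitude:

```python
# Hypothetical illustration of Likert-type scoring (not the actual
# Minnesota Survey items): each answer carries a weight of 1-5, and
# unfavorably worded statements are reverse-scored so a higher total
# always means a more favorable attitude toward the subject.

ANSWER_WEIGHTS = {
    "Strongly Disagree": 1,
    "Disagree": 2,
    "Undecided": 3,
    "Agree": 4,
    "Strongly Agree": 5,
}

def score_item(answer: str, favorable: bool, scale_max: int = 5) -> int:
    """Return the weight for one answer, reverse-scoring unfavorable statements."""
    weight = ANSWER_WEIGHTS[answer]
    return weight if favorable else (scale_max + 1) - weight

def score_respondent(answers, statement_favorability):
    """Sum the item weights for one respondent across all statements."""
    return sum(score_item(a, fav) for a, fav in zip(answers, statement_favorability))

# Two statements, the second worded unfavorably toward the subject.
answers = ["Agree", "Disagree"]
favorability = [True, False]
print(score_respondent(answers, favorability))  # 4 + 4 = 8
```

A respondent's total (or average) across all items then serves as the intensity score discussed below.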
One procedure for constructing Likert-type questions is as follows (adapted from Selltiz, et al., 1963; pp 367-368):
The intensity question, with its scaled answers and average scores, can supply quantitative information about your respondents' attitudes toward the subject of your survey. The interested reader is encouraged to learn and use other scales, such as the Thurstone, Guttman, and Semantic Differential scales, by studying some of the references in the bibliography.
A number of studies have been conducted over the years attempting to determine the limits of a person's ability to discriminate between words typically found on rating or intensity scales. The results of this research can be of considerable value when trying to decide on the right set of phrases to use in your rating or intensity scale. When selecting phrases for a 4-, 5-, 7-, or 9-point Likert scale, you should choose phrases that are far enough apart from one another to be easily discriminated, while, at the same time, keeping them close enough that you don't lose potential information. You should also try to gauge whether the phrases you are using are commonly understood so that different respondents will interpret the meaning of the phrases in the same way. An obvious example is shown with the following 3 phrases: Strongly Agree, Neutral, Strongly Disagree
These are easily discriminated, but the gap between each choice is very large. How would a person respond on this three-point scale if they only agreed with the question being asked? There is no middle ground between Strongly Agree and Neutral. The same is true for someone who wants to respond with a mere disagree. Your scales must have enough choices to allow respondents to express a reasonable range of attitudes on the topic in question, but there must not be so many choices that most respondents will be unable to consistently discriminate between them. Appendix H provides several tables containing lists of phrases commonly used in opinion surveys with associated "scale values" and standard deviations (or inter-quartile range values). Also provided is a short introduction describing how these lists can be used in selecting response alternatives for your opinion surveys. The information in that appendix is derived from research done for the U.S. Army Research Institute for the Behavioral and Social Sciences at Fort Hood, Texas.
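As a rough illustration of how such scale values might be used (the numbers below are invented placeholders, not the values tabulated in Appendix H), the following sketch checks whether a candidate set of phrases is spaced widely enough, relative to the spread in how people interpret each phrase, to be reliably discriminated:

```python
# Sketch of one way to screen candidate response phrases using published
# scale values and standard deviations. The numbers here are placeholder
# values for illustration only, not those tabulated in Appendix H.

phrases = [
    ("Strongly disagree",          1.2, 0.5),   # (phrase, scale value, std dev)
    ("Disagree",                   3.0, 0.6),
    ("Neither agree nor disagree", 5.0, 0.4),
    ("Agree",                      7.1, 0.6),
    ("Strongly agree",             8.9, 0.5),
]

def well_separated(items, min_gap_in_sds: float = 1.5) -> bool:
    """Check that adjacent phrases are far enough apart to be discriminated."""
    items = sorted(items, key=lambda p: p[1])
    for (_, v1, s1), (_, v2, s2) in zip(items, items[1:]):
        if (v2 - v1) < min_gap_in_sds * max(s1, s2):
            return False
    return True

print(well_separated(phrases))  # True for these placeholder values
```

Phrases that fail such a check are either too close together to be discriminated or so variable in interpretation that respondents will not use them consistently.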
Like any scientist or experimenter, surveyors must be aware of ways their surveys might become biased and of the available means for combatting bias. The main sources of bias in a questionnaire are:
Surveyors can expose themselves to possible nonrepresentative sample bias in two ways. The first is to actually choose a nonrepresentative sample. This bias can be eliminated by careful choice of the sample, as discussed earlier in Chapter 4. The second way is to have a large number of nonreturns.
The nonreturn bias (also called non-respondent bias) can affect both the sample survey and the complete survey. The bias stems from the fact that the returned questionnaires are not necessarily evenly distributed throughout the sample. The opinions or attitudes expressed by those who returned the survey may or may not represent the attitudes or opinions of those who did not, and it is impossible to determine which is true since the non-respondents remain an unknown quantity. Say, for example, a survey shows that 60 percent of those returning questionnaires favor a certain policy. If the survey had a 70 percent response rate (a fairly high rate as voluntary surveys go), then the favorable replies actually represent only 42 percent of those questioned (60 percent of the 70 percent who replied), which is less than 50 percent: a minority response in terms of the whole sample.
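The arithmetic in this example, and the range of results the nonrespondents could still produce, can be sketched as follows (a minimal illustration using only the figures from the paragraph above):

```python
# Worked version of the example above: with a 70% response rate and 60%
# of returns favorable, only 42% of the whole sample is known to favor
# the policy. The bounds show how far the nonrespondents could swing it.

response_rate = 0.70            # fraction of the sample that returned the survey
favorable_among_returns = 0.60  # fraction of returns favoring the policy

known_favorable = favorable_among_returns * response_rate   # 0.42
nonrespondents = 1.0 - response_rate                         # 0.30

lower_bound = known_favorable                   # if no nonrespondent favors the policy
upper_bound = known_favorable + nonrespondents  # if every nonrespondent favors it

print(f"known favorable: {known_favorable:.0%}")                     # 42%
print(f"possible range:  {lower_bound:.0%} to {upper_bound:.0%}")    # 42% to 72%
```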
Since little can be done to estimate the feelings of the nonreturnees, especially in a confidential survey, the only solution is to minimize the number of nonreturns. Miller (1970; p 81) and Selltiz et al. (1963; p 241) offer the following techniques to get people to reply to surveys. Some of these have already been mentioned in earlier sections of this chapter.
Proper use of these techniques can lower the nonreturn rate to acceptable levels. Keep in mind, though, that no matter what you do, there will always be non-respondents to your surveys. Make sure the effort and resources you spend are in proportion with the return you expect to get.
The second source of bias is misinterpretation of questions. We have seen that this can be limited by clear instructions, well-constructed questions, and judicious pilot testing of the survey. Biased questions can also be eliminated by constructing the questions properly and by using a pilot test. Finally, bias introduced by untruthful answers can be controlled by internal checks and a good motivational cover letter. Although bias cannot be eliminated totally, proper construction of the questionnaire, a well-chosen sample, follow-up letters, and inducements can help control it.
This section illustrates the many diverse, and sometimes powerful, factors that can influence survey findings when volunteers are used in a survey. The conclusions expressed here regarding volunteer samples are provided to make the surveyor aware of the often profound effects of non-respondent bias on survey data.
The exclusive use of volunteers in survey research represents another major source of bias to the surveyor -- especially the novice. Although it may not be immediately evident, it is nonetheless empirically true that volunteers, as a group, possess characteristics quite different from those who do not generally volunteer. Unless the surveyor takes these differences into consideration before choosing to use an exclusively volunteer sample, the bias introduced into the data may be so great that the surveyor can no longer confidently generalize the survey's findings to the population at large, which is usually the goal of the survey.
Fortunately, research findings exist which describe several unique characteristics of the volunteer subject. By using these characteristics appropriately, the surveyor may avoid inadvertent biases and pitfalls usually associated with using and interpreting results from volunteer samples. The following list provides 22 conclusions about unique characteristics of the volunteer. The list is subdivided into categories representing the level of confidence to be placed in the findings. Within each category, the conclusions are listed in order starting with those having the strongest evidence supporting them. (from Rosenthal and Rosnow, The Volunteer Subject, 1975; pp 195-196):
Conclusions Warranting Maximum Confidence

Borg and Gall (1979) have suggested how surveyors might use this listing to combat the effects of bias in survey research. For example, they suggest that:
The degree to which these characteristics of volunteer samples affect research results depends on the specific nature of the investigation. For example, a study of the level of intelligence of successful workers in different occupations would probably yield spuriously high results if volunteer subjects were studied, since volunteers tend to be more intelligent than nonvolunteers. On the other hand, in a study concerned with the cooperative behavior of adults in work-group situations, the tendency for volunteers to be more intelligent may have no effect on the results, but the tendency for volunteers to be more sociable could have a significant effect. It is apparent that the use of volunteers in research greatly complicates the interpretation of research results and their generalizability to the target population, which includes many individuals who would not volunteer. (pp 190-191)
The questionnaire is the means for collecting your survey data. It should be designed with your data collection plan in mind. Each of its three parts should take advantage of the strengths of questionnaires while minimizing their weaknesses. Each of the different kinds of questions is useful for eliciting different types of data, but each should be constructed carefully with well-developed construction guidelines in mind. Properly constructed questions and well-followed survey procedures will allow you to obtain the data needed to check your hypothesis and, at the same time, minimize the chance that one of the many types of bias will invalidate your survey results.