
CHAPTER 5 - The Questionnaire


The final step in preparing the survey is developing the data collection instrument. The most common means of collecting data are the interview and the self- or group-administered questionnaire. In the past, the interview has been the most popular data-collecting instrument. Recently, the questionnaire has surpassed the interview in popularity, especially in the military. Due to this popularity, this chapter concentrates on the development of the questionnaire.

The Questionnaire -- Pros and Cons

It is important to understand the advantages and disadvantages of the questionnaire as opposed to the personal interview. This knowledge will allow you to maximize the strengths of the questionnaire while minimizing its weaknesses. The advantages of administering a questionnaire instead of conducting an interview are:

The primary advantage is lower cost, in time as well as money. Not having to train interviewers eliminates a lengthy and expensive requirement of interviewing. The questionnaire can be administered simultaneously to large groups whereas an interview requires each individual to be questioned separately. This allows the questions to reach a given number of respondents more efficiently than is possible with the interview. Finally, the cost of postage should be less than that of travel or telephone expenses.

Recent developments in the science of surveying have led to incorporating computers into the interview process, yielding what is commonly known as computer-assisted telephone interview (or CATI) surveys. Advances in using this survey technique have dramatically reshaped our traditional views on the time-intensive nature and inherent unreliability of the interview technique. Yet, despite this resurgence in the viability of survey interviews, instruction in the development and use of the CATI technique is well beyond the scope of this handbook.

Many surveys are constrained by a limited budget. Since a typical questionnaire usually has a lower cost per respondent, it can reach more people within a given budget (or time) limit. This makes it possible to draw a larger and more representative sample.

The questionnaire provides a standardized data-gathering procedure. The effects of potential human errors (for example, altering the pattern of question asking, calling at inconvenient times, and biasing by "explaining") can be minimized by using a well-constructed questionnaire. The use of a questionnaire also eliminates any bias introduced by the feelings of the respondents towards the interviewer (or vice versa).

Although the point is debatable, most surveyors believe the respondent will answer a questionnaire more frankly than he would answer an interviewer, because of a greater feeling of anonymity. The respondent has no one to impress with his/her answers and need have no fear of anyone hearing them. To maximize this feeling of privacy, it is important to guard, and emphasize, the respondent's anonymity.

The primary disadvantages of the questionnaire are nonreturns, misinterpretation, and validity problems. Nonreturns are questionnaires or individual questions that are not answered by the people to whom they were sent. Oppenheim (1966) emphasizes that "the important point about these low response rates is not the reduced size of the sample, which could easily be overcome by sending out more questionnaires, but the possibility of bias. Nonresponse is not a random process; it has its own determinants, which vary from survey to survey" (p 34).

For example, you may be surveying to determine the attitude of a group about a new policy. Some of those opposed to it might be afraid to speak out, and they might comprise the majority of the nonreturns. This would introduce non-random (or systematic) bias into your survey results, especially if you found only a small number of the returns were in favor of the policy. Nonreturns cannot be overcome entirely. What we can do is try to minimize them. Techniques to accomplish this are covered later in this chapter.

Misinterpretation occurs when the respondent does not understand either the survey instructions or the survey questions. If respondents become confused, they will either give up on the survey (becoming a nonreturn) or answer questions as they understand them, which is not necessarily what you meant. Some view the latter problem as more dangerous than mere nonresponse. The questionnaire instructions and questions must be able to stand on their own and must use terms that have commonly understood meanings throughout the population under study. If novel terms must be used, be sure to define them so all respondents understand your meaning.

The third disadvantage of using a questionnaire is the inability to check on the validity of the answers. Did the person you wanted to survey give the questionnaire to a friend, or complete it personally? Did the individual respond indiscriminately? Did the respondent deliberately choose answers to mislead the surveyor? Without observing the respondents' reactions while they complete the questionnaire (as you could in an interview), you have no way of knowing the true answers to these questions.

The secret in preparing a survey questionnaire is to take advantage of the strengths of questionnaires (lower costs, more representative samples, standardization, privacy) while minimizing the number of nonreturns, misinterpretations, and validity problems. This is not always as easy as it sounds. But an inventive surveyor can very often find legitimate ways of overcoming the disadvantages. We provide some suggestions below to help.

The Contents

The key to minimizing the disadvantages of the survey questionnaire lies in the construction of the questionnaire itself. A poorly developed questionnaire contains the seeds of its own destruction. Each of the three portions of the questionnaire - the cover letter, the instructions, and the questions - must work together to have a positive impact on the success of the survey.

The cover letter should explain to the respondent the purpose of the survey and motivate him to reply truthfully and quickly. If possible, it should explain why the survey is important to him, how he was chosen to participate, and who is sponsoring the survey (the higher the level of sponsorship the better). The confidentiality of the results should also be strongly stressed. A well-written cover letter can help minimize both nonreturn and validity problems. An example is given in Appendix F.

In support of the statement above regarding level of sponsorship, the signature block on the letter should be at as high a level as you can get, commensurate with the topic being investigated. For instance, a survey about Air Force medical issues or policy should be signed by the Air Force Surgeon General or higher, a survey on religious issues by the Air Force Chief of Chaplains, and so on. Another tip that seems to help improve the response rate is to identify the survey as official. Even though the letter is on government stationery and is signed by a military official, it may help to mark the survey itself with an OFFICIAL stamp of some sort. In general, the more official the survey appears, the less likely it is to be disregarded.

The cover letter should be followed by a clear set of instructions explaining how to complete the survey and where to return it. If the respondents do not understand the mechanical procedures necessary to respond to the questions, their answers will be meaningless. The instructions substitute for your presence, so you must anticipate any questions or problems that may arise and attempt to prevent them from occurring. If you are using ADP scanner sheets, explain how you want the respondent to fill it in - what portions to use and what portions to leave blank. Remember anonymity! If you do not want respondents to provide their names or SSANs, say so explicitly in the instructions, and tell them to leave the NAME and SSAN portions of the scan sheets blank.

If you need respondents' SSAN and/or name included on the survey for tracking or analysis purposes, you will need to put a Privacy Act Statement somewhere on the survey (refer to Chapter 2). The instructions page is usually a good place for this statement. It places it in a prominent place where all respondents will see it, but does not clutter the instrument itself or the cover letter.

The third and final part of the questionnaire is the set of questions. Since the questions are the means by which you are going to collect your data, they should be consistent with your survey plan. They should not be ambiguous or encourage feelings of frustration or anger that will lead to nonreturns or validity problems.

Types of Questions

Before investigating the art of question writing, it will be useful to examine the various types of questions. Cantelou (1964; p 57) identifies four types of questions used in surveying. The classifier or background question is used to obtain demographic characteristics of the group being studied, such as age, sex, grade, level of assignment, and so forth. This information is used when you are categorizing your results by various subdivisions such as age or grade. Therefore, these questions should be consistent with your data analysis plan. The second and most common type of question is the multiple choice or closed-end question. It is used to determine feelings or opinions on certain issues by allowing the respondent to choose an answer from a list you have provided (see Chapter 3). The intensity question, a special form of the multiple-choice question, is used to measure the intensity of the respondent's feelings on a subject. These questions provide answers that cover a range of feelings.

The intensity question is covered in greater detail later in this chapter. The final type of question is the free response or open-end question. This type requires respondents to answer the question in their own words (see Chapter 3). It can be used to gather opinions or to measure the intensity of feelings. Multiple-choice questions are the most frequently used types of questions in surveying today. It is prudent, therefore, to concentrate primarily on factors relating to their application.

Questionnaire Construction

The complex art of question writing has been investigated by many researchers. From their experiences, they offer valuable advice. Below are some helpful hints typical of those that appear most often in texts on question construction.

Keep the language simple.
Analyze your audience and write on their level. Parten (1950; p 201) suggests that writing at the sixth-grade level may be appropriate. Avoid the use of technical terms or jargon. An appropriate corollary to Murphy's Law in this case would be: If someone can misunderstand something, they will.

Keep the questions short.
Long questions tend to become ambiguous and confusing. A respondent, in trying to comprehend a long question, may leave out a clause and thus change the meaning of the question.

Keep the number of questions to a minimum.
There is no commonly agreed-upon maximum number of questions that should be asked, but research suggests that higher return rates correlate strongly with shorter surveys. Ask only questions that will contribute to your survey. Apply the "So what?" and "Who cares?" tests to each question. "Nice-to-know" questions only add to the size of the questionnaire. Having said this, keep in mind that you should not leave out questions that would yield necessary data simply because doing so would shorten your survey. If the information is necessary, ask the question. With the availability of desktop publishing (DTP) software, it is often possible to give the perception of a smaller survey (using smaller typefaces, tighter spacing, etc.) even though many questions are asked. A three-page typewritten survey can easily be reduced to a single page using DTP techniques.

Limit each question to one idea or concept.
A question consisting of more than one idea may confuse the respondent and lead to a meaningless answer. Consider this question: "Are you in favor of raising pay and lowering benefits?" What would a yes (or no) answer mean?

Do not ask leading questions.
These questions are worded in a manner that suggests an answer. Some respondents may give the answer you are looking for whether or not they think it is right. Such questions can alienate the respondent and may open your questionnaire to criticism. A properly worded question gives no clue as to which answer you may believe to be the correct one.

Use subjective terms such as good, fair, and bad sparingly, if at all.
These terms mean different things to different people. One person's "fair" may be another person's "bad." How much is "often" and how little is "seldom?"

Allow for all possible answers.
Respondents who cannot find their answer among your list will be forced to give an invalid reply or, possibly, become frustrated and refuse to complete the survey. Wording the question to reduce the number of possible answers is the first step. Avoid dichotomous (two-answer) questions (except for obvious demographic questions such as gender). If you cannot avoid them, add a third option, such as no opinion, don't know, or other. These may not get the answers you need, but they will minimize the number of invalid responses. A great number of "don't know" answers to a question in a fact-finding survey can be a useful piece of information. But if a majority of respondents choose such options, you may have a poor question, and you should be cautious when analyzing the results.

Avoid emotional or morally charged questions.
The respondent may feel your survey is getting a bit too personal!

Understand the should-would question.
Selltiz, et al. (1963, p 251) note that respondents answer "should" questions from a social or moral point of view while answering "would" questions in terms of personal preference.

Formulate your questions and answers to obtain exact information and to minimize confusion.
For example, does "How old are you?" mean on your last or your nearest birthday? Does "What is your (military) grade?" mean permanent or temporary grade? As of what date? By including instructions like "Answer all questions as of (a certain date)", you can alleviate many such conflicts.

Include a few questions that can serve as checks on the accuracy and consistency of the answers as a whole.
Have some questions that are worded differently, but are soliciting the same information, in different parts of the questionnaire. These questions should be designed to identify the respondents who are just marking answers randomly or who are trying to game the survey (giving answers they think you want to hear). If you find a respondent who answers these questions differently, you have reason to doubt the validity of their entire set of responses. For this reason, you may decide to exclude their response sheet(s) from the analysis.
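In an automated analysis, such checks can be applied mechanically. The sketch below flags respondents whose paired check questions disagree by more than a tolerance; the question IDs, scale, and tolerance are invented for illustration and assume both items in a pair are worded in the same direction (reverse-worded items would need to be flipped first).

```python
# Pairs of question IDs assumed to ask for the same information in
# different words (hypothetical names, not from the handbook).
CHECK_PAIRS = [("q05", "q21"), ("q09", "q30")]

def flag_inconsistent(responses, max_gap=1):
    """Return IDs of respondents whose check-pair answers differ by more
    than max_gap points on the answer scale."""
    flagged = []
    for rid, answers in responses.items():
        for a, b in CHECK_PAIRS:
            if abs(answers[a] - answers[b]) > max_gap:
                flagged.append(rid)
                break
    return flagged

sheets = {
    "r1": {"q05": 4, "q21": 4, "q09": 2, "q30": 3},  # consistent answers
    "r2": {"q05": 5, "q21": 1, "q09": 3, "q30": 3},  # likely random marking
}
print(flag_inconsistent(sheets))  # → ['r2']
```

Respondent r2 answers the same underlying question at opposite ends of the scale, so that response sheet would be a candidate for exclusion from the analysis.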

Organize the pattern of the questions:


Pretest (pilot test) the questionnaire.
This is the most important step in preparing your questionnaire. The purpose of the pretest is to see just how well your cover letter motivates your respondents and how clear your instructions, questions, and answers are. You should choose a small group of people (from three to ten should be sufficient) you feel are representative of the group you plan to survey. After explaining the purpose of the pretest, let them read and answer the questions without interruption. When they are through, ask them to critique the cover letter, instructions, and each of the questions and answers. Don't be satisfied with learning only what confused or alienated them. Question them to make sure that what they thought something meant was really what you intended it to mean. Use the above 12 hints as a checklist, and go through them with your pilot test group to get their reactions on how well the questionnaire satisfies these points. Finally, redo any parts of the questionnaire that are weak.

Have your questionnaire neatly produced on quality paper.
A professional looking product will increase your return rate. As mentioned earlier, desk top publishing software can be used to add a very professional touch to your questionnaire and improve the likelihood of its being completed. But always remember the adage "You can't make a silk purse out of a sow's ear." A poorly designed survey that contains poorly written questions will yield useless data regardless of how "pretty" it looks.

Finally, make your survey interesting!

Intensity Questions and the Likert Scale

As mentioned previously, the intensity question is used to measure the strength of a respondent's feeling or attitude on a particular topic. Such questions allow you to obtain more quantitative information about the survey subject. Instead of a finding that 80 percent of the respondents favor a particular proposal or issue, you can obtain results that show 5 percent of them are strongly in favor whereas 75 percent are mildly in favor. These findings are similar, but the second type of response supplies more useful information.

The most common and easily used intensity (or scaled) question involves the use of the Likert-type answer scale. It allows the respondent to choose one of several (usually five) degrees of feeling about a statement from strong approval to strong disapproval. The "questions" are in the form of statements that seem either definitely favorable or definitely unfavorable toward the matter under consideration. The answers are given scores (or weights) ranging from one to the number of available answers, with the highest weight going to the answer showing the most favorable attitude toward the subject of the survey. The following questions from the Minnesota Survey of Opinions designed to measure the amount of "anti-US law" feelings illustrate this procedure:
1. Almost anything can be fixed up in the courts if you have enough money.
Strongly Disagree (5) Disagree (4) Undecided (3) Agree (2) Strongly Agree (1)

2. On the whole, judges are honest.
Strongly Disagree (1) Disagree (2) Undecided (3) Agree (4) Strongly Agree (5)

The weights (shown by the numbers below the answers) are not shown on the actual questionnaire and, therefore, are not seen by the respondents. A person who feels that US laws are unjust would score lower than one who feels that they are just. The stronger the feeling, the higher (or lower) the score. The scoring is consistent with the attitude being measured. Whether "agree" or "disagree" gets the higher weight actually makes no difference. But for ease in interpreting the results of the questionnaire, the weighting scheme should remain consistent throughout the survey.

One procedure for constructing Likert-type questions is as follows (adapted from Selltiz, et al., 1963; pp 367-368):

  1. Collect a large number of definitive statements relevant to the attitude being investigated.
  2. Conduct and score a pretest of your survey. The most favorable response to the attitude gets the highest score for each question. The respondent's total score is the sum of the scores on all questions.
  3. If you are investigating more than one attitude on your survey, intermix the questions for each attitude. In this manner, the respondent will be less able to guess what you are doing and thus more likely to answer honestly.
  4. Randomly select some questions and flip-flop the Strongly Agree -- Strongly Disagree scale to prevent the respondent from getting into a pattern of answering (often called a response set).

The intensity question, with its scaled answers and average scores, can supply quantitative information about your respondents' attitudes toward the subject of your survey. The interested reader is encouraged to learn and use other scales, such as the Thurstone, Guttman, and Semantic Differential scales, by studying some of the references in the bibliography.

A number of studies have been conducted over the years attempting to determine the limits of a person's ability to discriminate between the words typically found on rating or intensity scales. The results of this research can be of considerable value when trying to decide on the right set of phrases to use in your rating or intensity scale. When selecting phrases for a 4-, 5-, 7-, or 9-point Likert scale, you should choose phrases that are far enough apart from one another to be easily discriminated, while, at the same time, keeping them close enough that you don't lose potential information. You should also try to gauge whether the phrases you are using are commonly understood, so that different respondents will interpret them in the same way. An obvious example is shown by the following three phrases: Strongly Agree, Neutral, Strongly Disagree

These are easily discriminated, but the gap between each choice is very large. How would a person respond on this three-point scale if they only agreed with the question being asked? There is no middle ground between Strongly Agree and Neutral. The same is true for someone who wants to respond with a mere "disagree." Your scales must have enough choices to allow respondents to express a reasonable range of attitudes on the topic in question, but there must not be so many choices that most respondents will be unable to consistently discriminate between them. Appendix H provides several tables containing lists of phrases commonly used in opinion surveys with associated "scale values" and standard deviations (or inter-quartile range values). Also provided is a short introduction describing how these lists can be used in selecting response alternatives for your opinion surveys. The information in that appendix is derived from research done for the U.S. Army Research Institute for the Behavioral and Social Sciences at Fort Hood, Texas.

Bias and How to Combat It

Like any scientist or experimenter, surveyors must be aware of ways their surveys might become biased and of the available means for combatting bias. The main sources of bias in a questionnaire are:

Surveyors can expose themselves to possible nonrepresentative sample bias in two ways. The first is to actually choose a nonrepresentative sample. This bias can be eliminated by careful choice of the sample, as discussed earlier in Chapter 4. The second way is to have a large number of nonreturns.

The nonreturn bias (also called non-respondent bias) can affect both the sample survey and the complete survey. The bias stems from the fact that the returned questionnaires are not necessarily evenly distributed throughout the sample. The opinions or attitudes expressed by those who returned the survey may or may not represent the attitudes or opinions of those who did not return the survey. It is impossible to determine which is true, since the non-respondents remain an unknown quantity. Say, for example, a survey shows that 60 percent of those returning questionnaires favor a certain policy. If the survey had a 70 percent response rate (a fairly high rate as voluntary surveys go), then the favorable replies are actually only 42 percent of those questioned (60 percent of the 70 percent who replied), which is less than 50 percent: a minority response in terms of the whole sample.
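The arithmetic in this example can be expressed directly: the share of the whole sample known to answer favorably is the favorable fraction of the returns multiplied by the response rate. The numbers below are the chapter's own example.

```python
def favorable_of_sample(favorable_of_returns, response_rate):
    """Fraction of the entire sample known to favor the policy:
    the favorable share of returns scaled by the response rate."""
    return favorable_of_returns * response_rate

# 60 percent favorable among the 70 percent who replied:
share = favorable_of_sample(0.60, 0.70)
print(f"{share:.0%}")  # → 42%
```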

Since little can be done to estimate the feelings of the nonreturnees, especially in a confidential survey, the only solution is to minimize the number of nonreturns. Miller (1970; p 81) and Selltiz et al. (1963; p 241) offer the following techniques to get people to reply to surveys. Some of these have already been mentioned in earlier sections of this chapter.

Use follow-up letters.
These letters are sent to the non-respondents after a period of a couple of weeks, asking them again to fill out and return the questionnaire. The content of this letter is similar to that of the cover letter. If you are conducting a volunteer survey, you should anticipate the need for following up with non-respondents and code the survey in some unobtrusive way to tell who has and who has not yet responded. If you did not do that, but still need to get in touch with non-respondents, consider placing ads in local papers or base bulletins, announcements at commander's call, or notices posted in public places. If at all possible, provide a fresh copy of the survey with the follow-up letter. This often increases the return rate over simply sending a letter alone.

Use high-level sponsorship.
This hint was mentioned in an earlier section. People tend to reply to surveys sponsored by organizations they know or respect. If you are running a military survey, obtain the highest ranking sponsorship you can. Effort spent in doing this will result in a higher percentage of returns. If possible, use the letterhead of the sponsor on your cover letter.

Make your questionnaire attractive, simple to fill out, and easy to read.
A professional product usually gets professional results.

Keep the questionnaire as short as possible.
You are asking for a person's time, so make your request as small as possible.

Use your cover letter to motivate the person to return the questionnaire.
One form of motivation is to have the letter signed by an individual known to be respected by the target audience for your questionnaire. In addition, make sure the individual will be perceived by the audience as having a vested interest in the information needed.

Use inducements to encourage a reply.
These can range from a small amount of money attached to the survey to an enclosed stamped envelope. A promise to report the results to each respondent can be helpful. If you do promise a report, be sure to send it.

Proper use of these techniques can lower the nonreturn rate to acceptable levels. Keep in mind, though, that no matter what you do, there will always be non-respondents to your surveys. Make sure the effort and resources you spend are in proportion with the return you expect to get.

The second source of bias is misinterpretation of questions. We have seen that this can be limited by clear instructions, well-constructed questions, and judicious pilot testing of the survey. Biased questions can also be eliminated by constructing the questions properly and by using a pilot test. Finally, bias introduced by untruthful answers can be controlled by internal checks and a good motivational cover letter. Although bias cannot be eliminated totally, proper construction of the questionnaire, a well-chosen sample, follow-up letters, and inducements can help control it.

Bias in Volunteer Samples

This section illustrates the many diverse, and sometimes powerful, factors that influence survey findings as a result of using volunteers in a survey. The conclusions expressed here regarding volunteer samples are provided to make the surveyor aware of the often profound effects of non-respondent bias on survey data.

The exclusive use of volunteers in survey research represents another major source of bias to the surveyor -- especially the novice. Although it may not be immediately evident, it is nonetheless empirically true that volunteers, as a group, possess characteristics quite different from those who do not generally volunteer. Unless the surveyor takes these differences into consideration before choosing to use an exclusively volunteer sample, the bias introduced into the data may be so great that the surveyor can no longer confidently generalize the survey's findings to the population at large, which is usually the goal of the survey.

Fortunately, research findings exist which describe several unique characteristics of the volunteer subject. By using these characteristics appropriately, the surveyor may avoid inadvertent biases and pitfalls usually associated with using and interpreting results from volunteer samples. The following list provides 22 conclusions about unique characteristics of the volunteer. The list is subdivided into categories representing the level of confidence to be placed in the findings. Within each category, the conclusions are listed in order starting with those having the strongest evidence supporting them (from Rosenthal and Rosnow, The Volunteer Subject, 1975; pp 195-196):

Conclusions Warranting Maximum Confidence

  1. Volunteers tend to be better educated than nonvolunteers, especially when personal contact between investigator and respondent is not required.
  2. Volunteers tend to have higher social-class status than nonvolunteers, especially when social class is defined by respondents' own status rather than by parental status.
  3. Volunteers tend to be more intelligent than nonvolunteers when volunteering is for research in general, but not when volunteering is for somewhat less typical types of research such as hypnosis, sensory isolation, sex research, small-group and personality research.
  4. Volunteers tend to be higher in need for social approval than nonvolunteers.
  5. Volunteers tend to be more sociable than nonvolunteers.


Conclusions Warranting Considerable Confidence

  1. Volunteers tend to be more arousal-seeking than nonvolunteers, especially when volunteering is for studies of stress, sensory isolation, and hypnosis.
  2. Volunteers tend to be more unconventional than nonvolunteers, especially when volunteering is for studies of sex behavior.
  3. Females are more likely than males to volunteer for research in general, more likely than males to volunteer for physically and emotionally stressful research (e.g., electric shock, high temperature, sensory deprivation, interviews about sex behavior).
  4. Volunteers tend to be less authoritarian than nonvolunteers.
  5. Jews are more likely to volunteer than Protestants, and Protestants are more likely to volunteer than Roman Catholics.
  6. Volunteers tend to be less conforming than nonvolunteers when volunteering is for research in general, but not when subjects are female and the task is relatively "clinical" (e.g., hypnosis, sleep, or counseling research).


Conclusions Warranting Some Confidence

  1. Volunteers tend to be from smaller towns than nonvolunteers, especially when volunteering is for questionnaire studies.
  2. Volunteers tend to be more interested in religion than nonvolunteers, especially when volunteering is for questionnaire studies.
  3. Volunteers tend to be more altruistic than nonvolunteers.
  4. Volunteers tend to be more self-disclosing than nonvolunteers.
  5. Volunteers tend to be more maladjusted than nonvolunteers, especially when volunteering is for potentially unusual situations (e.g., drugs, hypnosis, high temperature, or vaguely described experiments) or for medical research employing clinical rather than psychometric definitions of psychopathology.
  6. Volunteers tend to be younger than nonvolunteers, especially when volunteering is for laboratory research and especially if they are female.


Conclusions Warranting Minimum Confidence

  1. Volunteers tend to be higher in need for achievement than non-volunteers, especially among American samples.
  2. Volunteers are more likely to be married than nonvolunteers, especially when volunteering is for studies requiring no personal contact between investigator and respondent.
  3. Firstborns are more likely than laterborns to volunteer, especially when recruitment is personal and when the research requires group interaction and a low level of stress.
  4. Volunteers tend to be more anxious than nonvolunteers, especially when volunteering is for standard, nonstressful tasks and especially if they are college students.
  5. Volunteers tend to be more extraverted than nonvolunteers when interaction with others is required by the nature of the research.

Borg and Gall (1979) have suggested how surveyors might use this listing to combat the effects of bias in survey research. For example, they suggest that:

The degree to which these characteristics of volunteer samples affect research results depends on the specific nature of the investigation. For example, a study of the level of intelligence of successful workers in different occupations would probably yield spuriously high results if volunteer subjects were studied, since volunteers tend to be more intelligent than nonvolunteers. On the other hand, in a study concerned with the cooperative behavior of adults in work-group situations, the tendency for volunteers to be more intelligent may have no effect on the results, but the tendency for volunteers to be more sociable could have a significant effect. It is apparent that the use of volunteers in research greatly complicates the interpretation of research results and their generalizability to the target population, which includes many individuals who would not volunteer. (pp 190-191)

Summary

The questionnaire is the means for collecting your survey data. It should be designed with your data collection plan in mind. Each of its three parts should take advantage of the strengths of questionnaires while minimizing their weaknesses. Each of the different kinds of questions is useful for eliciting different types of data, but each should be constructed carefully with well- developed construction guidelines in mind. Properly constructed questions and well-followed survey procedures will allow you to obtain the data needed to check your hypothesis and, at the same time, minimize the chance that one of the many types of bias will invalidate your survey results.

