Survey methodology

Survey methodology is "the study of survey methods".[1] As a field of applied statistics concentrating on human-research surveys, survey methodology studies the sampling of individual units from a population and associated techniques of survey data collection, such as questionnaire construction and methods for improving the number and accuracy of responses to surveys. Survey methodology targets instruments or procedures that ask one or more questions that may or may not be answered.

Researchers carry out statistical surveys with a view towards making statistical inferences about the population being studied; such inferences depend strongly on the survey questions used. Polls about public opinion, public-health surveys, market-research surveys, government surveys and censuses all exemplify quantitative research that uses survey methodology to answer questions about a population. Although censuses do not include a "sample", they do include other aspects of survey methodology, like questionnaires, interviewers, and non-response follow-up techniques. Surveys provide important information for all kinds of public-information and research fields, such as marketing research, psychology, health-care provision and sociology.

Overview

A single survey consists of at least a sample (or full population in the case of a census), a method of data collection (e.g., a questionnaire) and individual questions or items that become data that can be analyzed statistically. A single survey may focus on different types of topics such as preferences (e.g., for a presidential candidate), opinions (e.g., should abortion be legal?), behavior (smoking and alcohol use), or factual information (e.g., income), depending on its purpose. Since survey research is almost always based on a sample of the population, the success of the research is dependent on the representativeness of the sample with respect to a target population of interest to the researcher. That target population can range from the general population of a given country to specific groups of people within that country, to a membership list of a professional organization, or a list of students enrolled in a school system (see also sampling (statistics) and survey sampling). The persons replying to a survey are called respondents, and depending on the questions asked, their answers may represent themselves as individuals, their households, their employers, or other organizations they represent.

Survey methodology as a scientific field seeks to identify principles about sample design, data collection instruments, statistical adjustment of data, data processing, and final data analysis that can create systematic and random survey errors. Survey errors are sometimes analyzed in connection with survey cost; the trade-off is sometimes framed as improving quality within cost constraints, or alternatively, reducing costs for a fixed level of quality. Survey methodology is both a scientific field and a profession, meaning that some professionals in the field study survey errors empirically while others design surveys to reduce them. For survey designers, the task involves making a large set of decisions about thousands of individual features of a survey in order to improve it.[2]

The most important methodological challenges of a survey methodologist include making decisions on how to:[2]

  • Identify and select potential sample members.
  • Contact sampled individuals and collect data from those who are hard to reach (or reluctant to respond).
  • Evaluate and test questions.
  • Select the mode for posing questions and collecting responses.
  • Train and supervise interviewers (if they are involved).
  • Check data files for accuracy and internal consistency.
  • Adjust survey estimates to correct for identified errors (a weighting sketch follows this list).
  • Complement survey data with new data sources (if appropriate).
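
As an illustration of the last two decisions, survey estimates are commonly adjusted by weighting respondents so that the sample composition matches known population totals. The following is a minimal post-stratification sketch in Python; the population shares, respondents, and the single yes/no item are hypothetical.

```python
# Minimal post-stratification sketch: reweight a sample so that its
# demographic composition matches known population totals.
# All figures are hypothetical.

# Known population shares (e.g., from a census).
population_share = {"female": 0.75, "male": 0.25}

# Hypothetical respondents with one answer of interest (1 = "yes").
respondents = [
    {"sex": "female", "answer": 1},
    {"sex": "female", "answer": 0},
    {"sex": "male", "answer": 1},
    {"sex": "male", "answer": 1},
    {"sex": "male", "answer": 0},
]

# Observed sample share per stratum.
n = len(respondents)
sample_share = {
    s: sum(r["sex"] == s for r in respondents) / n for s in population_share
}

# Post-stratification weight = population share / sample share.
weight = {s: population_share[s] / sample_share[s] for s in population_share}

# Weighted estimate of the "yes" proportion.
weighted_yes = sum(weight[r["sex"]] * r["answer"] for r in respondents) / sum(
    weight[r["sex"]] for r in respondents
)
print(f"unweighted: {sum(r['answer'] for r in respondents) / n:.2f}")  # 0.60
print(f"weighted:   {weighted_yes:.2f}")                               # 0.54
```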

Selecting samples

The sample is chosen from the sampling frame, which consists of a list of all members of the population of interest.[3] The goal of a survey is not to describe the sample, but the larger population. This generalizing ability is dependent on the representativeness of the sample, as stated above. Each member of the population is termed an element. There are frequent difficulties in choosing a representative sample. One common error that results is selection bias, which arises when the procedures used to select a sample result in the overrepresentation or underrepresentation of some significant aspect of the population. For instance, if the population of interest consists of 75% females and 25% males, and the sample consists of 40% females and 60% males, then females are underrepresented while males are overrepresented. To minimize selection bias, stratified random sampling is often used: the population is divided into sub-populations called strata, and random samples are drawn from each stratum, or elements are drawn for the sample on a proportional basis.
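
A minimal sketch of proportional stratified random sampling, reusing the hypothetical 75% female / 25% male population from the example above; the frame, stratum labels, and sample size are illustrative assumptions.

```python
import random

random.seed(42)  # reproducible draw for the example

# Hypothetical sampling frame: (unit id, stratum) pairs, 75% female / 25% male.
frame = [(i, "female") for i in range(750)] + [(i, "male") for i in range(750, 1000)]

def stratified_sample(frame, strata, sample_size):
    """Draw a simple random sample within each stratum, allocating
    the sample proportionally to each stratum's share of the frame."""
    sample = []
    for stratum in strata:
        members = [unit for unit in frame if unit[1] == stratum]
        k = round(sample_size * len(members) / len(frame))  # proportional allocation
        sample.extend(random.sample(members, k))
    return sample

sample = stratified_sample(frame, ["female", "male"], sample_size=100)
# The sample mirrors the population: 75 females and 25 males.
print(sum(1 for _, s in sample if s == "female"), "females,",
      sum(1 for _, s in sample if s == "male"), "males")
```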

Modes of data collection

There are several ways of administering a survey. The choice between administration modes is influenced by several factors, including

  1. costs,
  2. coverage of the target population,
  3. flexibility of asking questions,
  4. respondents' willingness to participate and
  5. response accuracy.

Different methods create mode effects that change how respondents answer, and each has different advantages. The most common modes of administration can be summarized as:[4]

  • Telephone
  • Mail (post)
  • Online surveys
  • Mobile surveys
  • Personal in-home surveys
  • Personal mall or street intercept survey
  • Mixed modes

Research designs

There are several different designs, or overall structures, that can be used in survey research. The three general types are cross-sectional, successive independent samples, and longitudinal studies.[3]

Cross-sectional studies

In cross-sectional studies, a sample (or samples) is drawn from the relevant population and studied once.[3] A cross-sectional study describes characteristics of that population at one time, but cannot give any insight as to the causes of population characteristics because it is a predictive, correlational design.

Successive independent samples studies

A successive independent samples design draws multiple random samples from a population at one or more times.[3] This design can study changes within a population, but not changes within individuals, because the same individuals are not surveyed more than once. Therefore, such studies cannot necessarily identify the causes of change over time. For successive independent samples designs to be effective, the samples must be drawn from the same population, and must be equally representative of it. If the samples are not comparable, the changes between samples may be due to demographic characteristics rather than time. In addition, the questions must be asked in the same way so that responses can be compared directly.
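
Population-level change between two such waves is typically assessed by comparing the waves' estimates directly. Below is a minimal sketch of a two-proportion z-test between two hypothetical independent samples; the counts are invented for illustration.

```python
from math import sqrt
from statistics import NormalDist

# Hypothetical waves: share agreeing with some statement.
yes1, n1 = 520, 1000  # wave 1: 52% agree
yes2, n2 = 560, 1000  # wave 2: 56% agree

p1, p2 = yes1 / n1, yes2 / n2
p_pool = (yes1 + yes2) / (n1 + n2)                    # pooled share under H0
se = sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))  # standard error
z = (p2 - p1) / se
p_value = 2 * (1 - NormalDist().cdf(abs(z)))          # two-sided test

print(f"z = {z:.2f}, p = {p_value:.3f}")
# A significant difference indicates change in the population, but it cannot
# say which individuals changed, or why.
```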

Longitudinal studies

Longitudinal studies take measurements of the same random sample at multiple time points.[3] Unlike with a successive independent samples design, this design measures the differences in individual participants' responses over time. This means that a researcher can potentially assess the reasons for response changes by assessing the differences in respondents' experiences. Longitudinal studies are the easiest way to assess the effect of a naturally occurring event, such as divorce, that cannot be tested experimentally.

However, longitudinal studies are both expensive and difficult to do. It is harder to find a sample that will commit to a months- or years-long study than to a 15-minute interview, and participants frequently leave the study before the final assessment. In addition, such studies sometimes require data collection to be confidential or anonymous, which creates additional difficulty in linking participants' responses over time. One potential solution is the use of a self-generated identification code (SGIC).[5] These codes are usually created from elements like 'month of birth' and 'first letter of the mother's middle name.' Some recent anonymous SGIC approaches have also attempted to minimize the use of personalized data even further, instead using questions like 'name of your first pet'.[6][7] Depending on the approach used, the ability to match some portion of the sample can be lost.
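
A minimal sketch of how an SGIC might be assembled from the kinds of elements mentioned above (month of birth, mother's middle initial, first pet's name); the choice of elements, the normalization, and the code format are illustrative assumptions rather than a published standard.

```python
def make_sgic(month_of_birth: int, mother_middle_initial: str,
              first_pet_name: str) -> str:
    """Combine respondent-supplied, non-identifying elements into a code
    that can link the same person's responses across survey waves.
    Normalizing case and whitespace reduces accidental mismatches."""
    return "-".join([
        f"{month_of_birth:02d}",
        mother_middle_initial.strip().upper()[:1],
        first_pet_name.strip().upper()[:3],  # first letters only, for privacy
    ])

# The same answers at wave 1 and wave 2 yield the same code:
print(make_sgic(4, "e", "Rex"))    # -> 04-E-REX
print(make_sgic(4, " E ", "rex"))  # -> 04-E-REX (matches despite formatting)
```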

In addition, the overall attrition of participants is not random, so samples can become less representative with successive assessments. To account for this, a researcher can compare the respondents who left the survey to those who did not, to see if they are statistically different populations. Respondents may also try to answer consistently with their earlier responses, even when their views have actually changed.
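
A minimal sketch of such an attrition check, cross-tabulating completion status against a demographic variable and applying a chi-square test of independence; the counts and age groupings are hypothetical.

```python
from scipy.stats import chi2_contingency

# Hypothetical cross-tabulation of retention by age group.
#         stayed  dropped out
table = [[300, 60],   # age 18-34
         [350, 40],   # age 35-54
         [250, 20]]   # age 55+

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.3f}")
# A small p-value suggests dropouts differ systematically from completers,
# so later waves are less representative and estimates may need reweighting.
```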

Questionnaires

[Image: A basic questionnaire in the Thai language]

Questionnaires are the most commonly used tool in survey research. However, the results of a particular survey are worthless if the questionnaire is written inadequately.[3] Questionnaires should produce valid and reliable measures of demographic variables and should yield valid and reliable measures of the individual differences that self-report scales capture.[3]

Questionnaires as tools

One category of variables that is often measured in survey research is demographic variables, which are used to depict the characteristics of the people surveyed in the sample.[3] Demographic variables include such measures as ethnicity, socioeconomic status, race, and age.[3] Surveys often assess the preferences and attitudes of individuals, and many employ self-report scales to measure people's opinions and judgements about different items presented on a scale.[3] Self-report scales are also used to examine the differences among people on scale items.[3] These self-report scales, which are usually presented in questionnaire form, are among the most used instruments in psychology, and thus it is important that the measures be constructed carefully, while also being reliable and valid.[3]

Reliability and validity of self-report measures

Reliable measures of self-report are defined by their consistency.[3] Thus, a reliable self-report measure produces consistent results every time it is administered.[3] A test's reliability can be measured a few ways.[3] First, one can calculate a test-retest reliability.[3] A test-retest reliability entails administering the same questionnaire to a large sample at two different times.[3] For the questionnaire to be considered reliable, people in the sample do not have to score identically on each test, but rather their position in the score distribution should be similar for both the test and the retest.[3] Self-report measures will generally be more reliable when they have many items measuring a construct.[3] Furthermore, measurements will be more reliable when the factor being measured has greater variability among the individuals in the sample being tested.[3] Finally, there will be greater reliability when instructions for the completion of the questionnaire are clear and when there are limited distractions in the testing environment.[3]

In contrast, a questionnaire is valid if it measures what it was originally intended to measure.[3] Construct validity of a measure is the degree to which it measures the theoretical construct that it was originally supposed to measure.[3]
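
A minimal sketch of the test-retest calculation: the same scale is scored twice and the two sets of scores are correlated, since what matters is that respondents keep roughly the same rank order rather than identical scores. The scores are hypothetical; statistics.correlation requires Python 3.10 or later.

```python
from statistics import correlation

# Hypothetical total scores for eight respondents on the same scale,
# administered at two different times.
test   = [12, 25, 31, 18, 22, 28, 15, 34]
retest = [14, 24, 33, 17, 21, 30, 13, 35]

r = correlation(test, retest)  # Pearson's r; values near 1 suggest reliability
print(f"test-retest r = {r:.2f}")
```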

Composing a questionnaire

Six steps can be employed to construct a questionnaire that will produce reliable and valid results.[3] First, one must decide what kind of information should be collected.[3] Second, one must decide how to administer the questionnaire.[3] Third, one must construct a first draft of the questionnaire.[3] Fourth, the questionnaire should be revised.[3] Next, the questionnaire should be pretested.[3] Finally, the questionnaire should be edited and the procedures for its use should be specified.[3]

Guidelines for the effective wording of questions

The way that a question is phrased can have a large impact on how a research participant will answer the question.[3] Thus, survey researchers must be conscious of their wording when writing survey questions.[3] It is important for researchers to keep in mind that different individuals, cultures, and subcultures can interpret certain words and phrases differently from one another.[3] There are two different types of questions that survey researchers use when writing a questionnaire: free-response questions and closed questions.[3] Free-response questions are open-ended, whereas closed questions are usually multiple-choice.[3] Free-response questions are beneficial because they allow the responder greater flexibility, but they are also very difficult to record and score, requiring extensive coding.[3] In contrast, closed questions can be scored and coded more easily, but they diminish the expressivity and spontaneity of the responder.[3] In general, the vocabulary of the questions should be very simple and direct, and most questions should be less than twenty words.[3] Each question should be edited for "readability" and should avoid leading or loaded questions.[3] Finally, if multiple items are being used to measure one construct, some of the items should be worded in the opposite direction to evade response bias.[3]
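
A minimal sketch of the scoring adjustment that oppositely worded items require: reverse-keyed items must be flipped before items are summed into a construct score. The items, the 1-5 agreement scale, and the reverse-keyed set are hypothetical.

```python
SCALE_MAX = 5                       # hypothetical 1-5 agreement scale
reverse_keyed = {"item2", "item4"}  # items worded in the opposite direction

responses = {"item1": 4, "item2": 2, "item3": 5, "item4": 1}

def construct_score(responses, reverse_keyed, scale_max=SCALE_MAX):
    """Sum item scores, flipping reverse-keyed items (1<->5, 2<->4, ...)."""
    total = 0
    for item, value in responses.items():
        total += (scale_max + 1 - value) if item in reverse_keyed else value
    return total

print(construct_score(responses, reverse_keyed))  # 4 + 4 + 5 + 5 = 18
```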

A respondent's answer to an open-ended question can be coded into a response scale afterwards,[4] or analysed using more qualitative methods.
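
A minimal sketch of such post-hoc coding, assigning numeric codes to free-text answers via a keyword-based coding frame; the frame and the catch-all code below are illustrative assumptions, and real coding frames are developed and validated by human coders.

```python
# Illustrative coding frame: numeric code -> keywords that trigger it.
coding_frame = {
    1: ("economy", "jobs", "inflation"),
    2: ("health", "hospital", "care"),
    3: ("crime", "safety", "police"),
}

def code_response(text: str) -> int:
    """Return the first matching code, or 9 for 'other/uncodable'."""
    lowered = text.lower()
    for code, keywords in coding_frame.items():
        if any(word in lowered for word in keywords):
            return code
    return 9

print(code_response("Mostly worried about inflation and jobs"))  # -> 1
print(code_response("The weather"))                              # -> 9
```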

Order of questions

Survey researchers should carefully construct the order of questions in a questionnaire.[3] For questionnaires that are self-administered, the most interesting questions should be at the beginning of the questionnaire to catch the respondent's attention, while demographic questions should be near the end.[3] In contrast, if a survey is being administered over the telephone or in person, demographic questions should be administered at the beginning of the interview to boost the respondent's confidence.[3] Another reason to be mindful of question order is that it may cause a survey response effect, in which one question may affect how people respond to subsequent questions as a result of priming.

Translating a questionnaire

Translation is crucial to collecting comparable survey data. Questionnaires are translated from a source language into one or more target languages, such as translating from English into Spanish and German. A team approach is recommended in the translation process, involving translators, subject-matter experts, and other persons helpful to the process.[8][9]

Survey translation best practice includes parallel translation, team discussions, and pretesting with real-life people.[10][11] It is not a mechanical word-replacement process. The TRAPD model (Translation, Review, Adjudication, Pretest, and Documentation), originally developed for the European Social Survey, is now "widely used in the global survey research community, although not always labeled as such or implemented in its complete form".[12][13][8] For example, sociolinguistics provides a theoretical framework for questionnaire translation and complements TRAPD. This approach states that for the questionnaire translation to achieve an equivalent communicative effect to the source language, the translation must be linguistically appropriate while incorporating the social practices and cultural norms of the target language.[14]

Nonresponse reduction

The following ways have been recommended for reducing nonresponse[15] in telephone and face-to-face surveys:[16]

  • Advance letter. A short letter is sent in advance to inform the sampled respondents about the upcoming survey. The style of the letter should be personalized but not overdone. First, it announces that a phone call will be made, or that an interviewer wants to make an appointment to do the survey face-to-face. Second, the research topic is described. Last, it allows the surveyor to express appreciation for cooperation and offers the respondent an opening to ask questions about the survey.
  • Training. The interviewers are thoroughly trained in how to ask respondents questions, how to work with computers, and how to schedule callbacks to respondents who were not reached.
  • Short introduction. The interviewer should always start with a short introduction about themselves: their name, the institute they are working for, the length of the interview, and the goal of the interview. It can also be useful to make clear that the interviewer is not selling anything; this has been shown to lead to a slightly higher response rate.[17]
  • Respondent-friendly survey questionnaire. The questions asked must be clear, non-offensive and easy to respond to for the subjects under study.

Brevity is also often cited as increasing response rate. A 1996 literature review found mixed evidence to support this claim for both written and verbal surveys, concluding that other factors may often be more important.[18] A 2010 study looking at 100,000 online surveys found response rate dropped by about 3% at 10 questions and about 6% at 20 questions, with drop-off slowing (for example, only 10% reduction at 40 questions).[19] Other studies showed that quality of response degraded toward the end of long surveys.[20]

Some researchers have also discussed the recipient's role or profession as a potential factor affecting how nonresponse is managed. For example, faxes are not commonly used to distribute surveys, but in a recent study were sometimes preferred by pharmacists, since they frequently receive faxed prescriptions at work but may not always have access to a generally-addressed piece of mail.[21]

Interviewer effects

Survey methodologists have devoted much effort to determining the extent to which interviewee responses are affected by the physical characteristics of the interviewer. The main interviewer traits that have been demonstrated to influence survey responses are race,[22] gender,[23] and relative body weight (BMI).[24] These interviewer effects are particularly pronounced when questions are related to the interviewer trait. Hence, the race of the interviewer has been shown to affect responses to measures regarding racial attitudes,[25] interviewer sex to affect responses to questions involving gender issues,[26] and interviewer BMI to affect answers to eating- and dieting-related questions.[27] While interviewer effects have been investigated mainly for face-to-face surveys, they have also been shown to exist for interview modes with no visual contact, such as telephone surveys, and in video-enhanced web surveys. The explanation typically provided for interviewer effects is social desirability bias: survey participants may attempt to project a positive self-image in an effort to conform to the norms they attribute to the interviewer asking the questions. Interviewer effects are one example of survey response effects.
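
A minimal sketch of a first-pass check for an interviewer effect, comparing mean responses to a sensitive item between respondents assigned to two hypothetical interviewer groups; the data are invented, and real interviewer-effects analyses typically use multilevel models that nest respondents within interviewers.

```python
from scipy.stats import ttest_ind

# Hypothetical ratings on a sensitive item, split by an interviewer
# trait (e.g., group A interviewers have the trait, group B do not).
responses_group_a = [3.2, 2.8, 3.5, 3.0, 2.9, 3.4, 3.1]
responses_group_b = [2.4, 2.6, 2.2, 2.7, 2.5, 2.3, 2.8]

t, p = ttest_ind(responses_group_a, responses_group_b)
print(f"t = {t:.2f}, p = {p:.3f}")
# A significant gap consistent with social desirability bias would motivate
# a fuller interviewer-effects analysis.
```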

The role of big data

Since 2018, survey methodologists have started to examine how big data can complement survey methodology, allowing researchers and practitioners to improve the production of survey statistics and their quality. Big data has a low cost per data point, applies analysis techniques via machine learning and data mining, and includes diverse and new data sources, e.g., registers, social media, apps, and other forms of digital data. There have been three Big Data Meets Survey Science (BigSurv) conferences, in 2018, 2020, and 2023, with another forthcoming in 2025,[28] a special issue of the Social Science Computer Review,[29] a special issue of the Journal of the Royal Statistical Society,[30] a special issue of EPJ Data Science,[31] and a book, Big Data Meets Survey Science,[32] edited by Craig A. Hill and five other Fellows of the American Statistical Association.

References

  1. Groves, Robert M.; Fowler, Floyd J.; Couper, Mick P.; Lepkowski, James M.; Singer, Eleanor; Tourangeau, Roger (2004). "An introduction to survey methodology". Survey Methodology. Wiley Series in Survey Methodology. Vol. 561 (2 ed.). Hoboken, New Jersey: John Wiley & Sons (published 2009). p. 3. ISBN 9780470465462. Retrieved 27 August 2020. [...] survey methodology is the study of survey methods. It is the study of sources of error in surveys and how to make the numbers produced by the surveys as accurate as possible.
  2. Groves, R.M.; Fowler, F.J.; Couper, M.P.; Lepkowski, J.M.; Singer, E.; Tourangeau, R. (2009). Survey Methodology. New Jersey: John Wiley & Sons. ISBN 978-1-118-21134-2.
  3. Shaughnessy, J.; Zechmeister, E.; Zechmeister, J. (2011). Research Methods in Psychology (9th ed.). New York, NY: McGraw Hill. pp. 161–175. ISBN 9780078035180.
  4. Mellenbergh, G.J. (2008). Chapter 9: Surveys. In H.J. Adèr & G.J. Mellenbergh (Eds.) (with contributions by D.J. Hand), Advising on Research Methods: A Consultant's Companion (pp. 183–209). Huizen, The Netherlands: Johannes van Kessel Publishing.
  5. Audette, Lillian M.; Hammond, Marie S.; Rochester, Natalie K. (February 2020). "Methodological Issues With Coding Participants in Anonymous Psychological Longitudinal Studies". Educational and Psychological Measurement. 80 (1): 163–185. doi:10.1177/0013164419843576. ISSN 0013-1644. PMC 6943988. PMID 31933497.
  6. Agley, Jon; Tidd, David; Jun, Mikyoung; Eldridge, Lori; Xiao, Yunyu; Sussman, Steve; Jayawardene, Wasantha; Agley, Daniel; Gassman, Ruth; Dickinson, Stephanie L. (February 2021). "Developing and Validating a Novel Anonymous Method for Matching Longitudinal School-Based Data". Educational and Psychological Measurement. 81 (1): 90–109. doi:10.1177/0013164420938457. ISSN 0013-1644. PMC 7797962. PMID 33456063.
  7. Calatrava, Maria; de Irala, Jokin; Osorio, Alfonso; Benítez, Edgar; Lopez-del Burgo, Cristina (2021-08-12). "Matched and Fully Private? A New Self-Generated Identification Code for School-Based Cohort Studies to Increase Perceived Anonymity". Educational and Psychological Measurement. 82 (3): 465–481. doi:10.1177/00131644211035436. ISSN 0013-1644. PMC 9014735. PMID 35444340. S2CID 238718313.
  8. Harkness, Janet (2003). Cross-Cultural Survey Methods. Wiley. ISBN 0-471-38526-3.
  9. Sha, Mandy; Immerwahr, Stephen (2018-02-19). "Survey Translation: Why and How Should Researchers and Managers be Engaged?". Survey Practice. 11 (2): 1–10. doi:10.29115/SP-2018-0016.
  10. "Special issue on questionnaire translation". World Association of Public Opinion Research. Retrieved October 2, 2023.
  11. Behr, Dorothee; Sha, Mandy (2018). "Translation of questionnaires in cross-national and cross-cultural research". Translation & Interpreting. 10 (2): 1–4.
  12. "Quality in Comparative Surveys" (PDF). Task Force Report, American Association of Public Opinion Research (AAPOR). Retrieved October 2, 2023.
  13. "Quality in Comparative Surveys". Task Force Report, World Association of Public Opinion Research (WAPOR).
  14. Pan, Yuling; Sha, Mandy (2019). The Sociolinguistics of Survey Translation. Routledge Taylor & Francis. ISBN 978-1138550865.
  15. Lynn, P. (2008). "The problem of non-response", chapter 3, pp. 35–55, in International Handbook of Survey Methodology (eds. Edith de Leeuw, Joop Hox & Don A. Dillman). Erlbaum. ISBN 0-8058-5753-2.
  16. Dillman, D.A. (1978). Mail and Telephone Surveys: The Total Design Method. Wiley. ISBN 0-471-21555-4.
  17. De Leeuw, E.D. (2001). "I am not selling anything: Experiments in telephone introductions". Kwantitatieve Methoden, 22, 41–48.
  18. Bogen, Karen (1996). "The effect of questionnaire length on response rates – a review of the literature" (PDF). Proceedings of the Section on Survey Research Methods. American Statistical Association: 1020–1025. Archived from the original (PDF) on Apr 2, 2013. Retrieved 2013-03-19.
  19. Chudoba, Brent (2010-12-10). "Does adding one more question impact survey completion rate?". SurveyMonkey. Retrieved 2017-11-08.
  20. "Respondent engagement and survey length: the long and the short of it". Research Live. April 7, 2010. Retrieved 2013-10-03.
  21. Agley, Jon; Meyerson, Beth; Eldridge, Lori; Smith, Carriann; Arora, Prachi; Richardson, Chanel; Miller, Tara (February 2019). "Just the fax, please: Updating electronic/hybrid methods for surveying pharmacists". Research in Social and Administrative Pharmacy. 15 (2): 226–227. doi:10.1016/j.sapharm.2018.10.028. PMID 30416040. S2CID 53281364.
  22. Hill, M.E. (2002). "Race of the interviewer and perception of skin color: Evidence from the multi-city study of urban inequality". American Sociological Review. 67 (1): 99–108. doi:10.2307/3088935. JSTOR 3088935.
  23. Flores-Macias, F.; Lawson, C. (2008). "Effects of interviewer gender on survey responses: Findings from a household survey in Mexico" (PDF). International Journal of Public Opinion Research. 20 (1): 100–110. doi:10.1093/ijpor/edn007. S2CID 33820854. Archived from the original (PDF) on 2019-03-07.
  24. Eisinga, R.; Te Grotenhuis, M.; Larsen, J.K.; Pelzer, B.; Van Strien, T. (2011). "BMI of interviewer effects". International Journal of Public Opinion Research. 23 (4): 530–543. doi:10.1093/ijpor/edr026. hdl:2066/99794.
  25. Anderson, B.A.; Silver, B.D.; Abramson, P.R. (1988). "The effects of the race of the interviewer on race-related attitudes of black respondents in SRC/CPS national election studies". Public Opinion Quarterly. 52 (3): 1–28. doi:10.1086/269108.
  26. Kane, E.W.; MacAulay, L.J. (1993). "Interviewer gender and gender attitudes". Public Opinion Quarterly. 57 (1): 1–28. doi:10.1086/269352.
  27. Eisinga, R.; Te Grotenhuis, M.; Larsen, J.K.; Pelzer, B. (2011). "Interviewer BMI effects on under- and over-reporting of restrained eating. Evidence from a national Dutch face-to-face survey and a postal follow-up". International Journal of Public Health. 57 (3): 643–647. doi:10.1007/s00038-011-0323-z. PMC 3359459. PMID 22116390.
  28. "BigSurv". www.bigsurv.org. Retrieved 2023-10-21.
  29. Eck, Adam; Cazar, Ana Lucía Córdova; Callegaro, Mario; Biemer, Paul (August 2021). "Big Data Meets Survey Science". Social Science Computer Review. 39 (4): 484–488. doi:10.1177/0894439319883393. ISSN 0894-4393.
  30. "Special issue: Big data meets survey science". Journal of the Royal Statistical Society Series A: Statistics in Society. 185 (Supplement_2). December 2022.
  31. "Integrating Survey and Non-survey Data to Measure Behavior and Public Opinion". EPJ Data Science.
  32. Hill, Craig A.; Biemer, Paul P.; Buskirk, Trent D.; Japec, Lilli; Kirchner, Antje; Kolenikov, Stas; Lyberg, Lars, eds. (2021). Big Data Meets Survey Science: A Collection of Innovative Methods. Hoboken, NJ: Wiley. ISBN 978-1-118-97632-6.

Further reading

  • Abramson, J. J. and Abramson, Z. H. (1999). Survey Methods in Community Medicine: Epidemiological Research, Programme Evaluation, Clinical Trials (5th edition). London: Churchill Livingstone/Elsevier Health Sciences ISBN 0-443-06163-7
  • Adèr, H. J., Mellenbergh, G. J., and Hand, D. J. (2008). Advising on research methods: A consultant's companion. Huizen, The Netherlands: Johannes van Kessel Publishing.
  • Dillman, D.A. (1978) Mail and telephone surveys: The total design method. New York: Wiley. ISBN 0-471-21555-4
  • Engel, U., Jann, B., Lynn, P., Scherpenzeel, A. and Sturgis, P. (2014). Improving Survey Methods: Lessons from Recent Research. New York: Routledge. ISBN 978-0-415-81762-2
  • Groves, R.M. (1989). Survey Errors and Survey Costs Wiley. ISBN 0-471-61171-9
  • Griffith, James (2014). "Survey Research in Military Settings." In Routledge Handbook of Research Methods in Military Studies, edited by Joseph Soeters, Patricia Shields and Sebastiaan Rietjens, pp. 179–193. New York: Routledge.
  • Leung, Wai-Ching (2001) "Conducting a Survey", in Student BMJ, (British Medical Journal, Student Edition), May 2001
  • Ornstein, M.D. (1998). "Survey Research." Current Sociology 46(4): iii-136.
  • Prince, S. A., Adamo, K. B., Hamel, M., Hardt, J., Connor Gorber, S., & Tremblay, M. (2008). A comparison of direct versus self-report measures for assessing physical activity in adults: a systematic review. International Journal of Behavioral Nutrition and Physical Activity, 5(1), 56. http://doi.org/10.1186/1479-5868-5-56
  • Shaughnessy, J. J., Zechmeister, E. B., & Zechmeister, J. S. (2006). Research Methods in Psychology (7th ed.). McGraw–Hill Higher Education. ISBN 0-07-111655-9 (pp. 143–192)
  • Singh, S. (2003). Advanced Sampling Theory with Applications: How Michael Selected Amy. Kluwer Academic Publishers, The Netherlands.
  • Soeters, Joseph; Shields, Patricia and Rietjens, Sebastiaan (2014). Routledge Handbook of Research Methods in Military Studies. New York: Routledge.
  • Shackman, G. (2018). What is Program Evaluation? A Beginner's Guide.