Quantitative research has long played an important role in TESOL, but over the years the standards for conducting and reporting it have shifted. In part because of the educational role TESOL Quarterly plays in modeling research in the field, it is of particular concern that published research articles meet current standards. To support this goal, the following guidelines and references are provided for quantitative research papers submitted to TESOL Quarterly.
Explain the point of the study. What problem is being addressed? Why is it interesting or important from a theoretical perspective? Briefly review the literature, emphasizing pertinent and relevant findings, methodological issues, and gaps in understanding. Conclude the introduction with a statement of purpose, your research questions, and, where relevant, your hypotheses; clearly explain the rationale for each hypothesis.
Explain your study in enough detail that it could be replicated.
Participants. Clearly state whether there is a population that you would ideally want to generalize to; explain the characteristics of that population. Explain your sampling procedure. If you are using a convenience sample, be sure to say so. Arguments for representativeness can be strengthened by comparing characteristics of the sample with that of the population on a range of variables. Describe the characteristics and the size of the sample. When appropriate, describe how participants were assigned to groups.
Measures. Describe all instruments and report their measurement properties (i.e., reliability and validity). Provide estimates of the reliability of the scores in your sample in addition to reliability estimates provided by test publishers, other researchers, or both. When you make judgments about performance or when language samples are coded for linguistic characteristics, include estimates of classification dependability or coder agreement.
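Coder agreement can be estimated in several ways; one common index is Cohen's kappa, which corrects raw agreement for chance. The following sketch illustrates the calculation for two coders (the coding categories and labels are invented for illustration):

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa for two coders' categorical judgments:
    observed agreement corrected for chance agreement."""
    assert len(coder_a) == len(coder_b)
    n = len(coder_a)
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    freq_a = Counter(coder_a)
    freq_b = Counter(coder_b)
    # Chance agreement: product of each coder's marginal proportions
    expected = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / n ** 2
    return (observed - expected) / (1 - expected)

# Hypothetical codes assigned by two raters to ten language samples
a = ["error", "ok", "ok", "error", "ok", "ok", "error", "ok", "ok", "ok"]
b = ["error", "ok", "ok", "ok",    "ok", "ok", "error", "ok", "ok", "ok"]
print(round(cohens_kappa(a, b), 2))  # 0.74
```

Report the kappa value alongside the raw percentage agreement, since the two can diverge considerably when one category dominates.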
Procedure. Describe the conditions under which you administered your instruments.
- Design. Make clear what type of study you conducted: was it evaluating a priori hypotheses, or was it exploratory, intended to generate hypotheses? Was it a meta-analysis? Explain your design, and state whether your comparisons were within subjects, between subjects, or both; refer to standard works, such as textbooks on research design, for terminology. If you collected the data yourself, describe the methods you used to deal with experimenter bias. If you assigned participants to subgroups, explain how you did so. If you used random assignment, tell readers how the randomization was done (e.g., coin toss, random numbers table, computerized random number generation). If you did not use random assignment, identify relevant covariates and explain how you measured and adjusted for them, either statistically or by design. Describe the characteristics and the size of the subgroups. In place of the terms experimental group and control group, use treatment group and contrast group.
- Variables. Define the variables in the study. Make explicit the link between the theoretical constructs and the way(s) they have been operationalized in your study. Define the role of each variable in your study (e.g., dependent, independent, moderating, control). Explain how you measured or otherwise observed the variables.
- Power and sample size. Provide information on the sample size and the process that led to the decision to use that size. Provide information on the anticipated effect size as you have estimated it from previous research. Provide the alpha level used in the study, discussing the risk of Type I error. Provide the power of your study (calculate it using a standard reference such as Cohen, 1988, or a computer program). Discuss the risk of Type II error.
- Explain the data collected and their statistical treatment as well as all relevant results in relation to your research questions. Interpretation of results is not appropriate in this section.
- Report unanticipated events that occurred during your data collection. Explain how the actual analysis differs from the planned analysis. Explain your handling of missing data.
- Explain the techniques you used to "clean" your data set.
- Choose a minimally sufficient statistical procedure; provide a rationale for its use and a textbook reference for it. Specify any computer programs used.
- Describe the assumptions for each procedure and the steps you took to ensure that they were not violated.
- When using inferential statistics, provide the descriptive statistics, confidence intervals, and sample sizes for each variable as well as the value of the test statistic, its direction, the degrees of freedom, and the significance level (report the actual p value).
- Always supplement the reporting of an actual p value with a measure of effect magnitude (e.g., measures of strength of association or measures of effect size). Briefly contextualize the magnitude of the effect in theoretical and practical terms. Confidence intervals for the effect magnitudes of principal outcomes are recommended.
- If you use multiple statistical analyses (e.g., t tests, analyses of variance, correlations), make the required adjustments to the alpha level (e.g., a Bonferroni correction).
- Avoid inferring causality, particularly in nonrandomized designs or without further experimentation.
- Use tables to provide exact values; present all values with two places to the right of the decimal point.
- Use figures to convey global effects. Keep figures small; include graphic representations of confidence intervals whenever possible.
- Always tell the reader what to look for in tables and figures.
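Computerized random number generation, one of the randomization methods mentioned under the design guideline above, is straightforward to document in a way that supports replication. A minimal sketch (the participant IDs, group sizes, and seed are invented for illustration):

```python
import random

# Computerized random assignment: shuffle the participant IDs,
# then split the shuffled list into two equal groups.
participants = [f"P{i:02d}" for i in range(1, 21)]  # 20 hypothetical IDs
rng = random.Random(2003)  # a fixed seed makes the assignment reproducible
shuffled = participants[:]
rng.shuffle(shuffled)
treatment_group = shuffled[:10]
contrast_group = shuffled[10:]
print(len(treatment_group), len(contrast_group))  # 10 10
```

Reporting the procedure (and, where feasible, the seed) lets readers verify that assignment was genuinely random rather than haphazard.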
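The power calculation described under the power and sample size guideline can be approximated without specialized software. The sketch below uses the normal approximation for a two-tailed, two-sample comparison; the effect size and per-group n are invented illustrative values, and Cohen (1988) remains the standard reference for exact figures:

```python
from math import sqrt
from statistics import NormalDist

def approx_power(d, n_per_group, alpha=0.05):
    """Approximate power of a two-tailed, two-sample t test via the
    normal approximation (adequate for moderate n; ignores the
    negligible chance of rejecting in the wrong tail)."""
    nd = NormalDist()
    z_crit = nd.inv_cdf(1 - alpha / 2)          # critical value
    noncentrality = d * sqrt(n_per_group / 2)   # expected z under H1
    return nd.cdf(noncentrality - z_crit)

# A medium effect (d = .50) with 64 participants per group yields
# power close to the conventional .80 benchmark.
print(round(approx_power(0.5, 64), 2))  # 0.81
```

Running such a calculation before data collection, rather than after, is what justifies the chosen sample size in the report.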
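As an illustration of pairing a group comparison with an effect-magnitude estimate, as the bullets above recommend, the following sketch computes Cohen's d with a pooled standard deviation and an approximate confidence interval from the common large-sample standard error. The scores are invented; for intervals based on noncentral distributions, see Cumming and Finch (2001b) and Smithson (2001):

```python
from math import sqrt
from statistics import mean, stdev

def cohens_d(group1, group2):
    """Standardized mean difference using the pooled SD."""
    n1, n2 = len(group1), len(group2)
    pooled_var = ((n1 - 1) * stdev(group1) ** 2 +
                  (n2 - 1) * stdev(group2) ** 2) / (n1 + n2 - 2)
    return (mean(group1) - mean(group2)) / sqrt(pooled_var)

def d_confidence_interval(d, n1, n2, z=1.96):
    """Approximate 95% CI for d (large-sample standard error)."""
    se = sqrt((n1 + n2) / (n1 * n2) + d ** 2 / (2 * (n1 + n2)))
    return d - z * se, d + z * se

treatment = [74, 68, 81, 77, 70, 79, 73, 76]  # invented test scores
contrast  = [69, 64, 72, 70, 66, 71, 68, 65]
d = cohens_d(treatment, contrast)
lo, hi = d_confidence_interval(d, len(treatment), len(contrast))
print(f"d = {d:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
```

The interval, not just the point estimate, belongs in the report: with small samples like these, even a large d is estimated imprecisely.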
Interpretation. Clearly state your findings for each of your research questions and their associated hypotheses. Note similarities to and differences from effect sizes reported in the literature. Discuss whether the methodology and analysis are sound enough to support strong conclusions.
Conclusions. Identify the theoretical and practical implications of your study. Acknowledge its weaknesses and limitations, and suggest improvements. Provide recommendations for future research that are thoughtful and grounded both in your results and in the literature.
References and Further Reading on Quantitative Research
Abelson, R. P. (1997). On the surprising longevity of flogged horses: Why there is a case for the significance test. Psychological Science, 8, 12-15.
American Psychological Association. (1994). Publication manual of the American Psychological Association (4th ed.). Washington, DC: Author.
American Psychological Association. (2001). Publication manual of the American Psychological Association (5th ed.). Washington, DC: Author.
Anderson, D. (2000). Problems with the hypothesis testing approach. Retrieved January 29, 2003, from http://www.cnr.colostate.edu/~anderson/quotes.pdf
Bailar, J. C., & Mosteller, F. (1988). Guidelines for statistical reporting in articles for medical journals. Annals of Internal Medicine, 108, 266-273.
Baugh, F. (2002). Correcting effect sizes for score reliability: A reminder that measurement and substantive issues are linked inextricably. Educational and Psychological Measurement, 62, 254-263.
Bird, K. D. (2002). Confidence intervals for effect sizes in analysis of variance. Educational and Psychological Measurement, 62, 197-226.
Cohen, J. (1988). Statistical power analysis for the behavioral sciences. Hillsdale, NJ: Erlbaum.
Cohen, J. (1994). The earth is round (p < .05). American Psychologist, 49, 997-1003.
Cook, T. D., Cooper, H., Cordray, D. S., Hartman, H., Hedges, L. V., Light, R., et al. (Eds.). (1992). Meta-analysis for explanation: A casebook. New York: Russell Sage Foundation.
Cumming, G., & Finch, S. (2001a). ESCI: Exploratory Software for Confidence Intervals [Computer software]. Victoria, Australia: La Trobe University. Available from http://www.psy.latrobe.edu.au/esci
Cumming, G., & Finch, S. (2001b). A primer on the understanding, use, and calculation of confidence intervals that are based on central and noncentral distributions. Educational and Psychological Measurement, 61, 532-574.
Fan, X., & Thompson, B. (2001). Confidence intervals about score reliability coefficients, please: An EPM guidelines editorial. Educational and Psychological Measurement, 61, 517-531.
Gall, M. D., Gall, J. P., & Borg, W. R. (2002). Educational research: An introduction (7th ed.). Boston: Allyn & Bacon.
Hatch, E., & Lazaraton, A. (1991). The research manual: Design and statistics for applied linguistics. New York: Newbury House.
Hinkle, D. E., Wiersma, W., & Jurs, S. G. (2003). Applied statistics for the behavioral sciences (5th ed.). Boston: Houghton Mifflin.
Huberty, C. J. (1993). Historical origins of statistical testing practices: The treatment of Fisher versus Neyman-Pearson views in textbooks. Journal of Experimental Education, 61, 317-333.
Huberty, C. J. (2002). A history of effect size indices. Educational and Psychological Measurement, 62, 227-240.
Hunter, J. E. (1997). Needed: A ban on the significance test. Psychological Science, 8, 3-7.
Kirk, R. E. (1996). Practical significance: A concept whose time has come. Educational and Psychological Measurement, 56, 746-759.
Minium, E. W. (1978). Statistical reasoning in psychology and education. New York: Wiley.
Mittag, K. C., & Thompson, B. (2000). A national survey of AERA members' perceptions of statistical significance tests and other statistical issues. Educational Researcher, 29(4), 14-20.
Montgomery, D. (2000). Design and analysis of experiments (5th ed.). New York: Wiley.
Myers, J. L., & Well, A. D. (1995). Research design and statistical analysis. Hillsdale, NJ: Erlbaum.
Parkhurst, D. F. (1997). Commentaries on significance testing. Retrieved January 29, 2003, from http://www.indiana.edu/~stigtsts/
Roberts, J. K., & Henson, R. (2002). Correcting for bias in estimating effect sizes. Educational and Psychological Measurement, 62, 241-253.
Rosenthal, R. (1994). Parametric measures of effect size. In H. Cooper & L. V. Hedges (Eds.), The handbook of research synthesis (pp. 231-244). New York: Russell Sage Foundation.
Schmidt, F. L. (1996). Statistical significance testing and cumulative knowledge in psychology: Implications for the training of researchers. Psychological Methods, 1, 115-129.
Shadish, W., Robinson, L., & Lu, C. (1999). ES: A Computer Program for Effect Size Calculation [Computer software]. St. Paul, MN: Assessment Systems.
Smithson, M. J. (2001). Correct confidence intervals for various regression effect sizes and parameters: The importance of noncentral distributions in computing intervals. Educational and Psychological Measurement, 61, 605-632.
Smithson, M. J. (2002). Scripts and software for noncentral confidence interval and power calculations. Retrieved January 29, 2003, from http://www.anu.edu.au/psychology/staff/mike/CIstuff/CI.html
Thompson, B. (1999). Journal editorial policies regarding statistical significance tests: Heat is to fire as p is to importance. Educational Psychology Review, 11, 157-169.
Thompson, B. (2000). Various editorial policies regarding statistical significance tests and effect sizes. Retrieved January 29, 2003, from http://www.coe.tamu.edu/~bthompson/journals.htm
Thompson, B. (2002). What future quantitative social science research could look like: Confidence intervals for effect sizes. Educational Researcher, 31(3), 25-32.
Thompson, B., & Vacha-Haase, T. (2000). Psychometrics is datametrics: The test is not reliable. Educational and Psychological Measurement, 60, 174-195.
Thompson, W. L. (2000). 326 articles/books questioning the indiscriminate use of statistical hypothesis tests in observational studies. Retrieved January 29, 2003, from http://www.cnr.colostate.edu/~anderson/thompson1.html
Vacha-Haase, T., Nilsson, J. E., Reetz, D. R., Lance, T. S., & Thompson, B. (2000). Reporting practices and APA editorial policies regarding statistical significance and effect size. Theory and Psychology, 10, 413-425.
Wilkinson, L., & Task Force on Statistical Inference. (1999). Statistical methods in psychology journals: Guidelines and explanations [Electronic version]. American Psychologist, 54, 594-604. Retrieved January 29, 2002, from http://www.apa.org/journals/amp/amp548594.html