March 2014’s topic is going to require both sides of the debate to provide research evidence as to the effects of single-gender and co-ed classrooms. This means a lot of clash is going to be over whose research is “better.” How do you determine which research is “better,” and how do you explain that comparison to the judge? One way to analyze your own research sources and your opponents’ is to look into the methodology.
: a set of methods, rules, or ideas that are important in a science or art
: a particular procedure or set of procedures
This is a word you will hear in debate, in your science and social science classes, and perhaps on the news from time to time. It’s important to understand in debate and in life because the methods and procedures used to do research affect the results of that research.
Knowing what methodology is, however, is not enough. We need to know how to think about it and what to use it for in round. For help with this I’m going to use a page titled Assessing Methodology from George Washington University (Last Update: June 29, 2000). It thoroughly outlines the components of methodology; here I will cover the most important pieces and adapt them for debate. I highly encourage you to read the entire page on your own.
“Research design specifies what group(s) for which data will be collected, to which group(s) and when the intervention will occur, and when the data will collected from each group. The strength of a design, and the possible biases inherent in a design, depend on the type of questions being addressed in the research.”
Design is an overall look at how the research is being conducted. Knowing the strength of a design, however, takes expertise in research methods and knowledge of the topic being studied. Design is not the easiest point of attack for you, a debater. If you can find an expert who criticizes a research study’s design, however, that criticism makes for a great piece of rebuttal evidence, or it should make you question using the evidence yourself.
“Sometimes a study involves the entire population of interest, but more often it involves only a small portion of the students, employees, families, schools, communities, or other ‘units of analysis.’ Sampling serves three purposes:
- It reduces the costs and time required to do the research;
- It often improves the quality of information by allowing more intensive data collection than would otherwise be possible; and,
- it reduces the burden on respondents.”
Every study selects a sample. This sample can have a huge effect on the results. Sample size, sample demographics (for human studies), sample location, and other characteristics of the sample will affect the generalizability of research findings.
To generalize is:
: to make a general statement or form a general opinion; especially : to state an opinion about a larger group that is based on a smaller number of people or things within that group
: to apply (something specific, such as a theory or rule) to a larger group
When evidence is applied to the resolution, you should make sure that claims about the larger topic can reasonably be generalized from the sample of the research. If the two are not similar, if there are marked differences, perhaps the research’s findings should not be generalized to the topic. For example, perhaps you find a study that looked only at single-gender schools for girls, but not boys; this would not be a complete picture of single-gender classrooms. Perhaps studies in other countries should be discounted if their public school systems are very dissimilar to ours. You want to think about the characteristics of the sample. Generally, the larger the sample size, the more representative the sample, so you can prefer studies conducted with larger sample sizes. If the sample’s characteristics do not match the topic and there are other design flaws, however, a large sample size does not make for a good study by itself.
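The intuition that larger samples are more representative can be made concrete with a quick simulation. This is a minimal sketch with made-up numbers (a hypothetical population of test scores, not data from any real study): it draws samples of different sizes and measures how far, on average, the sample mean lands from the true population mean.

```python
import random
import statistics

random.seed(42)

# Hypothetical population: 100,000 test scores (illustrative numbers only).
population = [random.gauss(500, 100) for _ in range(100_000)]
true_mean = statistics.mean(population)

def sample_error(n, trials=200):
    """Average distance between a sample mean and the population mean."""
    errors = []
    for _ in range(trials):
        sample = random.sample(population, n)
        errors.append(abs(statistics.mean(sample) - true_mean))
    return statistics.mean(errors)

for n in (10, 100, 1000):
    print(f"n={n:5d}  average error = {sample_error(n):.2f}")
```

Running this shows the average error shrinking as the sample grows, which is the statistical reason to prefer larger samples, all else being equal. It also shows why sample size alone isn’t enough: a huge sample drawn from the wrong population is precisely wrong.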
“The means of data collection in social science are diverse. For instance, one can observe and code or note, administer tests of skills, administer various personality and attitude inventories, interview people in person or by phone, mail out questionnaires, content-analyze transcripts of dialogue, and review official documents.
There are two key elements of data collection in quantitative research: the instruments and the data collection procedures. The term “instruments” in the social sciences usually refers to written forms on which the researchers or the people being studied record information. Mechanical and electrical measure are also occasionally used.
Two concepts are central to quantitative measurement: reliability and validity. Reliability means the instrument consistently gives the same value when measuring any given level of a phenomena. Validity means that the value yielded by the instrument is accurate. Reliability is necessary but not sufficient for valid measurement. ”
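The reliability/validity distinction is easier to see with simulated instruments. In this sketch (invented numbers, purely illustrative) instrument A is reliable but not valid: its readings are tightly clustered but biased high. Instrument B is valid on average but unreliable: unbiased but noisy.

```python
import random
import statistics

random.seed(1)
TRUE_VALUE = 10.0

# Instrument A: reliable but not valid -- consistent readings, biased high.
instrument_a = [TRUE_VALUE + 2.0 + random.gauss(0, 0.1) for _ in range(50)]

# Instrument B: valid on average but unreliable -- unbiased, noisy.
instrument_b = [TRUE_VALUE + random.gauss(0, 3.0) for _ in range(50)]

for name, readings in (("A", instrument_a), ("B", instrument_b)):
    print(name,
          f"mean={statistics.mean(readings):.2f}",    # closeness to 10.0 ~ validity
          f"stdev={statistics.stdev(readings):.2f}")  # consistency ~ reliability
```

Instrument A’s small spread is exactly what GWU means by “reliability is necessary but not sufficient”: it gives the same answer every time, and that answer is wrong.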
GWU does a good job explaining these concepts and I don’t have much to add. Know how the data was collected in your research evidence.
One note GWU makes is very, VERY important.
“Virtually all data collection methods have their shortcomings and potential biases. Experienced researchers, both quantitative and qualitative, know it is best to try to measure the most important variables with multiple items and/or multiple means, and then compare the results. “
Your studies should admit their shortcomings and biases. KNOW THESE. You should be able to defend the findings in spite of these shortcomings. No study is perfect. You should also know that your opponents’ research will have these same kinds of shortcomings. If the shortcomings overwhelm the positive aspects of a study, make sure to point this out to the judge.
“In quantitative research, well established statistical procedures are usually used.”
Trust the expert statisticians on picking apart quantitative data analysis. Don’t pretend you know what you’re talking about here (unless you do).
“The data analysis of qualitative research is generally inductive, interactive, and iterative. It usually involves the identification of categories, themes, relations among both, and the cross verification of tentative answers to descriptive, associational, and causal questions. The analysis is often described or implied in the discussion of the findings. Competent and careful qualitative data analysis is usually indicated by the researcher exhibiting healthy skepticism, drawing on multiple lines of evidence, and testing his or her early findings with subsequent evidence.”
Well said, GWU. Qualitative findings are usually much easier to decipher and understand. In both types of study, you will find the conclusions of the data analysis in the conclusion or discussion section of the research paper. I don’t suggest reading the actual data analysis sections; these usually employ technical language, and the conclusion will explain the findings in more understandable terms.
Other questions you can ask about research (all from GWU):
- “Is the design suitable for the types of questions to be answered?”
- “Is the sample likely to be representative of the population or sub-population of interest and was data secured from a large portion of the initial sample?”
- “Are the data collection instruments and procedures likely to have measured all the important characteristics with reasonable accuracy?”
- “Are the quantitative analysis procedures appropriate, does the qualitative analysis cross-verify important findings, and does all the data analysis appear to have been done with care?”

All of these areas and questions give you entry points to understanding your evidence as well as analyzing and attacking your opponents’ evidence.
One study that looks promising was conducted by Janet Hyde, a professor of psychology at the University of Wisconsin-Madison. She claims that “the study was the largest and most thorough effort to examine the issue to date.” See the following review here.
“We looked at 184 studies, representing the testing of 1.6 million students in grades K-12 from 21 nations, for outcomes related to science and mathematics performance, educational attitudes and aspirations, self-concept and gender stereotyping,” says Hyde. “From these, we selected 57 studies that corrected for factors like parental education and economics, which are known to benefit children’s school performance.”
The study, published in the online Psychological Bulletin Feb. 3, used an analytical technique called meta-analysis, which draws conclusions from multiple studies of an issue. “We are trying to shed some light by putting together studies that applied different methods to different populations,” says Hyde. “If you do this right, the whole becomes greater than the sum of its parts.”
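To demystify what a meta-analysis actually does, here is a minimal sketch of one common technique, fixed-effect inverse-variance pooling. The effect sizes and standard errors below are made up for illustration; they are not Hyde’s actual data, and her analysis may use a different model.

```python
# Fixed-effect meta-analysis via inverse-variance weighting.
# Each study contributes (effect size, standard error); the numbers
# below are hypothetical, not from any real study.
studies = [
    (0.10, 0.05),
    (0.02, 0.08),
    (-0.05, 0.12),
]

weights = [1 / se**2 for _, se in studies]  # more precise studies count more
pooled = sum(w * d for (d, _), w in zip(studies, weights)) / sum(weights)
pooled_se = (1 / sum(weights)) ** 0.5

print(f"pooled effect = {pooled:.3f} ± {pooled_se:.3f}")
# prints: pooled effect = 0.063 ± 0.040
```

Notice how the pooled estimate sits closest to the most precise study, and how the pooled standard error is smaller than any single study’s; this is the sense in which “the whole becomes greater than the sum of its parts.”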
As far as methodology goes, the sampling looks strong and very representative, the measures comprehensive, and the data selection very careful. Take a look at the study yourself.
Ask for the methodology of your opponents’ qualitative and quantitative research.
Know your studies’ methodologies.
Use methodology as a rebuttal tool, a factor in impact calculus, and ultimately another way to narrow down the round to what should matter for the judge.