Author: Suhan Yao
Date: Monday, February 1, 2016, 3:24:15 PM CST
Subject: Question #1_Matheson_Skills for Evaluators

Unique Domains of Knowledge and Skills for Evaluators

In the Mathison article, it is suggested that evaluators must know "how to search for unintended and side effects, how to determine values within different points of view, how to deal with controversial issues and values, and how to synthesize facts and values" (p. 186). How is it possible to develop these skills? (You can talk about all of the skills or choose one or more.)
In my response, I will discuss contradictory statements by Mathison, quote Scriven (who was originally interviewed for the article Mathison references), and answer the following questions:
- Which methods are not well suited to evaluating unintended side effects
- How to determine values within different points of view
- How to synthesize facts and values
My response does not answer how to deal with controversial values and issues, though it does reference a few controversial values and issues raised by this reading, and it points to where you can find Scriven's response to that question.
According to Mathison (2007, p. 185), clinical trials and evaluations differ from educational evaluations because of their quasi-experimental or regression discontinuity designs.
Defining Quasi-Experimental Designs
The Encyclopedia of Evaluation references quasi-experimental designs, as do Shadish, Cook, and Campbell (2002). According to Crabbé and Leroy (2008), "A (quasi-)experiment offers an important advantage. It is the only method that has the potential to provide certainty regarding causal relationships between a (policy) intervention and an effect. On the other hand there are a number of important disadvantages" (I will paraphrase these below). They also note that "one of the first books on (quasi-)experimental designs in the social sciences was How to Experiment in Education by McCall (1923)," but Mathison's paper is specifically referencing a divergence from educational and social science paradigms.
Quasi-experiment disadvantages, paraphrased from Crabbé and Leroy (2008); a brief illustrative sketch follows the list:
- Doesn’t account for context (references context as an explanatory variable)
- Policy evaluation does not suffice to ascertain a causal relationship
- The possibility of randomizing (random allocation of individuals to the experiment and control groups) depends on the willingness of those individuals to participate
- Some causal relationships are too complex to model
- Side effects are not studied (e.g. consequences, unanticipated effects are out of scope)
- Unsuitable for policy processes and policy implementation
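To make the quasi-experimental idea concrete, here is a minimal sketch of one common setup, a nonequivalent control group design analyzed with a simple difference-in-differences calculation. This is my own illustration, not taken from Crabbé and Leroy or Mathison, and every number and group name in it is hypothetical.

```python
# Hypothetical nonequivalent control group (quasi-)experiment.
# Two pre-existing groups (no random assignment): one receives a policy
# intervention, the other does not. A difference-in-differences estimate
# compares the change over time in the treated group with the change in the
# comparison group. All numbers below are made up for illustration.

treated_pre,  treated_post = [52, 48, 55, 50, 49], [61, 58, 66, 60, 57]
control_pre,  control_post = [50, 47, 53, 51, 46], [54, 50, 57, 55, 49]

def mean(xs):
    return sum(xs) / len(xs)

# Change in each group over time
treated_change = mean(treated_post) - mean(treated_pre)
control_change = mean(control_post) - mean(control_pre)

# Difference-in-differences: the extra change in the treated group,
# over and above the trend seen in the comparison group.
did_estimate = treated_change - control_change

print(f"Treated change: {treated_change:.2f}")
print(f"Control change: {control_change:.2f}")
print(f"Difference-in-differences estimate: {did_estimate:.2f}")
```

The estimate is only as credible as the assumption that the two groups would have trended the same way without the intervention, which is exactly the kind of limitation the list above describes.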
Defining Regression Discontinuity Designs
Regression discontinuity design is one of three classes of quasi-experimental design, the other two being the interrupted time series design and the nonequivalent control group design (Reis & Judd, p. 49).
Guido W. Imbens and Thomas Lemieux (2007) give a good definition in their abstract: "In regression discontinuity (RD) designs for evaluating causal effects of interventions, assignment to a treatment is determined at least partly by the value of an observed covariate lying on either side of a fixed threshold. These designs were first introduced in the evaluation literature by Thistlethwaite and Campbell [1960. Regression-discontinuity analysis: an alternative to the ex-post facto experiment. Journal of Educational Psychology 51, 309–317]. With the exception of a few unpublished theoretical papers, these methods did not attract much attention in the economics literature until recently. Starting in the late 1990s, there has been a large number of studies in economics applying and extending RD methods. In this paper we review some of the practical and theoretical issues in implementation of RD methods."
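As a rough illustration of that definition, here is a minimal sketch of a sharp RD analysis in Python. It is not taken from Imbens and Lemieux; the data, cutoff, bandwidth, and effect size are all hypothetical, and a real analysis would involve far more care in choosing the bandwidth and functional form.

```python
import numpy as np

# Hypothetical sharp regression discontinuity setup: units whose observed
# covariate (e.g., a test score) is at or above a fixed cutoff receive the
# intervention; units below it do not. All values here are made up.
rng = np.random.default_rng(0)
cutoff = 60.0
true_effect = 8.0

score = rng.uniform(30, 90, size=500)   # the observed assignment covariate
treated = score >= cutoff               # deterministic assignment at the threshold
outcome = 0.5 * score + true_effect * treated + rng.normal(0, 3, size=500)

# Estimate the effect locally: fit a separate line on each side of the cutoff
# within a bandwidth, then compare the two fitted values at the threshold.
bandwidth = 10.0
left = (score >= cutoff - bandwidth) & (score < cutoff)
right = (score >= cutoff) & (score <= cutoff + bandwidth)

left_fit = np.polyfit(score[left], outcome[left], 1)
right_fit = np.polyfit(score[right], outcome[right], 1)

rd_estimate = np.polyval(right_fit, cutoff) - np.polyval(left_fit, cutoff)
print(f"Estimated jump at the cutoff: {rd_estimate:.2f} (true effect: {true_effect})")
```

The jump between the two fitted lines at the threshold is the RD estimate of the intervention's effect; the idea is that units just below and just above the cutoff are otherwise comparable, so the comparison is credible even without random assignment.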
Evaluative Questions
On pages 185-186, Mathison (2007) begins to pose questions that new methods could answer, including "feasibility, practicability, needs, costs, intended and unintended outcomes, ethics, and justifiability; as well as research versus evaluation." She does this because she wants us to consider an approach to the evaluation process that diverges from that of most social science research methods. She makes this point on page 185 and repeats it on page 186, just before the question Suhan Yao asks us to discuss.
Mathison references Scriven (Coffman, 2003-2004), who she says "suggests evaluators must also know how to search for unintended and side effects, how to determine values within different points of view, how to deal with controversial issues and values, and how to synthesize facts and values."
In this paragraph she references the following resource, which is no longer active on Harvard’s website:
Coffman, J. (2003-2004, Winter). Michael Scriven on the differences between evaluation and social science research. The Evaluation Exchange, 9(4). www.gse.harvard.edu/hfrp/eval/issue24/expert.html
I did, however, find multiple references to Scriven's research. According to The Evaluation Exchange, Michael Scriven is a world-renowned evaluator, a professor of evaluation at the University of Auckland, New Zealand, and a professor of psychology at Claremont Graduate University in California.
You can find the interview that Mathison references in her research; it is number 6 in my references below.
I would like to point out that Mathison paraphrases Scriven and takes this interview slightly out of context to suit her research. In the interview with the Harvard Evaluation Exchange, Scriven answers three questions:
• How are evaluation and social science research different?
• What unique skills do evaluators need?
• Why aren’t the differences between evaluation and social science research widely understood or accepted?
Mathison connects her conclusion on page 195 to her separation of evaluation from social science research on page 185. She does this by emphasizing Scriven's work. Scriven notes that evaluation is a process that "identifies relevant values or standards that apply to what is being evaluated, performs empirical investigation using techniques from the social sciences, and then integrates conclusions with the standards into an overall evaluation or set of evaluations" (Scriven, 1991).
This sentence opens Scriven's response to the first question in the Evaluation Exchange interview. He goes on to state that social science research "does not aim for or achieve evaluative conclusions." This backs up Mathison and her summation of research on other evaluators, methods, and models to conclude that evaluation and research are different, especially in the methods needed for data collection and analysis (Mathison, 2007, p. 195).
In her conclusion, Mathison goes on to state that evaluation-specific methods are different from those of social science research, but she later seems to contradict herself. She states that evaluation and research differ in how evaluators make judgements, in accuracy, criteria, particularization versus generalization, decision-oriented versus conclusion-oriented work, the field's maturity, and focus (or lack thereof). The second paragraph seems to contradict her hypothesis when she says, "As evaluation matures as a discipline with a clearer sense of its unique focus, the question of how evaluation is different from research may wane. However as long as evaluation methodology continues to overlap substantially with that used in the social sciences and as long as evaluators come to the profession from more traditional social science backgrounds, this will remain a fundamental issue for evaluation."
This not only assumes that all evaluators have strong ties to social science backgrounds, but it also contradicts her references to Scriven (Coffman, 2003-2004). In the interview, Scriven states:
"Social science research, by contrast, does NOT aim for or achieve evaluative conclusions. It is restricted to empirical (rather than evaluative) research, and bases its conclusions only on factual results, that is, observed, measured, or calculated data. Social science research does not establish standards or values and then integrate them with factual results to reach evaluative conclusions. In fact, the dominant social science doctrine for many decades prided itself on being value free. So for the moment, social science research excludes evaluation."
Scriven and Mathison agree on the point where Scriven states, "In deference to social science research, it must be stressed again that without using social science methods, little evaluation can be done. One cannot say, however, that evaluation is the application of social science methods to solve social problems. It is much more than that. ... Note, however, that this is changing as social science is being asked to be more involved with serious social problems, interventions, or issues. In order to do so, social science will have to incorporate evaluation or evaluative elements."
Both are riddled with contradictory statements, but they seem to agree somewhat on social science methodology. I think this can be attributed to her questions regarding the values of government agencies, but maybe I'm just reading this wrong. This is also why I tried to introduce the above definitions of the methodology she briefly referenced to back up her statements. Crabbé (2008) references a series of methods for experimental design and evaluation and describes a few ideas that are implied to be antiquated, such as McCall (1923). Crabbé, for example, states that experiments are "primarily conducted in laboratory research. Outside a laboratory setting, their field of application is rather limited." I disagree with this statement, based on my own experience, but overall I like her definitions of experiments, research, and the process approach. I should also point out that she is mostly referencing policy research and experimentation methodology, whereas Scriven and Mathison cross-reference social science (education), the policy-oriented nature of evaluation, psychology, metaevaluation, government's role, and health-related programs.
How to Synthesize Facts and Values
Scriven states that the ability to synthesize is the key cognitive skill needed for evaluation: "Synthesis includes everything from making sure that judgments are balanced to reconciling multiple evaluations (which may be contradictory) of the same program, policy, or product" (Scriven, 1991).
[Images: quotes from the Evaluation Thesaurus (Scriven, 1991) and the Google dictionary definition of "synthesize".]
References
- Mathison, S. (2007). What is the difference between evaluation and research? And why do we care? In N. L. Smith & P. Brandon (Eds.), Fundamental issues in evaluation. New York: Guilford Press.
- Shadish, W. R., Cook, T. D., & Campbell, D. T. (2002). Experimental and quasi-experimental designs for generalized causal inference. New York: Houghton Mifflin.
- Crabbé, A., & Leroy, P. (2008). The handbook of environmental policy evaluation. Earthscan. https://books.google.com/books?id=rHUN_LSf-fYC&pg=PA66&dq=quasi-experimental+evaluation&hl=en&sa=X&ved=0ahUKEwjck7yR3NfKAhWKPB4KHeRNDxgQ6AEILTAE#v=onepage&q&f=false
- Reis, H. T., & Judd, C. M. (Eds.). Handbook of research methods in social and personality psychology (p. 49). https://books.google.com/books?id=72fLAgAAQBAJ&pg=PA49&dq=Discontinuity+Designs&hl=en&sa=X&ved=0ahUKEwjs_Ob739fKAhXKXB4KHTXaADEQ6AEIJjAC#v=onepage&q=Discontinuity%20Designs&f=false
- Imbens, G. W., & Lemieux, T. (2007). Regression discontinuity designs: A guide to practice.
- Coffman, J. (2003-2004, Winter). Michael Scriven on the differences between evaluation and social science research [Ask the Expert]. The Evaluation Exchange, 9(4), issue topic: Reflecting on the Past and Future of Evaluation. http://www.hfrp.org/evaluation/the-evaluation-exchange/issue-archive/reflecting-on-the-past-and-future-of-evaluation/michael-scriven-on-the-differences-between-evaluation-and-social-science-research
- Scriven, M. (1991). Evaluation thesaurus (4th ed.). Newbury Park, CA: Sage. https://us.sagepub.com/en-us/nam/evaluation-thesaurus/book3562 or https://books.google.com/books?id=koL0Fs_ZSvQC&lpg=PP1&pg=PA109#v=onepage&q&f=false
- Google. (2016). Define synthesize. https://www.google.com/search?q=define+synthesize&oq=define+synthesize&aqs=chrome..69i57j0l5.1560j0j4&sourceid=chrome&es_sm=122&ie=UTF-8