Why do we care about the difference between evaluation and research? How does it influence our professional development or practical work as evaluators?
Hi Desarae, Thank you for sharing so many great resources with us!
In your response, you talked about how Mathison and Scriven frame the difference between evaluation and research in contrasting ways (e.g., methodology, particularization versus generalization, and so on). I believe they are not the only two evaluators/educators who have discussed the difference between evaluation and research.
So my question is: Why do we care about the difference between evaluation and research? How does it influence our professional development or practical work as evaluators?
- Suhan Yao
For the question "Why do we care about the difference between evaluation and research?" I think that Michael Quinn Patton (1998) has a very interesting answer: "The purpose of making such distinctions, then, must guide the distinctions made. In my practice, most clients prefer and value the distinction. They want to be sure they're involved in evaluation, not research. The distinction is meaningful and helpful to them, and making the distinction helps engender a commitment from them to be actively involved--and deepens the expectation for use."
I think what Patton is saying is that we care because the stakeholders care. Why do stakeholders care? Probably because they are investing something (like time and money), so they want to be part of a solution, not an open-ended knowledge hunt with no goal of improving the program being evaluated.
This influences the work of evaluators because it can put pressure on them to find a problem that needs fixing when there might not be one. If there isn't one, the stakeholders will probably question why the evaluation was conducted in the first place and might even feel that their time and money were wasted.
-Kayla Brown
Kayla,
I very much agree with your summation and appreciate the Patton example. My own experiences seem to parallel Patton's point and your description of stakeholder motives. Projects may be driven by politics, ego, or genuine needs; regardless of the motive, it is an evaluator's goal to enlighten stakeholders on the value and potential end-goals of evaluation. Unfortunately, stakeholders have sometimes been misguided or have worked with under-trained evaluators. When that happens, negative connotations, misapprehensions, or unrealistic expectations can attach to both evaluation and research processes.
I would also like to mention that sometimes, even in large corporate structures, research may be valuable and necessary alongside evaluation. While evaluation and research often differ in their criteria, metrics, and end-goals, sometimes they overlap. Many organizations, including nonprofits, government agencies, contracting firms, advertising agencies, the military, business corporations, small businesses, and startups, see the value in expanding their internal knowledge base. This work can happen while business resources are already in use, and post-briefs can be written to expand a team's knowledge. These briefs may include pattern libraries, style guides, training tutorials, facilitator groups, SharePoint resource sites, wikis, online training meetings, lessons learned, documentation of newly learned code, or documented resources used in the project (just to name a few).
A lot of bigger businesses even have internal policies that promote or require research alongside everyday business. Google is an excellent example of a research-and-development culture. Google has a policy of building innovative products and releasing unfinished or lightly evaluated products (in beta form) in an effort to release early, find bugs fast, and fix on the go. In some cases this has failed miserably, and in others it has been wildly successful (Picasa, Google Wave, Google Glass, Google Search, and Google Earth all came out of this structure). Google as a company, though, is only now really starting to appreciate the value of UI/UX designers; it initially focused on A/B testing (choosing by the numbers) rather than a holistic approach to design, which has only recently trickled into its culture. My own experience very much reflects this, and there are a number of TED Talks and Huffington Post articles that describe this model in lengthy detail if anyone is interested, but I'm afraid I've veered off topic and would like to jump back to research vs. evaluation.
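Since A/B testing came up: here is a minimal sketch, in Python, of the two-proportion comparison that underlies that kind of testing. The conversion counts are hypothetical, and real experimentation platforms layer much more on top (sequential tests, guardrail metrics, and so on); this only illustrates the basic "choose by the numbers" computation.

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: is variant B's conversion rate
    significantly different from variant A's?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical counts: 120/2400 conversions on A, 156/2400 on B
z, p = two_proportion_z(120, 2400, 156, 2400)
print(f"z = {z:.2f}, p = {p:.4f}")  # a small p suggests a real difference
```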
In your final paragraph you state, "This influences the work of evaluators because it can put pressure on them to find a problem that needs fixing when there might not be one. If there isn't one, the stakeholders will probably question why the evaluation was conducted in the first place and might even feel that their time and money were wasted." I think evaluators who do not pre-plan and meet regularly with *primary* stakeholders may find themselves in a bind when asked to defend their criteria. However, if you use proven methodology and follow UX/project-management best practices, it is significantly less likely you will run into these issues. Please don't take these statements as facts; I'm only writing them to expand the conversation and to broaden this discussion about stakeholders and evaluation.
The process is key. If an evaluator has documentation like visionary documents, contractual agreements, and business requirements, and knows very specifically what is and is not in scope of the evaluation, then when the evaluation ends and a stakeholder is unhappy that the results did not encompass a broader range of issues, those issues were simply out of scope. If the contracts, meeting notes, and requirements all back this up, the client has to agree that those items are out of scope, but they may be an opportunity to expand the work, give the evaluator more money, and continue evaluating those broader topics. It's important to note the words "if" and "opportunity" in the sentences above, because everything depends on how well the evaluator plans ahead. It also depends on how well the evaluator documents important meetings, follows up conversations with emails that provide a written record of those meetings, and gets pre-approved, signed contracts and business requirements. The word "opportunity" matters because out-of-scope items are not the end of the world; they are certainly not a bad thing, and they definitely should not be addressed within a budget where they were not agreed upon (if you do, you will probably run out of money before completing your original goal).
What I'm trying to get at is this: if you have a goal, preset the questions you are trying to answer, know your scope, and have written criteria, it is pretty unlikely you will end an evaluation without knowing what problems you needed to look into. You may find there is no problem with your current program, but there may be expansion opportunities, wasted budget, or justification for continuing the program; the evaluation will not end without you knowing the criteria, goals, and questions you were trying to address. Rarely, in IT, do I start an evaluation without an end-plan in mind, and managing metrics is fairly easy because computers are excellent at tracking all my data. A toy example of what such a plan can look like follows.
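Here is a short Python sketch of preset questions, scope, and written criteria for an IT evaluation. Every question, metric, and target in it is hypothetical; the point is only that when the criteria are pinned down up front, the end of the evaluation is a mechanical comparison rather than a surprise.

```python
# Hypothetical evaluation plan: question, scope, and pass/fail criteria
evaluation_plan = {
    "question": "Does the new intranet reduce time-to-find for documents?",
    "in_scope": ["search flows", "navigation menus"],
    "out_of_scope": ["content rewrites", "server performance"],
    # criterion name -> (target, direction)
    "criteria": {
        "median_search_time_s": (30, "at_most"),
        "task_success_rate": (0.85, "at_least"),
    },
}

# Hypothetical observed metrics pulled from analytics
observed = {"median_search_time_s": 42, "task_success_rate": 0.79}

for name, (target, direction) in evaluation_plan["criteria"].items():
    value = observed[name]
    met = value <= target if direction == "at_most" else value >= target
    print(f"{name}: observed {value}, target {direction} {target} -> "
          f"{'met' if met else 'NOT met'}")
```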
I do agree that stakeholders would feel their time and money were wasted if you ended an evaluation with nothing to show for it. Every evaluation should end with some kind of defensible result. Even evaluations of tools and surveys can end with better-written surveys, a written brief, or an analysis of behavior flows; those criteria were preset for you. If I'm evaluating a website for accessibility, the result is that I know whether or not the site is accessible. That statement over-simplifies the deeper results and final deliverable(s), though. My final deliverable(s) may include a business-needs document and an executive-summary presentation on why the site needs to be revised to be accessible. Note that my deliverable does not initially include the work of making the site accessible (though it will likely lead to that eventually, and that work may or may not be part of the evaluation's scope). Sometimes my evaluations include a budget to do work WHILE I evaluate a project, the same way the scope may include research while I evaluate.
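To make the accessibility example concrete, here is a minimal Python sketch (using BeautifulSoup) that flags one narrow issue: images without alt text. A real accessibility evaluation against WCAG covers far more, so treat this as one automated pass inside a larger audit, not the audit itself.

```python
# One narrow automated check: <img> tags with missing or empty alt text.
from bs4 import BeautifulSoup  # pip install beautifulsoup4

html = """
<html><body>
  <img src="logo.png" alt="Company logo">
  <img src="chart.png">
  <img src="spacer.gif" alt="">
</body></html>
"""

soup = BeautifulSoup(html, "html.parser")
for img in soup.find_all("img"):
    alt = img.get("alt")
    if alt is None:
        print(f"MISSING alt attribute: {img.get('src')}")
    elif not alt.strip():
        # Empty alt is acceptable for decorative images, but worth a review
        print(f"EMPTY alt (decorative?): {img.get('src')}")
```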
These are just personal examples, and I would love to read other classmates' experiences, especially those that differ from my own.
-Desarae Veit