Competency 2.11:
At the start of this course I rated myself a 3. I could pull themes out of interview data, but if someone had asked me to explain exactly why I organized the data the way I did, I would not have had a clean answer. I was making judgment calls I could not fully name.
That changed when I worked through this evaluation using Braun and Clarke's thematic analysis framework. Before this course, "qualitative coding" to me meant reading through data and grouping things that seemed similar. Braun and Clarke made me realize that is only partially true. Their framework has specific phases: familiarizing yourself with the data, generating initial codes, building themes, then going back and checking whether the themes actually hold up against the raw data. Following those steps in sequence meant I kept catching myself making leaps I would have previously let slide. I would construct a theme, go back to the transcripts to check it, and realize two of the quotes I had grouped together were actually saying different things. Without the framework giving me a reason to go back and check, I would have left that error in.
Running JASP at the same time pushed this further. I had quantitative output sitting next to qualitative themes, and I had to figure out what each one was actually contributing and where they overlapped. That forced a level of precision about what each analysis method can and cannot do that I had never needed before when I was only working with one type of data at a time.
The AI-assisted coding piece added a different kind of challenge. When a tool is generating initial codes for you, the question shifts from "what do I see in this data" to "do I actually agree with what this tool is seeing, and can I justify any place where I override it." That turned out to require more active analytic judgment than coding from scratch, not less, because I was responsible for every decision the tool made that I let stand. Working through that is what moved me to a 6. By the end I could explain every step of my analysis process, defend each decision, and describe exactly where my judgment was doing the work.
Competency 2.14:
At a 4, I was already doing more than restating findings. I could look at data and say something meaningful about what it suggested. What I was not doing well was handling situations where the data did not give me a clean answer, where qualitative and quantitative findings were pulling in different directions, and where the evidence was solid enough to be useful but not solid enough to support a strong universal recommendation.
The data analysis in the evaluation work put me directly in that situation. I had quantitative results showing improvement on one measure alongside qualitative themes where participants were expressing real frustration with the same program component the numbers were praising. Previously my response would have been to pick the more compelling narrative and lead with it. What I learned to do instead was write a recommendation that held both things at once. That is a harder sentence to write than "the program is working" or "participants are unhappy," and it is also a more honest and useful one.
The limitations piece was equally concrete. In the evaluation, the data had real constraints: the sample was small, the timeframe was short, and some of the measures were self-reported. Learning to work those limitations into the recommendation itself rather than footnoting them at the end changed how I wrote conclusions entirely. Instead of "the program improved participant outcomes," it became "within the scope of this evaluation, and given the sample size, the program improved participant outcomes." That framing is not weaker. It is more accurate, and it gives whoever is reading the report the information they need to decide how much weight to put on it. Getting comfortable writing that way, consistently and without feeling like I was undermining my own work, is what the jump from 4 to 6 actually looked like for me.
The Not So Good:
Competency 1.8:
I came into this course rating myself a 1 on this competency and I am leaving it the same way, which is itself worth reflecting on. The rating did not move because I am still not sure what this competency is actually asking me to do in practice. I understand the words — evaluation should serve the public good, evaluators should be attentive to equity and justice — but when I try to connect that to concrete decisions I would make inside an evaluation, it gets blurry fast. Which stakeholders count as the public? Whose definition of justice applies when stakeholders disagree? At what point does an evaluator's commitment to social justice start shaping findings in ways that compromise objectivity — and is objectivity even the right goal to begin with? I do not have settled answers to any of those questions, and I think that uncertainty is exactly why the rating stayed at 1.
Competency 1.3:
I moved from a 2 to a 3 on this competency, which is real progress, but the honest version of that progress is that I went from not knowing evaluation approaches existed as a deliberate choice to knowing they exist without yet feeling confident about how to choose between them. In the evaluation work this semester, I selected an approach because it seemed like a reasonable fit for the question we were trying to answer. I did not work through a systematic comparison of alternatives and land on a decision I could fully defend. I picked what seemed to make sense and moved forward.
Right now when I look at an evaluation question I have one or two methods that come to mind. An evaluator at the higher end of this scale would be running through a broader menu and making an explicit case for the choice they land on. I am not there yet, and getting there is going to require more exposure to actual evaluation practice than a single course can provide.
Acknowledgement: Portions of this reflection were developed with the assistance of AI writing tools. All ratings, course experiences, and personal interpretations are my own. AI was used to help articulate and organize my thinking, not to generate the substance of my self-assessment.