Turns Out I've Been Evaluating Things This Whole Time!

A self-assessment that was more revealing than expected

If someone asked me to rate myself as an evaluator right now, on a formal scale from 1 to 6, I would land at a 3. Not falsely modest, not overconfident. Just honest about where I actually am versus where I would like to be. I feel grounded in some foundational skills, especially the ones tied to reflection and growth, but I am still developing the more structured, theoretical side of evaluation: program logic, design frameworks, and the discipline of naming the assumptions that sit underneath everything we do.

The interesting part of doing this self-assessment was realizing I have been doing informal evaluation for years without calling it that. That realization is both encouraging and a little humbling, because "informal" and "rigorous" are not the same thing, and the gap between them is exactly where my growth needs to happen.

Where I feel most confident
Two competencies stood out immediately because they are things I actively practice in my current role as a teacher, not just in theory. Competency 1.6, the ability to identify personal areas of competence and growth, shows up in how I approach the end of every unit. I am almost always replaying what landed and what fell flat, then adjusting before the next time around. After a recent unit where students could recall facts accurately but struggled to apply the concepts to anything new, I reworked my approach to include more hands-on and application-based activities. I did not call it evaluation at the time. But Stevahn, King, Ghere, and Minnema (2005) would recognize it as reflective practice, which they identify as a core evaluator competency: using evidence and self-awareness to improve your work in an ongoing way.

Competency 1.7 connects to how I approach professional growth more broadly. I seek out feedback from colleagues, adjust based on what I see in student outcomes, and try to stay current with instructional strategies even when the familiar ones are tempting to keep. The AEA Evaluator Competencies frame this kind of ongoing development as essential to professional practice, and it is one area where my habits and the formal standards genuinely align. The vocabulary may be new to me, but the behavior is not.

Where I have real work to do
These two growth areas feel different from my strengths because they require a kind of structured, explicit thinking I have not had to practice much. Competency 2.9, using program logic and theory, asks you to map out the full causal chain: if we do this, then that should happen, which should lead to this outcome. I plan lessons, I set goals, and I generally have a sense of how I expect things to unfold. But I rarely build that reasoning out on paper before I start, and I rarely hold myself accountable to it afterward. The IBSTPI Evaluator Competencies make clear that evaluation should be grounded in theory and evidence rather than just experience and instinct, which means this is not a minor gap. Closing it will require a foundational shift in how I approach planning.

Competency 2.5 surprised me the most. I would have expected assumption-identification to come naturally to someone who thinks carefully about data. But there is a meaningful difference between analyzing results and interrogating the beliefs that were already in place before you collected a single data point. When I plan instruction, I sometimes assume students have certain background knowledge, or that a strategy will work because it worked with a different group last year. Those assumptions shape everything downstream, including how I interpret what I am seeing. Stevahn et al. (2005) describe systematic inquiry as a central evaluator competency, and part of that is asking what we already believed before we started. If the starting beliefs are off, the conclusions can be too, even when the data collection itself is careful and thorough.

It is a bit like realizing your measuring tape has been slightly off the whole time. The individual measurements all looked reasonable. The problem was upstream, and invisible until something stopped adding up.

What surprised me most
Honestly, it was how much weight assumptions carry in evaluation. I have always focused on outcomes: what happened, what the data shows, what needs to change. But competency 2.5 made me sit with a different question: what were we already assuming before we looked at any of that? That layer of inquiry feels less visible than the data itself, which is probably why I had underweighted it. Good evaluation is not just about collecting information carefully. It is about questioning the foundation underneath the whole effort, which is a different kind of discipline and one I need to build deliberately.

What I plan to do about it
The growth area I am carrying into Module 2 is competency 2.5, identifying the assumptions that underlie methodologies and program logic. It is the one that caught me most off guard during this self-assessment, and it is also the one I think will quietly affect everything else if I leave it unexamined. In Module 2, my group will be working together to build an evaluation proposal, and that is actually a perfect place to practice this. When we start deciding what to evaluate and how to evaluate it, there will be all kinds of unspoken beliefs baked into those decisions: who we think the program is working for, what we expect the data to show, or why we assume a particular method will give us useful information. My specific action is to name those beliefs out loud before our group moves forward, rather than letting them stay invisible in the background. Evidence that I am improving would look like this: I am the person in the group asking "wait, what are we assuming here?" before we lock in a direction; our proposal includes a section that actually addresses the assumptions behind our approach; and I stop treating the starting conditions as obvious and start treating them as something worth examining. That shift is small in practice, but it changes the quality of everything that follows.
