The Problem With "Expertise" (And How to Break It Apart)
Last week I introduced the DK/PK/EJ framework — the three components that explain why my daughters and I drew three different fish from the same tutorial. We all had the same Domain Knowledge (we knew what a fish looked like) and the same Process Knowledge (the tutorial steps). The difference was Evaluative Judgment: my older daughter's eye for depth, my younger daughter's choice to prioritize playfulness, and my own mediocre calibration of when the drawing was actually finished.
Naming the components is one thing. Trying to decompose your own expertise against them is another.
The Problem With "I Just Know"
Ask an expert how they do what they do, and I suspect you'll hear some variation of "I just know" — or as my kids would say, "I can just cook." Ask for details and you might hear crickets.
Real expertise fuses domain knowledge, process knowledge, and evaluative judgment into something that feels like intuition, or taste, or just solid execution. The job gets done well because 10,000+ hours of practice are encoded in your brain, with no formal way to express what's actually happening.
That works when you're doing the work yourself. You don't need to explain it. You don't need to decompose it. You just do it. And if you're a manager, or a manager of managers, you hire for it and trust you made a good hiring decision.
That fusion breaks the moment you try to hand it to someone who doesn't have your 10,000 hours. And so, the work is just "off" — and it's hard to explain, truly explain, why.
What Happened When I Tried It On Myself
Two weeks ago I wrote about my content workflow. The system gets me 60% of the way to a publishable draft. The last 40% is voice, rhythm, the choice of one word over another. I called them "intangibles I haven't been able to extract into a markdown file yet."
Now I have language for what that 40% actually is.
The 60% I solved? That was DK, PK, and even some EJ-1. I learned the craft: opening types, title pairings, what makes a hook land. I documented my process: brief, outline, draft, print, edit, publish. I even wrote down my criteria: specific enough to disagree with, one idea fully developed, personal voice not generic. All of that can go in a file. All of that I can hand to Claude.
The 40% I couldn't solve? That's EJ-3: calibration. "How do I know when I'm done?" I don't. I edit past the point of improvement. My confidence doesn't match my accuracy. I wrote that I track edit intensity on a 0-3 scale, but I never asked why I'm still rewriting so much.
The framework gave me the diagnosis. The 40% isn't "intangibles." It's a specific gap in my ability to calibrate when something is good enough to ship.
What Decomposing Expertise Enables
Once I had the diagnosis, I could do something about it.
I stopped asking Claude to revise the draft with prompts I thought would improve quality. "Better" is a calibration question I can't even answer myself. Instead, I gave it my criteria and asked it to evaluate against those. The model isn't calibrating. It's applying criteria I specified. I delegated what I could name. I kept what I couldn't.
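The shape of that delegation can be sketched in code. A minimal illustration, not my actual files or prompts: the criteria are the ones listed earlier (specific enough to disagree with, one idea fully developed, personal voice), and the function names are made up for this example. The key move is that the prompt restricts the model to the stated criteria and explicitly forbids it from inventing its own.

```python
# Illustrative sketch: delegate criteria application (EJ-1/EJ-2),
# keep the ship/no-ship call (EJ-3) for yourself.
# Criteria come from the post; names and wording are my own.

CRITERIA = [
    "Specific enough to disagree with",
    "One idea fully developed",
    "Personal voice, not generic",
]

def build_evaluation_prompt(draft: str, criteria: list[str]) -> str:
    """Ask the model to apply stated criteria, not to judge overall quality."""
    criteria_block = "\n".join(f"- {c}" for c in criteria)
    return (
        "Evaluate the draft below against ONLY these criteria. "
        "For each one, answer pass/fail with a one-sentence reason. "
        "Do not suggest improvements or invent criteria of your own.\n\n"
        f"Criteria:\n{criteria_block}\n\n"
        f"Draft:\n{draft}"
    )

# Usage: send the result to your model of choice; the final
# "is this done?" decision stays with the human.
prompt = build_evaluation_prompt("My draft text goes here.", CRITERIA)
```

The design choice worth noting: the prompt never asks "is this good?" — that would smuggle the calibration question back to the model.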
I also have a target now. Before decomposition, "getting better at writing" was vague. After decomposition, I can see the specific gap: EJ-3. I can tell when the title doesn't match the opening, or when the opening doesn't flow with the rest of the article. What I can't tell is when it finally does. I keep editing because I don't trust my own signal that says "this is done."
And when I review AI output, I know where to focus. DK and PK? Probably fine, spot-check. EJ-1 and EJ-2? Check carefully, make sure it's applying my criteria and not inventing its own. EJ-3? That's still mine. I decide when it ships.
The decomposition tells me where to trust the AI and where to stay in the loop.
The Meta-Skill for the AI Era
Here's what I've come to believe: the skill that matters most in the AI era isn't prompting. It isn't coding. It's the ability to make tacit knowledge explicit.
If you can decompose your expertise into components you can name, you can delegate the parts that AI handles well. You can improve the specific gaps you couldn't see before. You can bring someone new up to speed in weeks instead of years because they're not starting from "just get a feel for it."
If you can't decompose, you stay stuck. You can't specify what you want. You don't know what's broken. Your expertise stays locked in your head, which makes you a bottleneck at best and replaceable at worst.
The framework isn't the insight. The act of decomposing is.
Try It Yourself
Pick one thing you're good at. Something you do well but have never written down.
Then answer these questions:
Domain Knowledge: What do I actually know?
- What concepts, terminology, or mental models do I use?
- What would someone need to learn before they could do this?
Process Knowledge: What do I actually do?
- What are the steps, in order?
- What intermediate artifacts do I create?
- Where are the decision points?
EJ-1 (Criteria): What am I optimizing for?
- What makes this "good" vs. "just okay"?
- What are my non-negotiables?
- What tradeoffs do I accept?
EJ-2 (Discrimination): How do I tell better from worse?
- If I saw two versions, how would I pick?
- What signals do I look for?
- Can I give examples of good and bad?
EJ-3 (Calibration): How do I know when I'm done?
- What tells me it's ready to ship?
- Do I tend to over-edit or under-edit?
- Is my confidence usually accurate?
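If you want to work through the checklist above more than once — for different skills, or to revisit your answers later — it can be captured as a reusable worksheet. A small sketch; the questions are the ones listed above, while the structure and names are my own framing:

```python
# The decomposition checklist as data: one entry per component,
# each holding its prompt questions. Key names are my framing.

WORKSHEET = {
    "Domain Knowledge": [
        "What concepts, terminology, or mental models do I use?",
        "What would someone need to learn before they could do this?",
    ],
    "Process Knowledge": [
        "What are the steps, in order?",
        "What intermediate artifacts do I create?",
        "Where are the decision points?",
    ],
    "EJ-1 (Criteria)": [
        "What makes this 'good' vs. 'just okay'?",
        "What are my non-negotiables?",
        "What tradeoffs do I accept?",
    ],
    "EJ-2 (Discrimination)": [
        "If I saw two versions, how would I pick?",
        "What signals do I look for?",
        "Can I give examples of good and bad?",
    ],
    "EJ-3 (Calibration)": [
        "What tells me it's ready to ship?",
        "Do I tend to over-edit or under-edit?",
        "Is my confidence usually accurate?",
    ],
}

def blank_worksheet(template: dict) -> dict:
    """Return an empty answer sheet keyed by component and question."""
    return {component: {q: "" for q in questions}
            for component, questions in template.items()}
```

A blank copy per skill makes the vague spots visible: the sections you can't fill in are exactly the tacit parts.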
If you can answer these clearly, you've decomposed your expertise. You now have something you can delegate, improve, and teach.
If you can't answer them — if everything comes out vague — that's the work. The discomfort of not being able to articulate it is the starting point.
Next week: If DK/PK/EJ describes the components of expertise, what happens when we map people to their profiles? Eight archetypes emerge — from the Architect to the Apprentice — and they might predict who thrives, who transforms, and who gets displaced in the AI era.
© 2026 Marigold Labs, Inc. All rights reserved.