
Why 10 Years Of Experience Might Teach You Less Than 90 Days Of This

TLDR Skills-based career growth powered by AI replaces title-based roles with outcome-driven practice. Five repeatable skills—Judgment, Orchestration, Coordination, Taste, and Updating—are built through rubric-driven evaluation of concrete artifacts, starting with one key artifact such as a product decision document and expanding across teams and even into hiring. AI serves as a consistent scorer that speeds up feedback, while humans define the criteria and guide progress, so the heavy thinking stays with people. The approach targets fuzzy outcomes, delayed or noisy feedback, and low repetition, using repeatable drills and team-wide rubrics to scale improvement over time.

Key Insights

Shift to a skills-based career framework rather than titles

The main shift is moving from a jobs-based format to a skills-based framework for roles and career growth, where progress is measured by outcomes rather than titles. In practice, map roles to core repeatable skills—such as Judgment, Orchestration, Coordination, Taste, and Updating—and let AI help measure outcomes. The approach is inspired by the observation that athletes train while knowledge workers don't, and it lets skills exist independently of titles. The first practical step is to identify a critical artifact that represents decision-making or work quality and use it as the anchor for skills development. Over time, outcomes rather than job labels become the primary measure that guides advancement.

Anchor learning on concrete artifacts with a rubric

Practically, replace vague feedback with a concrete, rubric-driven process applied to real artifacts. Artifacts include decision documents, architecture docs, customer success (CSM) call summaries, and pipeline plans. Create a 1–5 rubric with criteria such as clarity, at least two real options, explicit stakes and metrics, a clear recommendation, and surfaced risks and trade-offs. Score artifacts against the rubric, annotate the rationale, and use this scoring as the basis for feedback. This artifact-based rubric makes skill development scalable and trackable across teams.
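
To make this concrete, here is a minimal sketch of such a rubric as plain Python data. The criterion names mirror the list above; the descriptions, the 1–5 anchors, and the score_sheet helper are illustrative assumptions rather than anything prescribed in the talk.

```python
# A minimal sketch of a decision-document rubric as plain data.
# Criterion descriptions are illustrative assumptions.
RUBRIC = {
    "scale": "1 (weak) to 5 (strong)",
    "criteria": {
        "clarity": "The decision is stated in one clear sentence.",
        "real_options": "At least two genuinely viable options are compared.",
        "stakes_and_metrics": "What is at stake and how success is measured are explicit.",
        "recommendation": "A single clear recommendation is made and justified.",
        "risks_and_tradeoffs": "Key risks and trade-offs are surfaced, not buried.",
    },
}

def score_sheet(artifact_name: str) -> dict:
    """Return an empty scoring sheet for one artifact, ready to annotate."""
    return {
        "artifact": artifact_name,
        "scores": {criterion: None for criterion in RUBRIC["criteria"]},
        "rationale": {criterion: "" for criterion in RUBRIC["criteria"]},
    }
```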

Build a repeatable, AI-assisted evaluation cycle

Collect 3–5 relevant artifacts and create a 1–5 rubric with criteria like clarity and risk, then annotate and score them. Provide the rubric and annotated examples to an LLM to score new documents, quoting the passages it is reacting to and explaining each score. Ask the AI to suggest edits that would raise a given dimension, and publish the rationale for future reference. This creates a repeatable pattern for evaluating work and tracking progress over a quarter. Over time, you build a scalable skill-building loop that reduces reliance on sporadic praise.
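
One way this cycle could look in practice is sketched below, using the OpenAI Python SDK as an example client; any chat-completion API would work. The model name, prompt wording, and the critique_document function name are assumptions for illustration, not details from the talk.

```python
# A rough sketch of the AI-assisted evaluation cycle described above.
# Assumes the OpenAI Python SDK; any chat-completion API would do.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def critique_document(rubric: str, annotated_examples: str, new_doc: str) -> str:
    """Score a new document against the rubric, quoting the passages the
    model is reacting to and suggesting edits that would raise each score."""
    prompt = (
        "You are scoring a decision document against this 1-5 rubric:\n"
        f"{rubric}\n\n"
        "Here are previously scored examples with human rationale:\n"
        f"{annotated_examples}\n\n"
        "Score the document below on each criterion. For every score, quote "
        "the passage you are reacting to, explain the score, and suggest one "
        "edit that would raise that dimension by one point.\n\n"
        f"{new_doc}"
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: use whatever model your team has access to
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content
```

Publishing the returned rationale alongside the document gives the team a record of why each score was assigned, which is what makes the loop repeatable rather than a one-off critique.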

Run weekly drills to practice core skills

Design weekly drills focused on Judgment, Orchestration, Coordination, Taste, and Updating. For example, write a one-page decision document for a messy situation and compare it to a model-generated stronger version to identify gaps. Define specifications with goals, inputs, outputs, and constraints to train orchestration skills. Use executive updates to practice updating rationales and communicating evolving plans. Repeat weekly to gradually improve judgment and related capabilities with AI feedback.
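
A lightweight way to keep this cadence could look like the sketch below, which simply rotates one drill per week across the five skills. The scheduling logic is an assumption, and the Coordination and Taste drill descriptions in particular are illustrative guesses rather than drills named in the talk.

```python
# A small sketch of a weekly drill rotation over the five skills.
# Drill wording is partly paraphrased from the section above, partly assumed.
from datetime import date

DRILLS = {
    "Judgment": "Write a one-page decision document for a messy, real situation.",
    "Orchestration": "Write a spec with goals, inputs, outputs, and constraints for a handoff.",
    "Coordination": "Rewrite a recent email or meeting note so the next step is unambiguous.",
    "Taste": "Pick one UX or design choice and argue for it against two alternatives.",
    "Updating": "Draft an executive update explaining what changed in the plan and why.",
}

def this_weeks_drill(today: date | None = None) -> tuple[str, str]:
    """Rotate through the five skills, one per ISO week."""
    week = (today or date.today()).isocalendar().week
    skill = list(DRILLS)[week % len(DRILLS)]
    return skill, DRILLS[skill]

skill, task = this_weeks_drill()
print(f"This week, practice {skill}: {task}")
```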

Scale to teams and integrate into hiring

At the team level, define the rubric collaboratively and wire up an LLM to critique documents before human review. Run short, 10-minute weekly practice sessions on AI-flagged growth areas to strengthen both team and individual capabilities. Treat rubric scores as directional signals rather than precise measures or promotion criteria. Use the same rubric in hiring by assigning realistic take-home tasks and live sessions to assess how candidates think under pressure. Align hiring with ongoing development so growth is embedded in the workflow.
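
One possible wiring for the "AI critiques before human review" step is sketched below: every new document in a shared folder gets a rubric-based critique file written next to it before a reviewer looks at it. The file layout and the critique_fn callable (for example, the critique_document helper sketched earlier) are assumptions.

```python
# A sketch of running rubric critiques over a folder of docs before human review.
# Folder layout and the critique function are assumptions for illustration.
from pathlib import Path
from typing import Callable

def pre_review(doc_dir: str, critique_fn: Callable[[str], str]) -> None:
    """Write an AI critique next to each Markdown doc that lacks one,
    so the human reviewer reads the rubric-based notes before their own pass."""
    for doc in sorted(Path(doc_dir).glob("*.md")):
        critique_path = doc.with_suffix(".critique.md")
        if critique_path.exists():
            continue  # already critiqued; do not overwrite earlier notes
        critique_path.write_text(critique_fn(doc.read_text()))
        print(f"Wrote {critique_path.name}; bring flagged growth areas to the 10-minute session.")
```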

Foster transparent AI use and manage expectations

Recognize that much AI usage is shadow AI, and openly acknowledge it so teams can improve rather than hide it. Focus on outcomes and skill development, not surveillance or punishment. Use live, constrained conversations to reveal whether a candidate or teammate maintains a healthy relationship with AI while still delivering results. Rubric scores will be noisy and should not be treated as precise promotion criteria. Start small: pick one habit, name and measure a skill, and use AI to train it so knowledge work becomes a repeatable practice. AI coaching can make growth more accessible for individuals and teams.

Questions & Answers

What is the main argument of the talk?

Shift from a jobs-based format to a skills-based format for roles and career growth, enabled by AI and measured by outcomes rather than titles.

What are the five repeatable skills for the AI era?

Judgment, Orchestration, Coordination, Taste, and Updating.

What are the three structural barriers that hinder practice?

Fuzzy outcomes, delayed or noisy feedback, and low repetition of meaningful work.

What artifacts demonstrate these skills?

Judgment appears in decision documents; Orchestration in handoffs and project plans; Coordination in emails and meeting notes; Taste in UX and design choices; Updating in evolving plans and rationales.

What is the practical first step to start applying this approach?

Pick one important artifact (e.g., a product manager decision document) and have trusted colleagues specify concrete criteria to judge its quality (e.g., decision stated, two real options, explicit stakes and metrics, clear recommendation, risks/trade-offs surfaced).

How should you use a rubric and AI to build a scalable skill-building practice?

Collect 3–5 real examples, annotate and score them with a rubric (1–5) across criteria like clarity and risk; then give the rubric and annotated examples to an LLM to score new documents, explain scores, and suggest edits; repeat to create a pattern for evaluating work and tracking progress.

What is the practical way to run practice drills and team adoption?

Weekly tasks to write a one-page decision document for messy situations, compare to a model-generated stronger version, identify gaps; develop drills for orchestration and executive updates; define rubric together at team level and use a team LLM to critique docs before human review; hold 10-minute weekly practice sessions on AI-flagged growth areas.

How is this approach applied to hiring and what are the intended outcomes?

Use the same rubric for hiring: give candidates a take-home task to produce or repair a decision document, conduct a live session to adjust constraints, critique an AI-generated doc to assess performance under pressure; aim to align hiring with ongoing development rather than solely evaluating static skills.

Summary of Timestamps

Main argument: shift from a jobs-based format to a skills-based format for roles and career growth, enabled by AI and measured by outcomes rather than titles. A 2019 blog post by Tyler Cowen inspired this closer look at practicing skills. Knowledge work should focus on developing skills rather than labels.
The speaker critiques hiring tools and job postings that assume specific skills belong to a role. The world will increasingly let skills exist independently of titles, enabling flexible paths.
In knowledge work, practice should be narrow with repeatable feedback loops, like a pianist practicing scales, so AI can help strengthen skills.
Five repeatable skills emerge for the AI era: Judgment, Orchestration, Coordination, Taste, and Updating. These show up in artifacts such as decision documents, handoffs and project plans, emails and meeting notes, UX and design choices, and evolving plans and rationales.
AI is a tool that applies a consistent rubric and provides scalable feedback, not a magical brain.
Practical first step: pick one important artifact, such as a product manager decision document, and ask trusted colleagues to specify concrete criteria to judge its quality. Example criteria include a one-sentence decision, at least two real options, explicit stakes and metrics, a clear recommendation, and surfaced risks and trade-offs.
The idea is to apply this rubric-driven approach to multiple artifacts to build a repeatable skill-building practice in the age of AI.
The core idea is a rubric-driven process that scales skill development for thinking and writing across artifacts, with humans critiquing first and AI as a consistent scorer. Start by collecting artifacts such as architecture docs and pipeline summaries, create a clear 1–5 rubric with criteria like clarity and risk, gather 3–5 real examples, annotate them, score them, and record the rationale. Then give the rubric and annotated examples to a large language model to score new documents, quote the passages it is reacting to, explain each score, and suggest edits that would raise a dimension. This yields a repeatable pattern for evaluating work and tracking progress over a quarter.
The practice loops are drills: weekly tasks to write a one-page decision document for a messy situation, compare it to a model-generated stronger version, identify gaps, and practice weekly to improve judgment. Create similar drills for orchestration and executive updates.
At the team level, define the rubric together, wire up a team LLM to critique docs before human review, and run 10-minute weekly practice sessions on AI-flagged growth areas to strengthen both team and individual capabilities without tying them to performance ratings.
The same rubric can be used in hiring: give candidates a realistic take-home task to produce or repair a decision document, conduct a live session to adjust constraints, and critique an AI-generated doc to assess how they think under pressure, ensuring hiring aligns with ongoing development.
The overarching point, echoed by Tyler, is to provide a shared concrete lens for what good looks like so AI can augment decision making and scale improvement rather than replace human judgment.
The core idea is to work as a team and openly use AI, recognizing that roughly two thirds of AI usage is shadow AI and should be acknowledged rather than hidden so teams can improve.
The goal is outcomes and skill development, not surveillance or punishment, and live conversations with constraints reveal whether a candidate or teammate maintains a healthy relationship with AI while still producing results.
The practice loops push people to clarify decisions, surface risk, and articulate trade-offs, so AI can speed progress but the heavy thinking remains essential.
Rubric scores will be noisy and should not be treated as precise measures or bases for promotions; the aim is to get better and become useful, not to surveil every document. Start small by picking one habit, naming and measuring a skill, and using AI to train so knowledge work becomes a repeatable practice.
This approach helps individuals and teams grow as they become better at articulating choices and handling constraints, even when AI is involved, linking back to the Tyler question about coaching costs.
