Addy Osmani's open-source project Agent Skills, now at 26,000 stars on GitHub, provides a framework for making AI coding agents follow the same disciplined workflows that senior engineers use. The core insight is that current AI coding agents default to the shortest path to "done" — they write code, declare victory, and move on. They skip specs, tests, reviews, and scope discipline. Agent Skills encodes those missing steps as reusable, composable workflows.
What a skill actually is
A skill is a markdown file with frontmatter that gets injected into the agent's context when needed. It is not reference documentation or an essay on best practices. It is a workflow: a sequence of steps with checkpoints that produce evidence, ending in a defined exit criterion. The distinction is critical. A 2,000-word essay on testing best practices gets read and ignored. A workflow that says "write the failing test first, run it, watch it fail, write the minimum code to pass, watch it pass, refactor" gives the agent something to do and the developer something to verify.
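To make the format concrete, here is a minimal sketch of what such a skill file might look like. The frontmatter field names and section headings are illustrative assumptions, not the repo's actual schema; the workflow steps are the TDD loop quoted above.

```markdown
---
# Hypothetical frontmatter; the actual Agent Skills field names may differ
name: tdd-loop
description: Enforce red-green-refactor when implementing a feature
---

## Workflow
1. Write a failing test for the new behavior.
2. Run the suite; confirm the new test fails (evidence: test output).
3. Write the minimum code that makes it pass.
4. Run the suite; confirm it passes (evidence: test output).
5. Refactor, then re-run the suite.

## Exit criterion
All tests pass and the diff contains both test and implementation changes.
```

The point of the checkpoints is that each step leaves evidence (a test run, a diff) the developer can verify, rather than a claim the agent can assert.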
The six lifecycle phases
The repo organizes twenty skills around six lifecycle phases, with seven slash commands on top:
- Define (/spec): Decide what you're actually building.
- Plan (/plan): Break the work down.
- Build (/build): Implement in vertical slices.
- Verify (/test): Prove it works.
- Review (/review): Catch what slipped through.
- Ship (/ship): Get it to users safely.

/code-simplify sits across the bottom.
This maps onto the SDLC that functioning engineering organizations already run: Google's design doc, review, implementation, readability review, and launch checklist, or Amazon's working-backwards memo and bar raiser. None of the phases are new; what's new is the audience, because most AI coding agents skip these phases by default.
Five load-bearing design decisions
Process over prose: Workflows are agent-actionable; essays are not. The same applies to human teams.
Anti-rationalization tables: Each skill includes a table of common excuses an agent might use to skip the workflow, paired with a written rebuttal. Examples: "This task is too simple to need a spec" → "Acceptance criteria still apply. Five lines is fine. Zero lines is not." "I'll write tests later" → "Later is the load-bearing word. There is no later." LLMs are excellent at rationalization; these tables are pre-written rebuttals.
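Laid out in a skill file, such a table might look like the sketch below, built from the two excuse/rebuttal pairs quoted above; the exact column names and formatting in the repo may differ.

```markdown
| Rationalization                          | Rebuttal                                                                |
|------------------------------------------|-------------------------------------------------------------------------|
| "This task is too simple to need a spec" | Acceptance criteria still apply. Five lines is fine. Zero lines is not. |
| "I'll write tests later"                 | Later is the load-bearing word. There is no later.                      |
```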
Verification is non-negotiable: