Agent Skills Work for Humans Too
Agent Skills are the hot topic in AI-assisted development, but their real power is something simpler: documented workflows that anyone can follow.
Coding agents have become genuinely powerful. The ability to hand off complex, multi-step tasks to an AI assistant that can explore your codebase, make decisions, and execute a plan is a real productivity multiplier. And at the center of this evolution is a concept that's rapidly becoming the hot topic: Agent Skills.
But the more I work with Skills, the more I realize they're not just for AI. They're for us too.
What Are Agent Skills?
Anthropic defines Skills as "folders of instructions, scripts, and resources that Claude loads dynamically to improve performance on specialized tasks." You can read more in the official Skills documentation and the Agent Skills spec.
In practice, a Skill is a markdown file that teaches an agent how to do something specific. It might describe:
- How to set up a new feature branch and run the test suite
- The steps to deploy to staging and verify the deployment
- How to investigate a production incident using your team's preferred debugging flow
- The process for reviewing and merging a pull request
Skills are typically stored as markdown files in a project directory (like .claude/skills/), and they get loaded into the agent's context when invoked.
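Mechanically, there isn't much magic here. As a rough sketch (the `discover_skills` helper and its behavior are illustrative, not any tool's actual API), loading skills from such a directory could look like:

```python
# Illustrative sketch only: how a tool might discover skills stored as
# markdown files under .claude/skills/. The helper name is made up.
from pathlib import Path
import tempfile

def discover_skills(root: Path) -> dict[str, str]:
    """Map each skill name (the file stem) to its markdown contents."""
    return {p.stem: p.read_text() for p in sorted(root.glob("*.md"))}

# Demo with a throwaway directory standing in for a project's .claude/skills/
with tempfile.TemporaryDirectory() as tmp:
    skills_dir = Path(tmp) / ".claude" / "skills"
    skills_dir.mkdir(parents=True)
    (skills_dir / "ios-app-icon-generator.md").write_text(
        "# iOS App Icon Generator\n\nInstructions go here."
    )
    skills = discover_skills(skills_dir)
    print(sorted(skills))  # → ['ios-app-icon-generator']
```

The point is just that a skill is plain text on disk; the "loading" step is reading a file into context.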
What's exciting is that Skills are quickly becoming a standard. They're supported across Claude.ai, Claude Code, the Claude Agent SDK, and the Claude API. Hugging Face has adopted them. Other agent frameworks are following suit. I'm very excited to see where this goes.
A Mental Model for Skills
Think of a Skill as a recipe card. It has:
- A clear goal: What are we trying to accomplish?
- Prerequisites: What do we need before we start?
- Steps: The actual procedure, in order
- Quality checks: How do we know it worked?
This structure works whether the person following the recipe is a human or an agent. The instructions don't care who's reading them. They just need to be clear.
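If it helps to make the recipe-card shape concrete, here it is as a tiny data type. This is just my own labeling of the four parts above, not anything from the Skills spec:

```python
# A sketch of the recipe-card structure as data. Field names are my own
# labels for the four parts, not part of any Skills specification.
from dataclasses import dataclass

@dataclass
class Skill:
    goal: str                 # what are we trying to accomplish?
    prerequisites: list[str]  # what do we need before we start?
    steps: list[str]          # the actual procedure, in order
    quality_checks: list[str] # how do we know it worked?

icon_skill = Skill(
    goal="Produce a production-ready iOS app icon set",
    prerequisites=["A written visual philosophy for the app"],
    steps=[
        "Develop the philosophy",
        "Generate the 1024x1024 master design",
        "Export all required sizes",
    ],
    quality_checks=["Silhouette stays readable at 16x16"],
)
print(icon_skill.goal)
```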
A Real Example
Here's a skill I use called ios-app-icon-generator. It's a two-phase process for creating production-ready iOS app icons:
```markdown
# iOS App Icon Generator

## Phase 1: Visual Philosophy

Before any design work, develop a written philosophy (2-3 paragraphs) addressing:

- The core concept and emotional intent
- Visual metaphor representing the app's purpose
- Color palette psychology
- Silhouette clarity at small sizes

### Design Principles

- Simplicity: One focal element. No more than 2-3 colors. No text (illegible at small sizes).
- Distinctiveness: Stand out among competing apps. Avoid generic symbols.
- Scalability: Readable from 16x16 notifications to 1024x1024 App Store displays.
- No photography.

## Phase 2: Icon Generation

Produce a self-contained HTML artifact with embedded SVG that:

- Renders the 1024x1024 master design
- Applies iOS superellipse rounding (not standard border-radius)
- Displays a preview grid of all 13 required sizes
- Includes download functionality for each size

### Technical Requirements

- Use `viewBox="0 0 1024 1024"`
- Implement iOS squircle mask via clip path or superellipse formula (n≈5)

## Quality Standards

App Store-chart caliber. No dated glossy effects, no disappearing hairline details, no clip-art aesthetics. Maintain optical balance across all sizes.
```
Notice something? That's just documentation. Good documentation. The kind you'd write for any teammate joining the project.
An agent can follow this. But so can you. A designer could use this same skill file as a checklist for their own icon creation process. The instructions don't care who's reading them.
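One detail in the skill worth unpacking is the superellipse requirement. iOS icon corners follow a squircle-like curve, |x/a|ⁿ + |y/b|ⁿ = 1 with n ≈ 5, rather than a circular border-radius. Here's a rough sketch of tracing that outline (the exponent is an approximation, since Apple doesn't publish the exact curve, and the path-building is my own illustration):

```python
# Sketch of the iOS "squircle" outline the skill refers to: a superellipse
# |x/a|^n + |y/b|^n = 1 with n ≈ 5. The exponent is an approximation;
# Apple's exact corner curve is not published.
import math

def superellipse_points(size: float = 1024.0, n: float = 5.0, steps: int = 256):
    """Trace the superellipse centered in a size×size box as (x, y) points."""
    a = size / 2.0
    pts = []
    for i in range(steps):
        t = 2.0 * math.pi * i / steps
        c, s = math.cos(t), math.sin(t)
        # Parametric form: x = a * sign(cos t) * |cos t|^(2/n), same for y
        x = a + math.copysign(abs(c) ** (2.0 / n), c) * a
        y = a + math.copysign(abs(s) ** (2.0 / n), s) * a
        pts.append((x, y))
    return pts

pts = superellipse_points()
path = "M" + " L".join(f"{x:.1f},{y:.1f}" for x, y in pts) + " Z"
# `path` could serve as the `d` attribute of an SVG clip path.
```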
How Skills Get Invoked
In tools like Claude Code, you can invoke a skill by name:
Use the "ios-app-icon-generator" skill
Or the agent might recognize from context that a skill applies and load it automatically.
The skill file gets loaded into context, and the agent follows the instructions step by step.
Natural Language Is the Interface
Here's what makes Skills interesting: they're largely natural language. Yes, they sometimes include helper templates, code snippets, or references to more deterministic programs. But the core of a skill is written in plain English.
I've argued before that prompts are a form of software. Skills make that idea concrete. You're writing instructions that get executed. The fact that they're written in natural language instead of a programming language doesn't make them less powerful. If anything, it makes them more accessible.
This means Skills aren't really a new technology. They're a new framing for something we've always needed: clear, actionable documentation of how things should be done.
And that leads to the insight I keep coming back to:
Humans can be the runtime too.
When you write a skill that says "develop a visual philosophy first, then generate the icon at all required sizes," an AI agent can execute that. But so can a human designer. The same document works for both.
Build Your Own Skills Marketplace
One thing I've found useful is creating a personal collection of skills. I keep mine at github.com/GhostScientist/skills. It's organized by category:
Writing Skills
turn-this-feature-into-a-blog-post: Converts code implementations into technical posts using a "What → Why → How" structure
Design Skills
ios-app-icon-generator: Produces complete iOS app icon sets in all required dimensions
Research Skills
paper-to-intuition: Breaks down academic papers into layered understanding
implement-paper-from-scratch: Step-by-step guidance for implementing research papers
reviewer-2-simulator: Constructive criticism on paper drafts before peer review
Think of it as a personal toolkit. When you encounter a task you do repeatedly, write it down as a skill. Over time, you build a library of your own best practices.
Your team can do the same thing. A shared skills repository becomes a living document of "how we do things here."
Skills in the Wild: Hugging Face's Model Trainer
To see how far skills can go, look at what Hugging Face built. They created an hf-llm-trainer skill that teaches agents how to fine-tune language models. The skill encodes everything: which GPU to pick for your model size, how to configure authentication, when to use LoRA versus full fine-tuning, and how to handle the dozens of other decisions that go into a successful training run.
You can tell an agent something like "Fine-tune Qwen3-0.6B on the open-r1/codeforces-cots dataset for instruction following" and it works. The skill selects the right GPU tier, configures TRL correctly, submits the job, monitors progress, and pushes the finished model to the Hub.
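To illustrate the kind of judgment such a skill encodes in prose, here is a toy decision rule. The tier names and thresholds below are invented for this example and are not Hugging Face's actual logic:

```python
# Invented example: decision logic of the sort the hf-llm-trainer skill
# describes in prose. Tier names and thresholds are made up, not HF's.
def pick_gpu_tier(param_count_b: float, use_lora: bool) -> str:
    """Return a hypothetical GPU tier for a model sized in billions of parameters."""
    # Full fine-tuning holds optimizer state for every weight, so treat it
    # as needing roughly 3x the memory of a LoRA run (illustrative factor).
    effective = param_count_b if use_lora else param_count_b * 3
    if effective <= 1:
        return "small"
    if effective <= 10:
        return "medium"
    return "large"

print(pick_gpu_tier(0.6, use_lora=True))   # → small
print(pick_gpu_tier(7.0, use_lora=False))  # → large
```

In the real skill these decisions live in natural language, which is exactly the point: the agent reads the guidance and applies it, the way a colleague would.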
I used this approach to create Qwen2.5-Coder-7B-Agentic-CoT-LoRA, a LoRA adapter fine-tuned for agentic chain-of-thought reasoning. The skill handled all the complexity. I just described what I wanted.
The Hugging Face skills are open source at github.com/huggingface/skills. They're a great example of skills that combine natural language guidance with deterministic tooling.
Why This Matters Even Without AI
Here's my honest take: writing Skills is worthwhile even if your team isn't using AI agents yet.
The exercise of defining Skills forces you to:
- Document your workflows explicitly. No more tribal knowledge living only in senior engineers' heads.
- Identify gaps and inconsistencies. When you try to write down "how we deploy," you often discover that different team members do it differently.
- Create onboarding material for free. New engineers can read your Skills to understand how your team actually works.
- Reduce cognitive load. Even experienced developers forget steps. A skill file is a checklist you can follow without thinking.
- Prepare for the future. When you do adopt AI-assisted tooling, your Skills are ready to go.
The best Skills aren't written for AI. They're written for clarity. And clear instructions work well for both humans and machines.
A Different Way to Think About Documentation
Traditional documentation often falls into two categories:
- Reference docs: exhaustive but hard to use for specific tasks
- Tutorials: great for learning but rarely match your exact situation
Skills occupy a different niche. They're task-oriented runbooks: step-by-step guides for specific, repeatable workflows in your actual codebase, with your actual tools.
When you frame documentation this way, it becomes more actionable. You're not writing for some hypothetical reader. You're writing for someone (or something) that needs to do this exact thing, right now.
Start Small
You don't need to document every possible workflow on day one. Start with:
- The thing you explained to a teammate last week
- The deployment process everyone always forgets a step of
- The debugging workflow for that one finicky service
- The review checklist for PRs touching the database
Write it as if you were teaching a capable but unfamiliar colleague. Be specific. Include the actual commands. Mention the gotchas.
Then put it somewhere your team can find it. Whether that's a .claude/skills/ directory, a docs/runbooks/ folder, or a wiki page, the format matters less than the fact that it exists.
The Runtime Is Flexible
Agent Skills represent something bigger than AI tooling. They represent a shift toward explicit, executable documentation. Instructions clear enough that anyone (or anything) can follow them.
So yes, write Skills for your AI agents. But remember: you're also writing them for your future self. For the new hire who joins next month. For the teammate covering while you're on vacation.
Prompts are software, and with skills/prompts, humans can be the runtime too! Beep boop.