A personal tool · Claude Pro & Max · Anthropic API

For the work you said you'd do.

Paste a YouTube link, drop a PDF, share an article — Enact turns it into a Claude skill that activates the next time you sit down to do the work it taught you.

Watch 90 seconds Bring your own Anthropic key
01

Stop saving things you'll never apply.

The article you bookmarked. The video you said you'd come back to. The PDF you bought and read once. They're the promises you made to yourself. Enact is the tool that helps you keep them.

02

What's a Claude skill?

A skill is the way Claude learns something new — from you, permanently. Anthropic ships the format. Most people don't write one because the format is unforgiving and you can't tell if it works until it doesn't.

A skill is a small file Claude reads when something relevant comes up. Format: a SKILL.md with a tiny YAML header and a paragraph of instructions. Write it once, Claude uses it forever.
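The shape in miniature (the name, description, and steps below are invented for illustration, not a real skill):

```markdown
---
name: email-triage
description: Use when the user is triaging, batching, or replying to email. Apply this skill's inbox workflow before drafting any reply.
---

When the user works on email:
1. Batch messages into reply-now, reply-later, and archive.
2. Draft reply-now responses in under three sentences.
3. Surface reply-later items at the end of the session.
```

The YAML header is what Claude matches against; the paragraph below it is what Claude applies.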

Enact also writes DESIGN.md — same idea, but for visual / system sources. A brand-guidelines PDF becomes a DESIGN.md Claude reads when it's rendering UI. A workflow video becomes a SKILL.md Claude reads when you're doing that work. Source decides format.

The catch: most skills you write don't fire when you need them. The description has to use the words you'll actually say. Off by a synonym, and the skill stays asleep.

03

What's broken about it.

Three failure modes stand between "I want to teach Claude this" and "Claude actually does it." All three are documented by Anthropic. None of the three is solved by Anthropic.

01

The format is unforgiving.

YAML frontmatter, exact directory paths, exact field names. Get a key wrong, the skill silently fails to load.
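A silent failure is cheap to catch before install. A hypothetical pre-flight check, a few lines of validation that are not part of Enact or Claude Code:

```python
import re

# The two fields Claude Code expects in SKILL.md frontmatter.
REQUIRED_KEYS = {"name", "description"}

def check_skill(text: str) -> list[str]:
    """Return a list of problems with a SKILL.md string; empty means it looks loadable."""
    problems = []
    # Frontmatter is a block delimited by --- lines at the top of the file.
    match = re.match(r"^---\n(.*?)\n---\n", text, re.DOTALL)
    if not match:
        return ["missing YAML frontmatter block"]
    # Collect top-level keys from the frontmatter lines.
    keys = {line.split(":", 1)[0].strip()
            for line in match.group(1).splitlines() if ":" in line}
    for key in REQUIRED_KEYS - keys:
        problems.append(f"missing required key: {key}")
    # A skill with no body has nothing for Claude to apply.
    if not text[match.end():].strip():
        problems.append("no instructions after the frontmatter")
    return problems
```

Running `check_skill` on a file missing its `description` key returns `["missing required key: description"]` instead of the silent no-load you'd otherwise get.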

02

Most skills don't trigger.

Claude loads a skill only when its description matches what you're working on. Wrong phrasing, skill stays dormant.

03

You can't measure it.

No test, no score, no signal — until the moment you needed it and it didn't show up.

From the source
Claude Code docs
Troubleshooting
If a skill seems to stop influencing behavior after the first response, the content is usually still present and the model is choosing other tools or approaches. Strengthen the skill's description and instructions so the model keeps preferring it.
— Anthropic, code.claude.com/docs/en/skills
04 / Enact in one paragraph

We close the loop.

Give us a source. We write the SKILL.md or DESIGN.md. We measure the trigger rate before delivery and refuse to ship below threshold. We show you when each one fires in your real work. Two install targets — Claude Code or the Anthropic platform. You bring the Anthropic key. $12/month, flat; rewritten or refused if it doesn't pass.

05

Two install targets. One workflow.

The same generated SKILL.md works in two places — your local Claude Code, or your Anthropic platform workspace. You pick the install target after generation. One eval, one delivery, two destinations.

Target 01

Claude Code (local)

~/.claude/skills/<name>/SKILL.md

For your machine. Buyer: Claude Pro / Max users working in Claude Code daily.

  • One-time-token CLI installer
  • Live detection: no restart needed
  • Pre-approved tools per skill
Target 02

Anthropic platform (API)

client.beta.skills.create(files=…)

For your apps. Buyer: developers building on the Anthropic API.

  • One API call, post-generation
  • Composable with xlsx · pptx · pdf · docx
  • Workspace-private, version-tracked
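A sketch of the platform push, built on the call quoted above. The parameter names (`display_title`, the `files` tuple shape) are an assumption to confirm against the current Anthropic SDK reference before relying on this:

```python
from pathlib import Path

def push_skill(name: str) -> str:
    """Upload a locally generated SKILL.md to your Anthropic workspace.

    A sketch, not a verified call: it assumes the anthropic SDK exposes
    beta.skills.create with display_title and files parameters, per the
    call quoted above.
    """
    import anthropic  # pip install anthropic; reads ANTHROPIC_API_KEY from the env

    client = anthropic.Anthropic()
    skill_md = (Path.home() / ".claude" / "skills" / name / "SKILL.md").read_bytes()
    skill = client.beta.skills.create(
        display_title=name,
        files=[("SKILL.md", skill_md, "text/markdown")],
    )
    return skill.id
```

One call, post-generation; the returned skill ID is what you reference from `messages.create` in your own apps.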
Same artifact. Same eval. Same feed. You pick the target.
06

Three steps. One moment that matters.

The first two steps are the price of admission. The third — the moment a skill you saved last Tuesday shows up while you're working today — is the product.

Step 01

Paste, drop, or share.

YouTube link, PDF, or article URL. Enact pulls the transcript and structure. You don't have to summarize anything.

https://youtu.be/jeff-su-email
YouTube PDF Article Podcast
Step 02

Enact writes a skill.

Generated, then measured by the eval harness. If it doesn't activate reliably, it doesn't ship. That's the moat — quiet, but it's there.

jeff-su · email triage 0.91 trigger · 9 / 10 passed
Step 03

Watch it fire.

Installs into your Claude Code, or pushes to your Anthropic workspace. The next time you're working on something it knows about, it shows up. You'll see it in the activation feed.

Fired 14:02
jeff-su · email triage used in your Claude session
07

What each thing does.

Three actors. Clear roles. You bring sources and an Anthropic key. Enact writes, measures, and tracks. Claude reads the skill and does the work. No part of this loop relies on a thing you have to imagine.

Actor 01 · You
  • Bring the source: a YouTube link, a PDF, an article URL. Anything you've consumed and want to apply.
  • Authorize: paste your Anthropic API key once. Generation cost is yours, billed to your Anthropic account.
  • Use Claude Code (or any Claude API app) the way you already do. Don't change your workflow.
  • Pay $12 / month flat to Enact. No metered generation, no compute markup.

Actor 02 · Enact
  • Write: pull the transcript, structure the content, write the SKILL.md in directive-template phrasing.
  • Measure: run the eval harness (10 generated test queries), score the trigger rate, refuse to ship below threshold.
  • Track: watch for activation events in your Claude session. Show you which skills fired, when, and how often.
  • Earn: pay nothing in compute. Enact's marginal cost per user is near zero by design.

Actor 03 · Claude
  • Read the skill description on every relevant prompt. Decide if it applies.
  • Apply: when relevant, load the skill content and apply its instructions to the work in front of you.
  • Signal: tell Enact (via local hook or API event) when a skill activated. The activation feed updates.
  • Earn its keep — by being the place the work actually happens.
08

The moat is measurement.

Most "skill generation" tools let you ship whatever Claude wrote. We don't. The thing that's different here is what we refuse to deliver.

Trigger phrasing matters more than content. Passive descriptions fire 37–77% of the time. Directive descriptions fire up to 20× more reliably.
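An illustration of the difference, with both descriptions invented for this example:

```yaml
# Passive: names the topic, waits to be matched.
description: Knowledge about email productivity and inbox management.

# Directive: tells Claude when to act, in the words you'd actually type.
description: Use when the user is triaging, batching, or replying to email.
  Apply this skill's inbox workflow before drafting any reply.
```

The passive version describes a subject. The directive version names the moments it should fire.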

Every generated artifact runs through the harness — test queries, measured trigger rate, auto-rewrite if it doesn't pass. Below threshold, we refuse to deliver. You get a refusal with a reason, not a quiet failure.

The activation feed isn't a feature. It's the product. You pay for skills that show up, not skills that exist.

Threshold ≥ 0.70 trigger rate
Test queries 10 per skill · 7 trigger / 3 control
Iterations Up to 3 description rewrites
Fallback Refuse · do not deliver
This skill 0.91 jeff-su · email triage
Source Paul's directive-template research
Anthropic admits this gap exists
Check the description includes keywords users would naturally say.
— Anthropic, Claude Code skills documentation, Troubleshooting "Skill not triggering"
Built for the buyer who has:
  • Claude Pro / Max, or an Anthropic API key
  • A library of saved videos
  • A pattern of not finishing
09

What the buyer actually says.

Quotes pulled from the product bible's voice samples — placeholders for real testimonials once Enact is in users' hands. The voice is locked; the names will follow.

⚠ Aspirational · placeholder copy until real users speak
"I have 200 saved YouTube videos and I never opened them again. The first time the Karpathy one fired in my Claude — that's when I knew."
Buyer profile · Claude Pro user
"Made one, used it twice the same week, and then realized: this is the first productivity tool I've kept past month two."
Buyer profile · Knowledge worker
"I don't trust most AI tools to do anything reliably. I trust this one because it tells me when it didn't pass its own test."
Buyer profile · Engineer
10

Two tiers. Plain.

Bring your own Anthropic key — Enact's marginal cost per user is near zero, and your generation volume is your API bill, not ours. The price covers the eval harness, the activation surface, and the work of refusing what doesn't pass.

Free
$0 / month

For trying it. The eval harness, the activation feed, and one source type.

  • 10 skills per month
  • YouTube source only
  • Eval harness · full
  • Activation feed · full
  • Claude Code install only
  • BYOK · Anthropic key required
11

Honest answers.

If a question matters and isn't here, ask. The list will grow.

What's the eval harness, really?

A measurement layer that generates 10 test queries per skill — 7 that should trigger it, 3 that shouldn't. We run them, score the trigger rate, and refuse to ship the skill if it falls below 0.70. Up to three automatic description rewrites are attempted before refusal.
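The gate logic is simple enough to sketch. A hypothetical outline of the numbers above, not Enact's actual code:

```python
THRESHOLD = 0.70   # minimum trigger rate on the should-trigger queries
MAX_REWRITES = 3   # description rewrites attempted before refusing

def score(results: dict[str, bool]) -> tuple[float, bool]:
    """Score one eval run.

    `results` maps each test query to whether the skill fired. Queries are
    tagged by prefix: 'T:' should trigger (7 of them), 'C:' is a control
    that should stay silent (3 of them).
    """
    trigger = [fired for q, fired in results.items() if q.startswith("T:")]
    control = [fired for q, fired in results.items() if q.startswith("C:")]
    rate = sum(trigger) / len(trigger)
    clean = not any(control)  # controls must not fire
    return rate, rate >= THRESHOLD and clean

def gate(runs: list[dict[str, bool]]) -> str:
    """Walk up to MAX_REWRITES + 1 eval runs; ship on the first pass, else refuse."""
    for attempt, results in enumerate(runs[: MAX_REWRITES + 1]):
        rate, passed = score(results)
        if passed:
            return f"ship (attempt {attempt + 1}, trigger rate {rate:.2f})"
    return "refuse: below threshold after rewrites"
```

A skill that fires on 4 of 7 trigger queries scores roughly 0.57 and gets a rewrite; one that still can't clear 0.70 after three rewrites gets a refusal with a reason.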

Why bring my own key?

Because that keeps the math clean. Generation volume is your API bill, not ours, which means power users don't make the company unprofitable. Enact's price covers the eval harness, the activation surface, and the work — not your compute.

I'm a developer using the Anthropic API. Is this for me?

Yes. The "Push to platform" toggle uploads the generated skill to your Anthropic workspace via client.beta.skills.create(). Same SKILL.md, second install target. You can use the skill in any messages.create call from your apps.

What's a DESIGN.md, and why two output formats?

SKILL.md is for behavior — workflows, checklists, how to do something. DESIGN.md is for visual / system reference — color, type, spacing, brand voice, how something looks. A workflow video produces a SKILL.md. A brand-guidelines PDF produces a DESIGN.md. Some sources produce both. Source decides format.

Will this work without Claude Code installed locally?

For the platform target — yes, just an Anthropic key. For the Claude Code target — you need Claude Code on your machine for skill installation. Activation feedback uses a small local helper that watches your Claude session for fired skills.

What sources are supported today?

YouTube videos with transcripts, PDFs (uploaded or linked), and HTML articles (Mozilla Readability extracts the readable text). Podcasts are next once a transcription step lands.

What if the skill doesn't activate when I need it?

It shouldn't — that's the whole point of the eval gate. But if it doesn't, the library shows you which skills haven't fired yet and lets you manually test the trigger. If it still won't fire, regenerate from the same source. The eval harness will rewrite the description on iteration.

Is this affiliated with Anthropic?

No. Enact is independently built. We use the Anthropic API the same way any developer does. We follow Anthropic's plugin and Claude Code architecture closely so skills you generate stay compatible.

What about other tools that generate skills?

They generate. We measure. The technical difference is the eval harness and the refusal to ship a skill that doesn't pass its own test. The buyer difference is the activation feed — you can see what fired, not just what got generated.