GitHub Agentic Workflows (technical preview, Feb 2026) let you describe automation in plain Markdown — and compile it into a hardened Actions workflow. Here's what the architecture looks like, when to use it, and how to try it today.
Most teams have a GitHub Actions YAML file that nobody touches. You know the one. Somebody wrote it eighteen months ago, it mostly works, and everyone is quietly afraid of breaking it. If you want to change the logic — add a triage step, route issues differently, post a Slack notification on CI failure — you reread the YAML for twenty minutes, make a small edit, push, and pray the indentation is right.
In February 2026, GitHub shipped a technical preview that changes this equation. Not by making YAML easier. By getting rid of it for a class of workflows where determinism was never the point anyway.
## What GitHub Agentic Workflows Actually Are
GitHub Agentic Workflows launched as a technical preview on February 13, 2026. The headline: define automation behavior in plain Markdown, and the `gh aw compile` command produces the GitHub Actions YAML for you.
But the more important story isn't the compilation step. It's what kind of automation this unlocks.
Traditional GitHub Actions workflows are deterministic. A PR is opened → run tests → tests pass → merge. Every branch has a defined outcome. That's exactly right for build and release pipelines. You want reproducibility there.
Agentic workflows are for the other category: tasks where you want intelligent judgment, not a decision tree. Triage a new issue and label it correctly based on its content. Review a PR for patterns that match known security anti-patterns. Investigate why CI failed and post a summary of what changed. Scan open issues weekly and surface the ones that are trending. These tasks require understanding context, not following a script. That's where the AI engine — Copilot, Claude Code, or OpenAI Codex — runs inside the workflow.
The result is what GitHub calls "Continuous AI" alongside traditional CI/CD. Not a replacement. An addition.
## The File Format: Markdown With Frontmatter
Every agentic workflow lives in `.github/workflows/` as a `.md` file. The structure combines YAML frontmatter (triggers, permissions, tools, safe outputs) with plain Markdown instructions for the AI agent.
Here's what an issue triage workflow looks like:
```markdown
---
on:
  issues:
    types: [opened]
permissions:
  issues: read
tools:
  - github
safe-outputs:
  add-labels:
    allowed: [bug, enhancement, question, documentation, duplicate]
    max: 3
  add-comment:
    max: 1
---

# Issue Triage

You are a helpful assistant that triages newly opened GitHub issues.

When a new issue is opened:

1. Read the issue title and body carefully
2. Determine the appropriate label from the allowed list
3. If the issue is a duplicate of an existing open issue, say so in a comment
4. If the issue is missing reproduction steps, ask for them politely
5. Add the most appropriate label

Do not close issues. Do not merge pull requests. Limit comments to one per issue.
```
The frontmatter defines the trigger (`on: issues`), the permissions the workflow runs with (read-only), which tools the agent can call, and, critically, which write operations are permitted and how many.
The Markdown below the frontmatter is the natural language instruction set. The AI reads it, reads the GitHub context (the issue content, labels, repository), and decides what to do.
## The Compile Step
The `.md` file is your source of truth. You never edit the compiled output directly.
```bash
# Install the gh extension
gh extension install github/gh-aw

# Compile your .md source into a hardened .lock.yml
gh aw compile

# Run a workflow manually for testing
gh aw run issue-triage
```
Running `gh aw compile` generates a `.lock.yml` file alongside your `.md`. This is the actual GitHub Actions workflow — complete with SHA-pinned dependencies, a sandboxed execution environment, and the Safe Outputs security layer applied. Both files get committed. The `.md` is what humans edit; the `.lock.yml` is what GitHub Actions runs.
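For intuition, here is a rough sketch of the shape a compiled lock file might take. This is illustrative only, not real `gh aw compile` output: the job names are invented, the SHA is a placeholder, and the real file is considerably longer.

```yaml
# Illustrative sketch only — not actual compiler output.
name: issue-triage
on:
  issues:
    types: [opened]
permissions:
  issues: read          # the agent job itself runs read-only
jobs:
  agent:
    runs-on: ubuntu-latest
    steps:
      # dependencies pinned to full commit SHAs, not mutable tags
      - uses: actions/checkout@<full-commit-sha>
      # ...agent engine setup and sandboxed run steps...
  safe_outputs:
    needs: agent
    # validates buffered agent actions against the frontmatter,
    # then performs the permitted writes in this separate job
```

The two-job split is the point: the job that thinks never holds write permissions, and the job that writes never thinks.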
## The Security Model: Safe Outputs
The biggest concern with "AI that can write to your repo" is obvious: what happens when the agent does something unexpected? GitHub's answer is Safe Outputs — a permission model that enforces least-privilege at the workflow level before a single line of agent output touches your repository.
Here's how it works in practice:
The agent job runs with read-only permissions. It can read issues, PRs, commit history, and repository state. It cannot write anything directly. When it wants to take an action — post a comment, add a label, open a PR — it signals that through a buffer. The Safe Outputs subsystem intercepts those signals, validates them against your frontmatter configuration, and only then executes the write operation in a separate job.
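To make the buffer-and-validate idea concrete, here is a minimal sketch in shell. Everything here is invented for illustration: the function name, the messages, and the shape of the config are assumptions, and the real subsystem runs as a separate Actions job, not a script.

```shell
#!/usr/bin/env bash
# Sketch of the Safe Outputs idea: the agent proposes actions,
# a validator checks each one against the frontmatter config
# before anything is actually written.

ALLOWED_LABELS="bug enhancement question"   # allowed: [...]
MAX_COMMENTS=1                              # add-comment: max: 1
comments_used=0

validate_action() {
  local action="$1" value="$2"
  case "$action" in
    add-label)
      # only labels on the allowlist go through; others are dropped
      for l in $ALLOWED_LABELS; do
        if [ "$l" = "$value" ]; then
          echo "APPLY label: $value"
          return
        fi
      done
      echo "DROP label: $value (not in allowed list)"
      ;;
    add-comment)
      # per-run cap, regardless of what the agent asked for
      if [ "$comments_used" -lt "$MAX_COMMENTS" ]; then
        comments_used=$((comments_used + 1))
        echo "APPLY comment"
      else
        echo "DROP comment (max reached)"
      fi
      ;;
    *)
      # anything not declared in safe-outputs is refused outright
      echo "DROP $action (not permitted)"
      ;;
  esac
}

# Simulated agent output buffer:
validate_action add-label bug          # APPLY label: bug
validate_action add-label wontfix      # DROP — not in allowed list
validate_action add-comment ""         # APPLY comment (first one)
validate_action add-comment ""         # DROP — max: 1 reached
validate_action merge-pr ""            # DROP — never permitted
```

The agent can ask for anything; only requests that match the declared configuration survive the filter.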
A more restrictive configuration might look like this:
```yaml
safe-outputs:
  add-comment:
    max: 1
    hide-older-comments: true
  add-labels:
    allowed: [bug, enhancement, question]
    blocked: ["~", "[bot]"]
    max: 2
  create-issue:
    title-prefix: "[ai] "
    labels: [automation]
    max: 3
    expires: 7
```
`max: 1` means the agent can add at most one comment per run, no matter what its output says. `allowed: [bug, enhancement, question]` means it can only apply those three labels — any other label the agent tries to add is silently dropped. `title-prefix: "[ai] "` means every issue the agent creates is visibly marked. `expires: 7` means agent-created issues auto-close after seven days if no human has engaged with them.
Pull requests are never merged automatically. That's a hard constraint in the system, not a configuration option.
The effect is that you can grant a workflow "add labels and post one comment" authority without granting it "do whatever it decides is right" authority. The agent is opinionated; the Safe Outputs layer is the final word.
## Six Things Teams Are Using This For
The use cases GitHub highlighted at launch map neatly onto the chronic maintenance tasks that accumulate on every team:
- **Continuous Triage** — Label and route new issues as they arrive. Home Assistant, which has thousands of open issues, uses this to surface trending problems without human triage bandwidth.
- **Continuous Documentation** — Keep READMEs and API docs aligned with recent code changes. The workflow reads the diff and updates the relevant documentation sections.
- **Continuous Code Simplification** — Scan recently merged code for complexity hotspots and open a PR with a suggested refactor. The PR still requires human review and approval.
- **Continuous Test Improvement** — Assess test coverage gaps after merges and add tests where coverage dropped. Again — opens PRs, doesn't merge them.
- **Continuous Quality Hygiene** — When CI fails, analyze the failure, compare it to recent changes, and post a structured summary to the PR: what changed, what failed, what to look at first.
- **Continuous Reporting** — Generate a weekly or daily issue summarizing repository health: open issues trending up or down, PR age, coverage trajectory, recent releases. Alex Devkar at Carvana cited "built-in controls" as what gave their team confidence to use this across complex systems.
None of these are high-stakes write operations. They're the exact class of task where a capable AI assistant saves hours per week but where you also want clear bounds on what it can actually change.
## Try It Yourself
Here's the full setup, from zero to a running agentic workflow, using Claude Code as the AI engine:
### Step 1: Prerequisites

```bash
# GitHub CLI v2.0.0+
gh --version

# Install the gh-aw extension
gh extension install github/gh-aw

# Verify
gh aw --help
```
### Step 2: Add your AI engine credentials
In your repository settings → Secrets and variables → Actions, add:
```
ANTHROPIC_API_KEY=your-key-here
```
Or if using Copilot: `COPILOT_GITHUB_TOKEN`. Or OpenAI: `OPENAI_API_KEY`.
### Step 3: Create your first workflow
Use the interactive wizard to scaffold a starting point:
```bash
gh aw add-wizard githubnext/agentics/daily-repo-status
```
Or create `.github/workflows/issue-triage.md` manually with this starter:
```markdown
---
on:
  issues:
    types: [opened]
permissions:
  issues: read
tools:
  - github
safe-outputs:
  add-labels:
    allowed: [bug, enhancement, question, documentation]
    max: 2
  add-comment:
    max: 1
---

# Issue Triage Agent

When a new issue is opened, analyze it and:

1. Determine the most appropriate label from the allowed list
2. If reproduction steps are missing for a bug report, post a comment asking for them
3. If it looks like a duplicate of an existing open issue, note that in a comment

Keep comments concise and friendly. Do not close the issue.
```
### Step 4: Compile and commit

```bash
# Generate the .lock.yml
gh aw compile

# Commit both files
git add .github/workflows/issue-triage.md .github/workflows/issue-triage.lock.yml
git commit -m "Add issue triage agentic workflow"
git push
```
### Step 5: Test it

```bash
# Trigger manually to verify behavior before waiting for a real issue
gh aw run issue-triage
```
After the run, check your repository's Actions tab — you'll see a workflow run with the agent's reasoning logged. The write operations (label additions, comments) appear in a separate downstream job.
### Step 6: Iterate on the Markdown
The only file you'll edit going forward is `issue-triage.md`. When you change the instructions or adjust Safe Outputs limits, run `gh aw compile` again to regenerate the lock file. Commit both.
The full workflow definition and the security constraints live in the same file, in plain English (mostly). A developer who has never touched GitHub Actions YAML can read it and understand exactly what the automation does and what it's allowed to change.
## What This Doesn't Replace
GitHub Agentic Workflows are not continuous integration. They don't run your tests, build your artifacts, or deploy your services. Anything that requires determinism — build pipelines, deployment gates, automated merges on passing tests — stays in standard Actions YAML. That boundary is intentional and worth respecting.
The practical split: use traditional Actions for your engineering system of record (build, test, deploy), and agentic workflows for the maintenance and intelligence layer on top. One is a machine; the other is a thoughtful assistant.
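Concretely, a repository using both layers might end up with a workflows directory like this (the file names here are hypothetical):

```
.github/workflows/
├── ci.yml                    # deterministic: build + test
├── deploy.yml                # deterministic: release pipeline
├── issue-triage.md           # agentic: human-edited source
├── issue-triage.lock.yml     # agentic: compiled output, never hand-edited
└── weekly-report.md          # agentic: scheduled reporting
```

The `.yml` files are the machine; the `.md` files are the assistant sitting next to it.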
## The Bottom Line
The dominant mental model for automation has been: describe every step, handle every branch, account for every case. That model is right for pipelines that control production. It's overkill — and often impractical — for the category of tasks that require judgment: triage, review, summarization, documentation.
GitHub Agentic Workflows change the interface for that second category. You write what you want the automation to do, in plain language, and specify what it's allowed to change. The AI figures out the how; the Safe Outputs model enforces the limits.
For teams already using Claude Code or Copilot in their editors, this is the natural extension: the same assistant, running in the background on the repository lifecycle, surfacing what matters, doing the maintenance work that accumulates silently in every backlog.
The technical preview is available now. The setup takes about fifteen minutes. The first workflow you'll thank yourself for is probably issue triage.
For more on the agentic tooling layer: *The LLM Isn't the Bottleneck Anymore. The Ecosystem Is.* and *MCP Hit 97 Million Downloads. Your Security Team Hasn't.*