Automate SEO Audits with an AI Agent
Manual SEO audits are slow and get skipped. An AI agent that runs automatically on every deploy, applying fixes without waiting for you, is a better model. Here's how to build one using the SEOLint API and Claude.
The agentic SEO loop
The pattern is simple. The agent runs on a trigger (deploy, schedule, or manual), scans the target URL, reads the structured results, and applies fixes to the codebase.
- Trigger: Deploy completes, a schedule fires, or you ask Claude manually
- Scan: POST /api/v1/scan → get scanId → poll until complete
- Read: Parse issues[] (each has severity, title, fix_prompt)
- Fix: Claude Code reads fix_prompt, edits the file, commits the change
- Verify: Re-scan after the fix to confirm the issue is resolved
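The Verify step reduces to diffing the issue lists of two scans. A minimal sketch, assuming the issue shape described above (severity, title, fix_prompt); `resolvedIssues` is an illustrative helper, not part of the API:

```javascript
// Compare two scan results and report which issues disappeared.
function resolvedIssues(before, after) {
  const remaining = new Set(after.map(i => i.title))
  return before.filter(i => !remaining.has(i.title))
}

const before = [
  { severity: "critical", title: "Missing meta description" },
  { severity: "warning", title: "Image without alt text" },
]
const after = [{ severity: "warning", title: "Image without alt text" }]

console.log(resolvedIssues(before, after).map(i => i.title))
// → [ 'Missing meta description' ]
```

If the fixed issue still shows up in the re-scan, the agent knows its edit didn't take and can retry or escalate.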
Minimal agent script
This Node.js script scans a URL and prints the fix prompts. Pipe the output into Claude Code with your codebase open. It will apply the fixes.
// seo-agent.mjs
const API_KEY = process.env.SEOLINT_API_KEY
const BASE = "https://seolint.dev/api/v1"

const url = process.argv[2]
if (!url) { console.error("Usage: node seo-agent.mjs <url>"); process.exit(1) }

// 1. Start scan
const { pollUrl } = await fetch(`${BASE}/scan`, {
  method: "POST",
  headers: { "Content-Type": "application/json", Authorization: `Bearer ${API_KEY}` },
  body: JSON.stringify({ url }),
}).then(r => r.json())

// 2. Poll until complete (up to 20 tries, 3s apart)
let result
for (let i = 0; i < 20; i++) {
  result = await fetch(pollUrl, { headers: { Authorization: `Bearer ${API_KEY}` } }).then(r => r.json())
  if (result.status === "complete") break
  await new Promise(r => setTimeout(r, 3000))
}
if (result?.status !== "complete") { console.error("Scan timed out"); process.exit(1) }

// 3. Print fix prompts for the AI agent
const critical = result.issues.filter(i => i.severity === "critical")
console.log(`Found ${critical.length} critical issues on ${url}:\n`)
for (const issue of critical) {
  console.log(`## ${issue.title}`)
  console.log(issue.fix_prompt)
  console.log()
}

Run with: SEOLINT_API_KEY=sl_live_xxx node seo-agent.mjs https://mysite.com
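The fixed 20-try, 3-second poll loop above works, but exponential backoff is kinder to the API on slow scans. A sketch with an injectable fetch function so the logic can be exercised without the network (`pollUntilComplete` and `fetchStatus` are illustrative names; in the agent script, `fetchStatus` would wrap the `fetch(pollUrl, ...)` call):

```javascript
// Poll with exponential backoff: 500ms, 1s, 2s, 4s, ...
async function pollUntilComplete(fetchStatus, { tries = 8, baseMs = 500 } = {}) {
  for (let i = 0; i < tries; i++) {
    const result = await fetchStatus()
    if (result.status === "complete") return result
    await new Promise(r => setTimeout(r, baseMs * 2 ** i))
  }
  throw new Error(`Scan did not complete after ${tries} polls`)
}

// Example with a stub that completes on the third poll:
let calls = 0
const stub = async () =>
  ++calls < 3 ? { status: "pending" } : { status: "complete", issues: [] }
const result = await pollUntilComplete(stub, { baseMs: 1 })
```

Backoff also makes the 20-iteration cap unnecessary: eight doubling waits already span several minutes.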
Let Claude Code apply the fixes
With the MCP server connected, you can ask Claude Code to do the full loop in one command:
# In your project directory with Claude Code
claude "Scan https://mysite.com for SEO issues using the seolint tool, then find the relevant files in this codebase and apply all critical fixes. Commit the changes with a message describing what was fixed."
Claude uses the MCP server to get the scan results, then reads your codebase to find the right files and applies each fix autonomously.
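Connecting the MCP server is a one-time, project-level config step. A sketch of a `.mcp.json` in the repository root, assuming the server ships as an npm package — the package name `@seolint/mcp` and its arguments are illustrative placeholders, so check the SEOLint docs for the real values:

```json
{
  "mcpServers": {
    "seolint": {
      "command": "npx",
      "args": ["-y", "@seolint/mcp"],
      "env": { "SEOLINT_API_KEY": "${SEOLINT_API_KEY}" }
    }
  }
}
```

Keeping the key as an environment-variable reference (rather than a literal) means the config file is safe to commit.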
Scheduled scan with GitHub Actions
Run nightly. If critical issues are found, open a GitHub issue with the full markdown report so your AI agent can pick it up on the next session.
name: Nightly SEO Audit

on:
  schedule:
    - cron: "0 6 * * *" # 6am UTC daily
  workflow_dispatch:

jobs:
  seo-audit:
    runs-on: ubuntu-latest
    steps:
      - name: Scan site
        id: scan
        env:
          API_KEY: ${{ secrets.SEOLINT_API_KEY }}
        run: |
          SCAN=$(curl -s -X POST https://seolint.dev/api/v1/scan \
            -H "Authorization: Bearer $API_KEY" \
            -H "Content-Type: application/json" \
            -d '{"url": "${{ vars.SITE_URL }}"}')
          POLL=$(echo "$SCAN" | jq -r '.pollUrl')
          for i in $(seq 1 20); do
            R=$(curl -s "$POLL" -H "Authorization: Bearer $API_KEY")
            [ "$(echo "$R" | jq -r '.status')" = "complete" ] && break
            sleep 3
          done
          CRITICAL=$(echo "$R" | jq '[.issues[] | select(.severity=="critical")] | length')
          echo "critical=$CRITICAL" >> "$GITHUB_OUTPUT"
          echo "poll_url=$POLL" >> "$GITHUB_OUTPUT"
      - name: Open issue if critical issues found
        if: steps.scan.outputs.critical != '0'
        env:
          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          API_KEY: ${{ secrets.SEOLINT_API_KEY }}
        run: |
          REPORT=$(curl -s "${{ steps.scan.outputs.poll_url }}/markdown" \
            -H "Authorization: Bearer $API_KEY")
          gh issue create \
            --repo "${{ github.repository }}" \
            --title "SEO regression: $(date +%Y-%m-%d)" \
            --body "$REPORT" \
            --label "seo,ai-fix-ready"

The ai-fix-ready label signals to your AI agent (or your team) that the issue has structured fix prompts attached and is ready to be resolved automatically.
FAQ
Can an AI agent apply SEO fixes automatically?
Yes. The SEOLint API returns a fix_prompt for every issue. When you pipe this into Claude Code or Cursor with access to your codebase, the agent can apply the fixes without any manual intervention.
How often should an AI agent run a scan?
On every deploy, the same way you run tests, is the most effective pattern. For static sites, you can also run a nightly scheduled scan via GitHub Actions cron.
Does the API require authentication?
Yes. Pass your API key as a Bearer token in the Authorization header. Keep it in an environment variable. Never hardcode it in your workflow files.
What if the scan finds issues in a pull request?
The GitHub Actions workflow can post a PR comment with the full markdown report. Claude can then read the comment and open a follow-up commit with the fixes applied.
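The comment body can be assembled from the same issues[] payload the agent script consumes. A sketch under that assumption; `issuesToMarkdown` is an illustrative helper, and the API's own /markdown endpoint (shown in the workflow above) is the simpler route when available:

```javascript
// Render scan issues as a markdown report for a PR comment or GitHub issue.
function issuesToMarkdown(url, issues) {
  const lines = [`# SEO report for ${url}`, ""]
  for (const issue of issues) {
    lines.push(`## [${issue.severity}] ${issue.title}`, issue.fix_prompt, "")
  }
  return lines.join("\n")
}

const report = issuesToMarkdown("https://mysite.com", [
  {
    severity: "critical",
    title: "Missing meta description",
    fix_prompt: 'Add a <meta name="description"> tag to the page head.',
  },
])
console.log(report)
```

Because each issue carries its own fix_prompt, the comment doubles as a ready-made task list for the agent's next session.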
Wire SEO into your AI workflow
API key ready instantly. MCP server, REST API, and GitHub Actions included.