de-slopify AI Agent Skill

View Source: oakoss/agent-skills


Installation

npx skills add oakoss/agent-skills --skill de-slopify


De-Slopify

Overview

De-slopify is a methodology for removing telltale signs of AI-generated content from documentation, prose, and code. LLMs produce statistically regular output with characteristic vocabulary, punctuation habits, and structural patterns that make text and code feel inauthentic. Some patterns appear over 1,000x more frequently in LLM output than human writing.

When to use: Before publishing READMEs, after AI-assisted writing sessions, during documentation reviews, when reviewing AI-generated code for over-engineering, before committing prose or code that an LLM touched.

When NOT to use: On code logic or algorithms where correctness matters more than style. On technical specifications where precision outweighs voice. On content that was already human-written and reads naturally.

Quick Reference

| Category | Pattern | Fix |
| --- | --- | --- |
| Punctuation | Em-dash overuse | Semicolons, commas, colons, or split into two sentences |
| Phrase | "Here's why" / "Here's why it matters" | Explain why directly without the lead-in |
| Phrase | "It's not X, it's Y" | "This is Y", or restate the distinction |
| Phrase | "Let's dive in" / "Let's get started" | Delete; just start the content |
| Phrase | "It's worth noting" / "Keep in mind" | Delete the hedge; state the fact |
| Phrase | "At its core" / "In essence" / "Fundamentally" | Delete; say the thing directly |
| Vocabulary | "delve", "tapestry", "landscape", "nuanced" | Replace with plain, specific language |
| Vocabulary | "revolutionize", "cutting-edge", "game-changer" | Replace with concrete claims, or delete |
| Structure | Uniform sentence length throughout | Mix short (5-word) and long (20+ word) sentences |
| Structure | Perfectly balanced lists of exactly 3 items | Vary list length; humans use 2, 4, or odd counts |
| Structure | Generic claims without specifics | Add names, dates, numbers, or first-person detail |
| Sycophancy | "Great question!" / "Absolutely!" | Delete; answer the question directly |
| Meta | "Let me break this down..." / "Let me explain" | Delete the preamble; just break it down |
| Structure | Numbered lists where a sentence suffices | Use a sentence; reserve lists for genuinely parallel items |
| Closer | "In conclusion" / "To summarize" | Delete, or replace with a specific takeaway |
| Code | Over-commented trivial functions | Remove comments that restate the code |
| Code | Unnecessary abstractions and design patterns | Flatten to the simplest working solution |
| Code | Verbose or overly descriptive variable names | Use domain-appropriate concise names |
| Code | Defensive error handling on every operation | Handle errors only where failure is realistic |
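The four Code patterns in the table can be shown as a before/after sketch. The task and the function names are invented for illustration; the point is the shape of the code, not the specifics.

```python
# Before: typical AI-generated slop. Verbose name, comments that restate
# each line, and defensive handling where failure is not realistic.
def calculate_the_total_sum_of_all_items(list_of_numeric_values):
    # Initialize the accumulator to zero
    accumulated_total = 0
    # Iterate over each individual value in the input list
    for individual_value in list_of_numeric_values:
        try:
            # Add the current value to the running total
            accumulated_total += individual_value
        except TypeError:
            # Silently skipping bad values hides real bugs
            pass
    # Return the final accumulated total
    return accumulated_total


# After: de-slopified. Concise domain name, no comments that restate
# the code, and bad input fails loudly where the caller can see it.
def total(values):
    return sum(values)
```

Both versions return the same sum for valid input; the second says so in one line and surfaces type errors instead of papering over them.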

Common Mistakes

| Mistake | Correct Pattern |
| --- | --- |
| Replacing every em-dash mechanically | Evaluate context; sometimes an em-dash is the right choice |
| Editing code blocks for style | Focus on prose; leave code examples and technical syntax untouched |
| Removing all structure to sound casual | Keep headers, tables, and lists intact; rewrite prose only |
| Over-correcting into choppy fragments | Read aloud after editing; recombine sentences that lost flow |
| Applying fixes without defining target voice | Set persona, tone, and audience before starting edits |
| Running regex replacements instead of reading | Manual line-by-line review is required; context determines fixes |
| Ignoring AI code smells | Review AI-generated code for over-engineering, verbose names, and unnecessary abstractions |
| Removing all LLM-typical words unconditionally | Some flagged words are perfectly natural in context; use judgment |
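The regex-replacement mistake suggests a safer automated step: flag candidate patterns for human review instead of rewriting them. The sketch below is an assumed workflow, not part of the skill, and its phrase list is deliberately tiny.

```python
import re

# A few illustrative patterns from the Quick Reference. A real list
# would be longer, and still would not justify auto-replacement.
SLOP_PATTERNS = [
    r"\bdelve\b",
    r"\btapestry\b",
    r"let's dive in",
    r"it's worth noting",
    r"in conclusion",
]

def flag_slop(text):
    """Return (line_number, pattern) pairs for a human to judge in context."""
    hits = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for pattern in SLOP_PATTERNS:
            if re.search(pattern, line, flags=re.IGNORECASE):
                hits.append((lineno, pattern))
    return hits
```

The output is a review queue, not a patch: a hit on "tapestry" in a sentence about actual weaving is exactly the false positive that the manual line-by-line read is there to catch.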

Delegation

  • Scan a repository for documentation files that need de-slopifying: Use Explore agent
  • Rewrite an entire documentation site to remove AI artifacts: Use Task agent
  • Plan a documentation voice guide and editorial workflow: Use Plan agent
  • Review AI-generated code for slop patterns: Use code-reviewer agent

For systematic quality auditing across 12 dimensions (architecture, security, testing, performance, etc.), use the quality-auditor skill.


Security Audit

ath: Safe
socket: Safe (alerts: 0, score: 90)
snyk: Low

How to use this skill

1. Install de-slopify by running npx skills add oakoss/agent-skills --skill de-slopify in your project directory. The skill file will be downloaded from GitHub and placed in your project.

2. No configuration needed. Your AI agent (Claude Code, Cursor, Windsurf, etc.) automatically detects installed skills and uses them as context when generating code.

3. The skill enhances your agent's understanding of de-slopify, helping it follow established patterns, avoid common mistakes, and produce production-ready output.

What you get

Skills are plain-text instruction files, not executable code. They encode expert knowledge about frameworks, languages, or tools that your AI agent reads to improve its output. This means zero runtime overhead, no dependency conflicts, and full transparency: you can read and review every instruction before installing.

Compatibility

This skill works with any AI coding agent that supports the skills.sh format, including Claude Code (Anthropic), Cursor, Windsurf, Cline, Aider, and other tools that read project-level context files. Skills are framework-agnostic at the transport level; the content inside determines which language or framework it applies to.

Data sourced from the skills.sh registry and GitHub. Install counts and security audits are updated regularly.
