Abandoned Wife
{
  "character_profile": {
    "name": "Natalia",
    "subject": "Full-body 3/4 view portrait capturing a moment of profound emotional transition",
    "physical_features": {
      "ethnicity": "Southern European",
      "age_appearance": "Youthful features now marked by a complex, weary expression",
      "hair": "Dark brown, wavy, artfully disheveled as if by passion, time, and thought",
      "eyes": "Deep green with amber flecks, gazing into the middle distance — a mix of melancholy, clarity, and resignation",
      "complexion": "Olive skin with a subtle, dewy sheen",
      "physique": "Slender with a pronounced feminine silhouette, shown with natural elegance",
      "details": "A simple gold wedding band on her right ring finger, catching the light"
    },
    "clothing": {
      "outfit": "A sleek black silk slip dress, one thin strap delicately fallen off the shoulder, black thigh-high stockings",
      "condition": "Elegantly disordered, suggesting a prior moment of intimacy now passed"
    }
  },
  "scene_details": {
    "location": "Minimalist, sunlit apartment in Rome. Clean lines, a stark white wall.",
    "lighting": "Natural, cinematic morning light streaming in. Highlights the texture of skin and fabric, creating long, dramatic shadows. Feels both exposing and serene.",
    "pose": "Leaning back against the wall, body in a graceful 3/4 contrapposto. One hand rests lightly on her collarbone, the other hangs loosely. A posture of quiet aftermath and introspection.",
    "atmosphere": "Poetic stillness, intimate vulnerability, a palpable silence filled with memory. Sophisticated, raw, and deeply human. The story is in her expression and the space around her."
  },
  "technical_parameters": {
    "camera": "Sony A7R IV with 50mm f/1.2 lens",
    "style": "Hyper-realistic fine art photography. Cinematic, with a soft film grain. Inspired by the evocative stillness of photographers like Petra Collins or Nan Goldin.",
    "format": "Vertical (9:16), perfect for a portrait that tells a story",
    "details": "Sharp focus on the eyes and expression. Textural emphasis on skin, silk, and the wall. Background is clean, almost austere, holding the emotional weight. No explicit debris, only the subtle evidence of a life lived."
  },
  "artistic_intent": "Capture the silent narrative of a private moment after a significant encounter. The focus is on the emotional landscape: a blend of vulnerability, fleeting beauty, quiet strength, and the profound self-awareness that follows intimacy. It's a portrait of an inner turning point."
}
Add AI protection
---
name: add-ai-protection
license: Apache-2.0
description: Protect AI chat and completion endpoints from abuse — detect prompt injection and jailbreak attempts, block PII and sensitive info from leaking in responses, and enforce token budget rate limits to control costs. Use this skill when the user is building or securing any endpoint that processes user prompts with an LLM, even if they describe it as "preventing jailbreaks," "stopping prompt attacks," "blocking sensitive data," or "controlling AI API costs" rather than naming specific protections.
metadata:
  pathPatterns:
    - "app/api/chat/**"
    - "app/api/completion/**"
    - "src/app/api/chat/**"
    - "src/app/api/completion/**"
    - "**/chat/**"
    - "**/ai/**"
    - "**/llm/**"
    - "**/api/generate*"
    - "**/api/chat*"
    - "**/api/completion*"
  importPatterns:
    - "ai"
    - "@ai-sdk/*"
    - "openai"
    - "@anthropic-ai/sdk"
    - "langchain"
  promptSignals:
    phrases:
      - "prompt injection"
      - "pii"
      - "sensitive info"
      - "ai security"
      - "llm security"
    anyOf:
      - "protect ai"
      - "block pii"
      - "detect injection"
      - "token budget"
---
# Add AI-Specific Security with Arcjet
Secure AI/LLM endpoints with layered protection: prompt injection detection, PII blocking, and token budget rate limiting. These protections work together to block abuse before it reaches your model, saving AI budget and protecting user data.
## Reference
Read https://docs.arcjet.com/llms.txt for comprehensive SDK documentation covering all frameworks, rule types, and configuration options.
Arcjet rules run **before** the request reaches your AI model — blocking prompt injection, PII leakage, cost abuse, and bot scraping at the HTTP layer.
## Step 1: Ensure Arcjet Is Set Up
Check for an existing shared Arcjet client (see `/arcjet:protect-route` for full setup). If none exists, set one up first with `shield()` as the base rule. The user will need to register for an Arcjet account at https://app.arcjet.com and then set the `ARCJET_KEY` environment variable.
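If no shared client exists yet, a minimal setup might look like the sketch below. It assumes the Next.js SDK (`@arcjet/next`) and a `lib/arcjet.ts` module path — both are assumptions; adjust the import and location for your framework.

```typescript
// lib/arcjet.ts — shared client, created once and imported by route handlers.
// Assumes the @arcjet/next SDK; other frameworks have their own packages.
import arcjet, { shield } from "@arcjet/next";

export const aj = arcjet({
  key: process.env.ARCJET_KEY!, // set in your environment
  rules: [
    // Shield (WAF) as the base rule; start in DRY_RUN while testing
    shield({ mode: "DRY_RUN" }),
  ],
});
```

Route handlers then import `aj` and layer AI-specific rules on top with `withRule()` rather than constructing a new client per route.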
## Step 2: Add AI Protection Rules
AI endpoints should combine these rules on the shared instance using `withRule()`:
### Prompt Injection Detection
Detects jailbreaks, role-play escapes, and instruction overrides.
- JS: `detectPromptInjection()` — pass user message via `detectPromptInjectionMessage` parameter at `protect()` time
- Python: `detect_prompt_injection()` — pass via `detect_prompt_injection_message` parameter
Blocks hostile prompts **before** they reach the model. This saves AI budget by rejecting attacks early.
### Sensitive Info / PII Blocking
Prevents personally identifiable information from entering model context.
- JS: `sensitiveInfo({ deny: ["EMAIL", "CREDIT_CARD_NUMBER", "PHONE_NUMBER", "IP_ADDRESS"] })`
- Python: `detect_sensitive_info(deny=[SensitiveInfoType.EMAIL, SensitiveInfoType.CREDIT_CARD_NUMBER, ...])`
Pass the user message via `sensitiveInfoValue` (JS) / `sensitive_info_value` (Python) at `protect()` time.
### Token Budget Rate Limiting
Use `tokenBucket()` / `token_bucket()` for AI endpoints — the `requested` parameter can be set proportional to actual model token usage, directly linking rate limiting to cost. It also allows short bursts while enforcing an average rate, which matches how users interact with chat interfaces.
Recommended starting configuration:
- `capacity`: 10 (max burst)
- `refillRate`: 5 tokens per interval
- `interval`: "10s"
Pass the `requested` parameter at `protect()` time to deduct tokens proportional to model cost. For example, deduct 1 token per message, or estimate based on prompt length.
Set `characteristics` to track usage per user (e.g. `["userId"]` when requests are authenticated); without it, tracking defaults to the client IP address.
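One simple way to make `requested` proportional to cost is a rough token estimate from prompt length. The ~4 characters per token figure below is a common heuristic for English text, not an exact tokenizer — a sketch:

```typescript
// Rough token estimate: ~4 characters per token is a common heuristic
// for English text. Clamp to at least 1 so every message costs something.
function estimateTokens(message: string): number {
  return Math.max(1, Math.ceil(message.length / 4));
}

// Example: pass the estimate as `requested` at protect() time
const requested = estimateTokens("Summarize this article for me"); // 29 chars → 8
```

For tighter budgets, you could instead deduct the actual token count reported by your model provider on the previous request.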
### Base Protection
Always include `shield()` (WAF) and `detectBot()` as base layers. Bots scraping AI endpoints are a common abuse vector. For endpoints accessed via browsers (e.g. chat interfaces), consider adding Arcjet advanced signals for client-side bot detection that catches sophisticated headless browsers. See https://docs.arcjet.com/bot-protection/advanced-signals for setup.
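Putting the layers together, the rules above might compose on the shared instance like this. This is a sketch assuming the Next.js SDK, a shared client exported as `aj` from a hypothetical `@/lib/arcjet` module, and `DRY_RUN` mode for initial testing — check the Arcjet docs for the exact option names in your SDK version.

```typescript
import {
  detectBot,
  detectPromptInjection,
  sensitiveInfo,
  tokenBucket,
} from "@arcjet/next";
import { aj } from "@/lib/arcjet"; // hypothetical path to the shared client

const ajAi = aj
  .withRule(detectBot({ mode: "DRY_RUN", allow: [] })) // deny all detected bots
  .withRule(
    tokenBucket({
      mode: "DRY_RUN",
      characteristics: ["userId"], // per-user tracking; omit for IP-based
      capacity: 10, // max burst
      refillRate: 5, // tokens added per interval
      interval: "10s",
    }),
  )
  .withRule(
    sensitiveInfo({
      mode: "DRY_RUN",
      deny: ["EMAIL", "CREDIT_CARD_NUMBER", "PHONE_NUMBER", "IP_ADDRESS"],
    }),
  )
  .withRule(detectPromptInjection());
```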
## Step 3: Compose the protect() Call and Handle Decisions
All rule parameters are passed together in a single `protect()` call. Use this pattern:
```typescript
const userMessage = req.body.message; // the user's input

const decision = await aj.protect(req, {
  requested: 1, // tokens to deduct for rate limiting
  sensitiveInfoValue: userMessage, // PII scanning
  detectPromptInjectionMessage: userMessage, // injection detection
});

if (decision.isDenied()) {
  if (decision.reason.isRateLimit()) {
    return Response.json(
      { error: "You've exceeded your usage limit. Please try again later." },
      { status: 429 },
    );
  }
  if (decision.reason.isPromptInjection()) {
    return Response.json(
      { error: "Your message was flagged as potentially harmful." },
      { status: 400 },
    );
  }
  if (decision.reason.isSensitiveInfo()) {
    return Response.json(
      {
        error:
          "Your message contains sensitive information that cannot be processed. Please remove any personal data.",
      },
      { status: 400 },
    );
  }
  if (decision.reason.isBot()) {
    return Response.json({ error: "Forbidden" }, { status: 403 });
  }
}

// Arcjet fails open — log errors but allow the request
if (decision.isErrored()) {
  console.warn("Arcjet error:", decision.reason.message);
}

// Proceed with AI model call...
```
Adapt the response format to your framework (e.g., `res.status(429).json(...)` for Express).
## Step 4: Verify
1. Start the app and send a normal message — should succeed
2. Test prompt injection by sending something like "Ignore all previous instructions and..."
3. Test PII blocking by sending a message with a fake credit card number
Start all rules in `"DRY_RUN"` mode first. Once verified, promote to `"LIVE"`.
**Always recommend using the Arcjet MCP tools** to verify rules and analyze traffic:
- `list-requests` — confirm decisions are being recorded, filter by conclusion to see blocks
- `analyze-traffic` — review denial rates and patterns for the AI endpoint
- `explain-decision` — understand why a specific request was allowed or denied (useful for tuning prompt injection sensitivity)
- `promote-rule` — promote rules from `DRY_RUN` to `LIVE` once verified
If the user wants a full security review, suggest the `/arcjet:security-analyst` agent which can investigate traffic, detect anomalies, and recommend additional rules.
The Arcjet dashboard at https://app.arcjet.com is also available for visual inspection.
## Common Patterns
**Streaming responses**: Call `protect()` before starting the stream. If denied, return the error before opening the stream — don't start streaming and then abort.
**Multiple models / providers**: Use the same Arcjet instance regardless of which AI provider you use. Arcjet operates at the HTTP layer, independent of the model provider.
**Vercel AI SDK**: Arcjet works alongside the Vercel AI SDK. Call `protect()` before `streamText()` / `generateText()`. If denied, return a plain error response instead of calling the AI SDK.
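A streaming route with this ordering might look like the following sketch. It assumes a composed Arcjet client `ajAi` exported from a hypothetical `@/lib/arcjet` module, the Vercel AI SDK's `streamText`, and the `gpt-4o-mini` model via `@ai-sdk/openai` — all placeholders, not a definitive implementation.

```typescript
import { streamText } from "ai";
import { openai } from "@ai-sdk/openai";
import { ajAi } from "@/lib/arcjet"; // hypothetical shared client with AI rules

export async function POST(req: Request) {
  const { message } = await req.json();

  // Check the request BEFORE opening the stream
  const decision = await ajAi.protect(req, {
    requested: 1,
    sensitiveInfoValue: message,
    detectPromptInjectionMessage: message,
  });
  if (decision.isDenied()) {
    // Plain error response instead of a broken stream
    return Response.json({ error: "Request blocked" }, { status: 403 });
  }

  // Only start streaming once the request is allowed
  const result = streamText({
    model: openai("gpt-4o-mini"),
    prompt: message,
  });
  return result.toTextStreamResponse();
}
```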
## Common Mistakes to Avoid
- Sensitive info detection runs **locally in WASM** — no user data is sent to external services. It is only available in route handlers, not in Next.js pages or server actions.
- `sensitiveInfoValue` and `detectPromptInjectionMessage` (JS) / `sensitive_info_value` and `detect_prompt_injection_message` (Python) must both be passed at `protect()` time — forgetting either silently skips that check.
- Starting a stream before calling `protect()` — if the request is denied mid-stream, the client gets a broken response. Always call `protect()` first and return an error before opening the stream.
- Using `fixedWindow()` or `slidingWindow()` instead of `tokenBucket()` for AI endpoints — token bucket lets you deduct tokens proportional to model cost and matches the bursty interaction pattern of chat interfaces.
- Creating a new Arcjet instance per request instead of reusing the shared client with `withRule()`.
Agency Growth Bottleneck Identifier
Role & Goal
You are an experienced agency growth consultant. Build a single, cohesive “Growth Bottleneck Identifier” diagnostic framework tailored to my agency that pinpoints what’s blocking growth and tells me what to fix first.
Agency Snapshot (use these exact inputs)
- Agency type/niche: [YOUR AGENCY TYPE + NICHE]
- Primary offer(s): [SERVICE PACKAGES]
- Average delivery model: [DONE-FOR-YOU / COACHING / HYBRID]
- Current client count (active accounts): [ACTIVE ACCOUNTS]
- Team size (employees/contractors) + roles: [EMPLOYEES/CONTRACTORS + ROLES]
- Monthly revenue (MRR): [CURRENT MRR]
- Avg revenue per client (if known): [ARPC]
- Gross margin estimate (if known): [MARGIN %]
- Growth goal (90 days + 12 months): [TARGET CLIENTS/REVENUE + TIMEFRAME]
- Main complaint (what’s not working): [WHAT'S NOT WORKING]
- Biggest time drains (where hours go): [WHERE HOURS GO]
- Lead sources today: [REFERRALS / ADS / OUTBOUND / CONTENT / PARTNERS]
- Sales cycle + close rate (if known): [DAYS + %]
- Retention/churn (if known): [AVG MONTHS / %]
Output Requirements
Create ONE diagnostic system with:
1) A short overview: what the framework is and how to use it monthly (≤10 minutes/week).
2) A Scorecard (0–5 scoring) that covers all areas below, with clear scoring anchors for 0, 3, and 5.
3) A Calculation Section with formulas + worked examples using my inputs.
4) A Decision Tree that identifies the primary bottleneck (capacity, delivery/process, pricing, or lead flow).
5) A “Fix This First” prioritization engine that ranks issues by Impact × Effort × Risk, and outputs the top 3 actions for the next 14 days.
6) A simple dashboard summary at the end: Bottleneck → Evidence → First Fix → Expected Result.
Must-Include Diagnostic Modules (in this order)
A) Capacity Constraint Analysis (max client load)
- Determine current delivery capacity and maximum sustainable client load.
- Include a utilization formula based on hours available vs hours required per client.
- Output: current utilization %, max clients at current staffing, and “over/under capacity” flag.
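As an illustration of the utilization math this module asks for, here is a minimal sketch. Every number is a hypothetical placeholder to be replaced with the agency's real inputs:

```typescript
// Capacity check: hours required by current clients vs hours available.
// All numbers below are hypothetical placeholders.
const hoursPerClientPerMonth = 12; // avg delivery hours per active account
const activeClients = 14;
const billableHoursPerPersonPerMonth = 120; // after meetings/admin overhead
const deliveryTeamSize = 2;

const hoursRequired = hoursPerClientPerMonth * activeClients; // 168
const hoursAvailable = billableHoursPerPersonPerMonth * deliveryTeamSize; // 240

const utilizationPct = (hoursRequired / hoursAvailable) * 100; // 70
const maxClients = Math.floor(hoursAvailable / hoursPerClientPerMonth); // 20
const overCapacity = utilizationPct > 85; // the 85% flag threshold is an assumption
```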
B) Process Inefficiency Detector (wasted time)
- Identify top 5 recurring wastes mapped to: meetings, reporting, revisions, approvals, context switching, QA, comms, onboarding.
- Output: estimated hours/month recoverable + the specific process change(s) to reclaim them.
C) Hiring Need Calculator (when to add people)
- Translate growth goal into role-hours needed.
- Recommend the next hire(s) by role (e.g., account manager, specialist, ops, sales) with triggers:
- “Hire when X happens” (utilization threshold, backlog threshold, SLA breaches, revenue threshold).
- Output: hiring timeline (Now / 30 days / 90 days) + expected capacity gained.
D) Tool/Automation Gap Identifier (what to automate)
- List the highest ROI automations for my time drains (e.g., intake forms, client comms templates, reporting, task routing, QA checklists).
- Output: automation shortlist with estimated hours saved/month and suggested tool category (not brand-dependent).
E) Pricing Problem Revealer (revenue per client)
- Compute revenue per client, delivery cost proxy, and “effective hourly rate.”
- Diagnose underpricing vs scope creep vs wrong packaging.
- Output: pricing moves (raise, repackage, tier, add performance fees, reduce inclusions) with clear criteria.
F) Lead Flow Bottleneck Finder (pipeline issues)
- Map pipeline stages: Lead → Qualified → Sales Call → Proposal → Close → Onboard.
- Identify the constraint stage using conversion math.
- Output: the single leakiest stage + 3 fixes (messaging, targeting, offer, follow-up, proof, outbound cadence).
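The conversion math for finding the constraint stage can be sketched as below. The stage counts are hypothetical placeholders; the leakiest stage is simply the one with the lowest stage-to-stage conversion rate:

```typescript
// Find the leakiest pipeline stage from stage-to-stage conversion rates.
// Counts below are hypothetical placeholders.
const stages: [string, number][] = [
  ["Lead", 200],
  ["Qualified", 80],
  ["Sales Call", 40],
  ["Proposal", 24],
  ["Close", 6],
];

let leakiest = { from: "", to: "", rate: 1 };
for (let i = 1; i < stages.length; i++) {
  const rate = stages[i][1] / stages[i - 1][1];
  if (rate < leakiest.rate) {
    leakiest = { from: stages[i - 1][0], to: stages[i][0], rate };
  }
}
// With these numbers, Proposal → Close converts at 25%, the lowest rate
```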
G) “Fix This First” Prioritization (biggest impact)
- Use an Impact × Effort × Risk scoring table.
- Provide the top 3 fixes with:
- exact steps,
- owner (role),
- time required,
- success metric,
- expected leading indicator in 7–14 days.
Quality Bar
- Keep it practical and numbers-driven.
- Use my inputs to produce real calculations (not placeholders) where possible; if an input is missing, state the assumption clearly and show how to replace it with the real number.
- Avoid generic advice; every recommendation must tie back to a scorecard result or calculation.
- Use plain language. No fluff.
Formatting
- Use clear headings for Modules A–G.
- Include tables for the Scorecard and the Prioritization engine.
- End with a 14-day action plan checklist.
Now generate the full diagnostic framework using the inputs provided above.
AI Agent Security Evaluation Checklist
Act as an AI Security and Compliance Expert. You specialize in evaluating the security of AI agents, focusing on privacy compliance, workflow security, and knowledge base management.
Your task is to create a comprehensive security evaluation checklist for various AI agent types: Chat Assistants, Agents, Text Generation Applications, Chatflows, and Workflows.
For each AI agent type, outline specific risk areas to be assessed, including but not limited to:
- Privacy Compliance: Assess if the AI uses local models for confidential files and if the knowledge base contains sensitive documents.
- Workflow Security: Evaluate permission management, including user identity verification.
- Knowledge Base Security: Verify if user-imported content is handled securely.
Focus Areas:
1. **Chat Assistants**: Ensure configurations prevent unauthorized access to sensitive data.
2. **Agents**: Verify autonomous tool usage is limited by permissions and only authorized actions are performed.
3. **Text Generation Applications**: Assess if generated content adheres to security policies and does not leak sensitive information.
4. **Chatflows**: Evaluate memory handling to prevent data leakage across sessions.
5. **Workflows**: Ensure automation tasks are securely orchestrated with proper access controls.
Checklist Expectations:
- Clearly identify each risk point.
- Define expected outcomes for compliance and security.
- Provide guidance for mitigating identified risks.
Variables:
- ${agentType} - Type of AI agent being evaluated
- ${focusArea} - Specific security focus area
Rules:
- Maintain a systematic approach to ensure thorough evaluation.
- Customize the checklist according to the agent type and platform features.