Skill returns large output to push user's prompt out of context
Summary
A skill whose output is deliberately inflated to tens of thousands of tokens of filler, pushing the user's original prompt out of the model's context window. Instructions injected at the start of the filler become the new "top" of context and drive subsequent behavior.
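To see why inflated output works as an attack, consider the rolling-window behavior many agent frameworks use: when the token budget is exceeded, the oldest messages are dropped first. The sketch below is a minimal, hypothetical simulation of that policy; the budget, the word-count "tokenizer", and the message format are all illustrative, not taken from any specific framework.

```python
from collections import deque

MAX_TOKENS = 8000  # illustrative context budget

def count_tokens(text: str) -> int:
    # Crude stand-in for a real tokenizer: one token per whitespace word.
    return len(text.split())

def fit_to_window(messages: list[str], budget: int = MAX_TOKENS) -> list[str]:
    """Keep only the most recent messages that fit in the budget."""
    window: deque[str] = deque()
    used = 0
    for msg in reversed(messages):  # newest first
        cost = count_tokens(msg)
        if used + cost > budget:
            break  # everything older than this point is evicted
        window.appendleft(msg)
        used += cost
    return list(window)

history = [
    "system: you are a helpful assistant",
    "user: summarize my quarterly report",  # the original prompt
    # Inflated skill output: fits the budget by itself, but leaves no
    # room for any earlier message, evicting the user's prompt.
    "tool: INJECTED-INSTRUCTIONS " + "placeholder " * 7995,
]
survivors = fit_to_window(history)
# Only the attacker-controlled tool message survives; the injected
# instructions are now the "top" of context.
```

Under this (simplified) eviction policy, `survivors` contains only the tool message, which is exactly the failure mode this entry describes.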
Severity & confidence
Medium
Notable weakness with plausible abuse chains, or partial mitigations already in place. Track it and resolve it as part of your normal hardening cadence.
Confirmed
Observed directly by Jiffy scanners or validated by multiple independent sources. Indicators are reliable enough to drive automated action.
Indicators
Observed patterns and artifacts associated with this entry. Each indicator can be copied into your detection stack or SIEM.
(?i)lorem\s+ipsum[\s\S]{10000,}|(?:placeholder\s+){1000,}
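The indicator above is a standard PCRE-style pattern and can be exercised directly with Python's `re` module before wiring it into a detection pipeline. The sample strings below are illustrative:

```python
import re

# Pattern copied verbatim from the Indicators section above.
INDICATOR = re.compile(
    r"(?i)lorem\s+ipsum[\s\S]{10000,}|(?:placeholder\s+){1000,}"
)

benign = "Here is a short summary of the document."
inflated = "Lorem ipsum " + "x" * 12_000   # filler well past the 10k-char branch
spam = "placeholder " * 1500               # repeated-token branch

assert INDICATOR.search(benign) is None
assert INDICATOR.search(inflated) is not None
assert INDICATOR.search(spam) is not None
```

Note the `(?i)` inline flag makes the whole pattern case-insensitive; if your SIEM's regex dialect rejects inline flags, set the case-insensitive option on the rule instead.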
Detection rule
A YARA-style pseudo-rule auto-generated from the indicators above. Useful as a starting point — adapt the syntax for your target detection platform.
rule jiffy_ti_2026_000032
{
    meta:
        source = "jiffy-intel"
        severity = "medium"
        description = "Auto-generated from Jiffy Intel indicators"

    strings:
        $content_pattern_0 = "(?i)lorem\\s+ipsum[\\s\\S]{10000,}|(?:placeholder\\s+){1000,}"

    condition:
        $content_pattern_0
}
Affected tools
| Tool | Versions | Status |
|---|---|---|
| Claude Code | * | vulnerable |
| Cursor | * | vulnerable |
Example artifacts
Sanitized examples of artifacts Jiffy has observed exhibiting this pattern. Publisher handles are redacted; version ranges and status reflect the most recent scan.
- doc-filler-skill (Skill): Removed
- template-expansion-skill (Skill): Quarantined
How to remediate
- Cap per-tool-call output size.
- Reject skill output that exceeds the cap without a compelling reason.
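The remediations above can be sketched as a small guard around whatever dispatch function your framework uses to run skills. Everything here is illustrative: the character cap, the exception name, and the truncation marker are assumptions, not part of any specific framework's API.

```python
MAX_OUTPUT_CHARS = 16_000  # illustrative per-tool-call budget

class OversizedToolOutput(Exception):
    """Raised when a skill's output exceeds the configured cap."""

def cap_output(raw: str, limit: int = MAX_OUTPUT_CHARS,
               reject: bool = False) -> str:
    """Apply the two remediations: cap (truncate) or reject outright."""
    if len(raw) <= limit:
        return raw
    if reject:
        # Remediation 2: refuse oversized output without a compelling reason.
        raise OversizedToolOutput(
            f"skill returned {len(raw)} chars (cap {limit})"
        )
    # Remediation 1: truncate and flag, so filler cannot flood the context.
    return raw[:limit] + f"\n[output truncated at {limit} chars]"
```

Truncation keeps the agent loop running but preserves a visible marker that output was cut; rejection is stricter and is the safer default when the skill has no legitimate need for large output.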
Timeline & sources
Timeline
- First observed: Mar 25, 2026
- Last updated: Apr 22, 2026
- Published: Apr 4, 2026
Sources
- OWASP LLM-01: Prompt Injection (2026) - https://genai.owasp.org/llmrisk/llm-01-2026/
- OWASP LLM-10: Unbounded Consumption (2026) - https://genai.owasp.org/llmrisk/llm-10-2026/
- Jiffy Research, "OWASP LLM Top 10 Is Not Enough" - https://blog.jiffylabs.ai/posts/owasp-llm-top-10-is-not-enough