A new term is taking over the tech world this week: sloppypasta — the act of pasting raw, unedited AI output directly into conversations, documents, or emails.
The website stopsloppypasta.ai went viral over the weekend, making a simple but powerful argument: writing used to be proof that someone thought about something. LLMs broke that contract. Generating text is now essentially free, but reading and verifying it still takes real human effort.
The result? A growing trust problem. When you receive a wall of AI-generated text, you don't know what the sender actually checked, what they understand, or what might be hallucinated. Research from MIT Media Lab and Anthropic suggests that heavy AI delegation creates measurable "cognitive debt": people who outsource their thinking to AI end up understanding less and remembering less.
Why this matters
- If you use AI tools at work, editing and validating output before sharing it isn't just polite; it's how you maintain credibility.
- For teams building products (like us), the standard should be higher: AI assists the work, it doesn't replace the thinking.
- The backlash against low-effort AI content is real and growing. Quality, human-reviewed content will stand out more, not less.
AI is an incredible tool. But copy-paste isn't a strategy.
Links
- Stop Sloppypasta → https://stopsloppypasta.ai
- Hacker News discussion → https://news.ycombinator.com/item?id=43386481
- MIT Media Lab: Your Brain on ChatGPT → https://www.media.mit.edu/projects/your-brain-on-chatgpt/overview/
- Anthropic: AI Assistance and Coding Skills → https://www.anthropic.com/research/ai-assisted-coding