Free Robots.txt Tester

Fetch any live website's robots.txt in real time and instantly test whether any URL path is allowed or blocked by Googlebot, Bingbot, Yandex and 40+ crawlers — using RFC 9309-compliant rule matching.

✓ Live Fetch via CORS Proxy ✓ RFC 9309 Rule Matching ✓ 40+ Bot Presets ✓ Batch URL Testing ✓ Paste Custom robots.txt ✓ No Sign-up
Step 1 — Load robots.txt
Fetch from a live website URL, or paste the robots.txt content manually
💡 Enter any website domain — the tool fetches its real, live robots.txt via a CORS proxy. No simulation.
How to Use This Robots.txt Tester
  1. Load robots.txt — Enter any website URL to fetch its live robots.txt file in real time, or switch to "Paste" tab to test a custom robots.txt.
  2. Review parsed rules — The tool parses all user-agent blocks, Disallow/Allow rules, Crawl-delay and Sitemap directives and displays them in a clear table.
  3. Test a URL path — Type any path (e.g. /admin/) in the tester, choose your bot, and get an instant ALLOWED/BLOCKED verdict with the exact matching rule shown.
  4. Batch test — Paste multiple paths to test them all at once against your chosen bot and see a full results table with blocked/allowed counts.
  5. Understand the verdict — Rule matching follows RFC 9309: the most specific (longest) matching path wins. Allow beats Disallow on equal specificity.
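The matching logic described in step 5 can be sketched in a few lines of Python. This is a minimal illustration, not the tool's actual implementation, and it assumes plain path prefixes with no * or $ wildcards:

```python
def robots_verdict(rules, path):
    """Return an ALLOWED/BLOCKED verdict per RFC 9309 longest-match.

    `rules` is a list of (directive, rule_path) tuples, e.g.
    [("disallow", "/admin/"), ("allow", "/admin/public/")].
    Simplification: rule paths are plain prefixes (no * or $).
    """
    best = None  # (matched length, directive)
    for directive, rule_path in rules:
        if path.startswith(rule_path):
            length = len(rule_path)
            # Longest matching rule wins; on a tie, Allow beats Disallow
            if (best is None or length > best[0]
                    or (length == best[0] and directive == "allow")):
                best = (length, directive)
    # No matching rule means the path is allowed by default
    return "ALLOWED" if best is None or best[1] == "allow" else "BLOCKED"

rules = [("disallow", "/admin/"), ("allow", "/admin/public/")]
robots_verdict(rules, "/admin/public/page")  # ALLOWED (longer Allow wins)
robots_verdict(rules, "/admin/secret")       # BLOCKED
```

Note how `/admin/public/page` matches both rules, but the Allow rule's path is longer, so it takes priority.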
Frequently Asked Questions
How do I test if Googlebot can crawl a page?
Enter your domain in Step 1, click "Fetch robots.txt", then in Step 2 type your page path (e.g. /blog/my-post/) and select "Googlebot" from the dropdown. Click Test to get an instant ALLOWED or BLOCKED verdict.
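If you prefer to script the same check, Python's standard-library urllib.robotparser can answer it, though note that it uses simple prefix matching and does not implement RFC 9309 wildcard patterns (* and $). The domain and rules below are placeholders:

```python
from urllib.robotparser import RobotFileParser

# Parse robots.txt content directly — no network fetch needed
rp = RobotFileParser()
rp.parse("""\
User-agent: Googlebot
Disallow: /admin/
""".splitlines())

rp.can_fetch("Googlebot", "https://example.com/admin/users")    # False (blocked)
rp.can_fetch("Googlebot", "https://example.com/blog/my-post/")  # True (allowed)
```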
Why does my page show as blocked even though I didn't mean to block it?
Common causes: a Disallow: / rule blocking the whole site, a catch-all pattern like Disallow: /*, or a path prefix that unintentionally matches your page (e.g. Disallow: /admin also blocks /administration/). The tester shows you exactly which rule matched.
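The prefix pitfall is easy to reproduce. A hypothetical one-liner shows why a rule without a trailing slash over-matches, and why adding the slash fixes it:

```python
def prefix_blocks(rule_path: str, url_path: str) -> bool:
    # RFC 9309 rules without wildcards match by simple prefix comparison
    return url_path.startswith(rule_path)

prefix_blocks("/admin", "/administration/reports")   # True — unintended match
prefix_blocks("/admin/", "/administration/reports")  # False — trailing slash scopes the rule
```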
What does RFC 9309 say about rule matching?
RFC 9309 (the Robots Exclusion Protocol standard) defines: (1) Path matching is case-sensitive. (2) The most specific rule wins — the one with the longest matching path pattern takes priority. (3) If an Allow and a Disallow rule are equally specific, Allow wins. (4) Patterns support * (match any sequence of characters) and $ (end-of-path anchor). (5) A crawler obeys the user-agent group that matches its name; the wildcard * group applies only when no specific group matches.
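The * and $ pattern semantics from point (4) can be illustrated by translating a rule path into a regular expression. This is an informal sketch of the idea, not the tool's implementation:

```python
import re

def rule_to_regex(rule_path: str) -> re.Pattern:
    """Translate an RFC 9309-style rule path into a regex.

    `*` matches any sequence of characters; a trailing `$`
    anchors the end of the path; matching is case-sensitive.
    """
    anchored = rule_path.endswith("$")
    body = rule_path[:-1] if anchored else rule_path
    # Escape literal segments, join with ".*" wherever "*" appeared
    pattern = ".*".join(re.escape(part) for part in body.split("*"))
    return re.compile("^" + pattern + ("$" if anchored else ""))

rule_to_regex("/*.pdf$").match("/files/report.pdf")   # matches
rule_to_regex("/*.pdf$").match("/files/report.pdfx")  # no match ($ anchors the end)
```

Without a trailing $, the compiled pattern still matches any path that begins with the rule, mirroring plain prefix matching.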
Does blocking in robots.txt remove a page from Google?
No. Blocking a page in robots.txt only prevents Googlebot from crawling it — not from indexing it. If the page is linked from other pages, Google may still index it even without crawling. To fully remove a page from the index, use a noindex meta tag or Google Search Console's URL removal tool.
Can I test a robots.txt that isn't on a live website?
Yes — switch to the "Paste robots.txt" tab in Step 1, paste your robots.txt content directly, click "Parse & Load", and then use the tester in Step 2. This works for any robots.txt content — local files, drafts, or content from a staging environment.

About ToollLive Free Robots.txt Tester

ToollLive's free robots.txt tester is the most accurate online tool to test and validate robots.txt rules in real time — no sign-up required. Simply enter any website URL to fetch the live robots.txt file, then test whether any URL path is allowed or blocked for Googlebot, Bingbot, Yandex, DuckDuckBot and 40+ other crawlers. Our tester implements the full RFC 9309 Robots Exclusion Protocol — including wildcard matching, longest-path specificity, and Allow-beats-Disallow tie-breaking. Use the batch tester to check dozens of URLs at once, or paste a custom robots.txt for offline validation. Perfect for SEO audits, developers debugging crawl issues, and webmasters optimising crawl budget. Explore our full free SEO tools suite including our Robots.txt Generator and XML Sitemap Generator.