
SEO & Security

robots.txt

The "No Trespassing" sign for your app. Control exactly which areas Google is allowed to see and which stay private.

I used to think: "If I don't link it, Google won't find it."
Wrong. Google finds everything. There is nothing more embarrassing than a customer googling their company name and landing on your internal admin login page.

The robots.txt file is the very first thing a crawler requests. It acts as the gatekeeper. It says: "Hello Google, the blog is public, but the folder /profile is strictly off-limits."

1. Crawler Simulator

Play the gatekeeper. Edit the rules on the left, then test different URLs on the right (e.g., /admin/dashboard) to see whether the bot is blocked (Disallow) or let through (Allow).

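Under the hood, a check like this follows Google's longest-match rule: every Allow/Disallow path that matches the URL is collected, and the most specific (longest) one wins. A minimal sketch of that logic (the `parseRules`/`isAllowed` helpers are hypothetical, not part of the simulator):

```typescript
// Hypothetical minimal robots.txt matcher: the longest matching rule wins.
type Rule = { type: "allow" | "disallow"; path: string };

function parseRules(robotsTxt: string): Rule[] {
  const rules: Rule[] = [];
  for (const line of robotsTxt.split("\n")) {
    const m = line.match(/^(Allow|Disallow):\s*(\S+)/i);
    if (m) rules.push({ type: m[1].toLowerCase() as Rule["type"], path: m[2] });
  }
  return rules;
}

function isAllowed(rules: Rule[], url: string): boolean {
  let best: Rule | null = null;
  for (const r of rules) {
    // Keep the most specific rule whose path prefixes the URL.
    if (url.startsWith(r.path) && (!best || r.path.length > best.path.length)) {
      best = r;
    }
  }
  return !best || best.type === "allow"; // no matching rule means allowed
}

const rules = parseRules(`User-agent: *
Allow: /
Disallow: /admin/`);

console.log(isAllowed(rules, "/blog/post-1"));     // true
console.log(isAllowed(rules, "/admin/dashboard")); // false
```

Note that real crawlers add a few more details (wildcards like `*` and `$`, and a tie-break in favor of Allow), but the longest-match idea above is the core of it.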

2. Implementation

Just like the .htaccess file, robots.txt must physically exist in the root of the deployed app. In Angular, that means it belongs in the public/ folder.

my-angular-project/
├── src/
├── angular.json
└── public/
    ├── robots.txt   ← new
    ├── .htaccess
    └── favicon.ico
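Projects generated with a recent Angular CLI (17.1 or later) already wire the public/ folder up in angular.json; in older projects you may need to map it yourself. A sketch of the relevant build option, assuming the default project layout (JSON forbids comments, so this is the bare fragment):

```json
"assets": [
  { "glob": "**/*", "input": "public" }
]
```

With this in place, everything inside public/ is copied verbatim to the root of the build output, which is exactly where crawlers expect robots.txt.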

The Code

The syntax is plain text. User-agent: * applies the rules to all bots; the Allow and Disallow lines that follow set the permissions.

public/robots.txt
# 1. Rules for ALL bots (Google, Bing, etc.)
User-agent: *

# Allowed: Everything by default
Allow: /

# Disallowed: Sensitive or technical areas
# Keeps your Google index clean from login pages or user profiles.
Disallow: /admin/
Disallow: /profile/
Disallow: /api/

# 2. Sitemap Location (Crucial!)
# Tells the bot where to find all valid links.
Sitemap: https://adenui.com/sitemap.xml
💡 Link your sitemap: Always add the absolute URL of your sitemap at the bottom. This is the fastest way to let Google know about new pages in your app.