Robots.txt File and Indexing

Manage site indexing with a robots.txt file.


This section provides information on how to use a robots.txt file to manage indexing for Front-End Sites.

Robots.txt File

A robots.txt file instructs search engine crawlers on which URLs and files they may access on your domain. This allows you to prevent crawling and indexing of specified areas of your website.

Add a robots.txt file to a Next.js Project

You can add a robots.txt file to your Front-End Sites project using the Next.js static file serving feature, which serves files placed in the public folder from the root of your domain. The steps below are intended as a guide to help you get started. Refer to the Next.js Crawling and Indexing documentation for more information.

  1. Navigate to your project's root directory, then open the public folder.

  2. Create a file named robots.txt and specify which URLs you want to block or allow crawler access to. For example:

    # robots.txt

    User-agent: *

    # Block crawler access
    Disallow: /billing
    Disallow: /user-profile/

    # Allow crawler access
    Allow: /home
    Allow: /products
    
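To reason about what these rules do, it helps to know that crawlers match each URL path against the rules and the most specific (longest) matching rule wins. The sketch below is a simplified TypeScript model of that matching logic using the rules from the example file; real parsers also handle wildcards (`*`), end anchors (`$`), and per-crawler groups.

```typescript
// Simplified robots.txt rule matching: the longest matching rule
// wins; Allow wins over Disallow on a tie. Wildcard support omitted.
type Rule = { allow: boolean; path: string };

function isAllowed(rules: Rule[], urlPath: string): boolean {
  let best: Rule | null = null;
  for (const rule of rules) {
    if (urlPath.startsWith(rule.path)) {
      if (
        !best ||
        rule.path.length > best.path.length ||
        (rule.path.length === best.path.length && rule.allow)
      ) {
        best = rule;
      }
    }
  }
  // A URL that matches no rule is allowed by default.
  return best ? best.allow : true;
}

// Rules from the example robots.txt above
const rules: Rule[] = [
  { allow: false, path: "/billing" },
  { allow: false, path: "/user-profile/" },
  { allow: true, path: "/home" },
  { allow: true, path: "/products" },
];

console.log(isAllowed(rules, "/billing")); // false: blocked
console.log(isAllowed(rules, "/products/42")); // true: allowed prefix
console.log(isAllowed(rules, "/about")); // true: no rule matches
```

Note that paths like /about, which appear in no rule, remain crawlable: robots.txt is deny-by-exception, not allow-by-exception.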

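If your project uses the Next.js App Router, you can generate robots.txt programmatically instead of serving a static file, via an app/robots.ts metadata file. The sketch below mirrors the rules from the static example; the paths are illustrative and should be adjusted to your site.

```typescript
import type { MetadataRoute } from "next";

// Next.js serves the returned object as /robots.txt.
export default function robots(): MetadataRoute.Robots {
  return {
    rules: {
      userAgent: "*",
      allow: ["/home", "/products"],
      disallow: ["/billing", "/user-profile/"],
    },
  };
}
```

Use one approach or the other: a static public/robots.txt and an app/robots.ts that resolve to the same route will conflict.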
More Resources