This section provides information on how to use a robots.txt file to manage crawling for Front-End Sites.
A robots.txt file instructs search engine crawlers which URLs and files they may access on your domain. This lets you block crawling of specified areas of your website. Note that robots.txt controls crawling, not indexing: a page blocked from crawling can still appear in search results if other sites link to it.
You can add a robots.txt file to your Front-End Sites project using the Next.js static file serving feature. The steps below are intended as a guide to help you get started. Refer to the Next.js Crawling and Indexing documentation for more information.
1. Navigate to your project's root directory and then open the public folder. Next.js serves static files, including robots.txt, from this directory.
2. Create a file named robots.txt and specify which URLs you want to block or allow crawler access to. For example:

   # robots.txt

   # Block crawler access
   User-agent: *
   Disallow: /billing
   Disallow: /user-profile/

   # Allow crawler access
   Allow: /home
   Allow: /products
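Before deploying, you can sanity-check that the rules behave as intended by parsing them with Python's built-in urllib.robotparser. This is a local check, not part of the Next.js setup, and example.com stands in for your own domain:

```python
from urllib.robotparser import RobotFileParser

# The same rules as the robots.txt example above.
rules = """
User-agent: *
Disallow: /billing
Disallow: /user-profile/
Allow: /home
Allow: /products
""".splitlines()

parser = RobotFileParser()
parser.parse(rules)

# Blocked paths should be disallowed for all user agents.
print(parser.can_fetch("*", "https://example.com/billing"))       # False
print(parser.can_fetch("*", "https://example.com/user-profile/")) # False

# Allowed paths should remain fetchable.
print(parser.can_fetch("*", "https://example.com/home"))          # True
print(parser.can_fetch("*", "https://example.com/products"))      # True
```

Once the site is deployed, the file should be reachable at the root of your domain (for example, https://example.com/robots.txt), which is where crawlers look for it.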