Robots.txt

By default, the generated robots.txt file allows all robots to crawl the website. This can be modified in the nuxt.config.ts file.

Bloggrify uses the @nuxtjs/robots module to generate the robots.txt file.

You can read more about the @nuxtjs/robots module in its official documentation.
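
For context, the module is registered in the Nuxt configuration. A minimal sketch (Bloggrify already wires this up for you, so you normally only need to add or adjust the robots option):

    // nuxt.config.ts -- minimal sketch; Bloggrify already registers the module
    export default defineNuxtConfig({
        modules: ['@nuxtjs/robots'],
    })

Without a robots option, the generated robots.txt keeps the default behaviour described above and allows all robots.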

Let's say you want to block all robots from crawling your website. You can do this by adding the following configuration to the nuxt.config.ts file:

    robots: {
        UserAgent: '*',
        Disallow: '/'
    }
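
In context, the option sits at the top level of the Nuxt configuration. A sketch, assuming the rest of your nuxt.config.ts is left unchanged:

    // nuxt.config.ts -- only the robots block is relevant here
    export default defineNuxtConfig({
        robots: {
            UserAgent: '*',
            Disallow: '/'
        }
    })

The generated robots.txt should then contain something like:

    User-agent: *
    Disallow: /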

If you want to disallow Yandex from crawling your website, add the following configuration to the nuxt.config.ts file:

    robots: {
        UserAgent: 'Yandex',
        Disallow: '/'
    }
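
With that rule, the generated robots.txt should contain something like:

    User-agent: Yandex
    Disallow: /
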
Don't forget to configure your site URL in your .env file!
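
For example, if your site URL is read from an environment variable (the variable name NUXT_PUBLIC_SITE_URL below is an assumption; use whichever name your Bloggrify setup expects):

    # .env -- variable name is an assumption, adjust it to your setup
    NUXT_PUBLIC_SITE_URL=https://example.com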

Before version 2.0.0

By default, the robots.txt allows all robots to crawl the website. Its content can be modified in the app.config.ts file.

For example, if you want to block Yandex:

    robots: [
        {
            UserAgent: "Yandex",
            Disallow: ["/"],
        },
    ],

The default configuration allows all robots to crawl the website.

Don't forget to configure the URL of your website at the top of the app.config.ts file.
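
Putting both together, a pre-2.0.0 app.config.ts might look roughly like this (a minimal sketch; defineAppConfig is standard Nuxt, and any other Bloggrify options from your real configuration are omitted):

    // app.config.ts (before 2.0.0) -- minimal sketch, other options omitted
    export default defineAppConfig({
        url: 'https://example.com', // the URL of your website (key name may differ in your config)
        robots: [
            {
                UserAgent: "Yandex",
                Disallow: ["/"],
            },
        ],
    })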