By default, indexing is disabled. To enable it, set the SITE_INDEXABLE option to true in the .env file or in your environment variables:

```
SITE_INDEXABLE=true
```
By default, the generated robots.txt allows all robots to crawl the website. This can be modified in the nuxt.config.ts file.
Bloggrify uses the @nuxtjs/robots module to generate the robots.txt file.
You can read more about the @nuxtjs/robots module in its official documentation.
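Bloggrify already wires this module up for you. For reference, a Nuxt module of this kind is enabled through the modules array of nuxt.config.ts; the following is just a sketch of how the pieces fit together in a plain Nuxt project:

```ts
// nuxt.config.ts -- sketch: how a module such as @nuxtjs/robots
// is typically registered in a Nuxt project
export default defineNuxtConfig({
  modules: ['@nuxtjs/robots'],
})
```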
Let's say you want to block all robots from crawling your website. You can do this by adding the following configuration to the nuxt.config.ts file:
```ts
export default defineNuxtConfig({
  robots: {
    UserAgent: '*',
    Disallow: '/'
  }
})
```
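With this configuration, the generated robots.txt should look along these lines:

```
User-agent: *
Disallow: /
```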
If you want to disallow only Yandex from crawling your website, you can do this by adding the following configuration to the nuxt.config.ts file:
```ts
export default defineNuxtConfig({
  robots: {
    UserAgent: 'Yandex',
    Disallow: '/'
  }
})
```
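This time the rule is scoped to a single crawler, so the generated robots.txt should contain something like:

```
User-agent: Yandex
Disallow: /
```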
It's also possible to modify the robots rules in the app.config.ts file, using an array of rule objects. For example, if you want to block Yandex:
```ts
export default defineAppConfig({
  robots: [
    {
      UserAgent: "Yandex",
      Disallow: ["/"],
    },
  ],
})
```
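Since Disallow takes an array here, a rule can also target specific paths instead of the whole site. Here is a sketch reusing the same shape; the /drafts path is purely illustrative and not part of Bloggrify:

```ts
// app.config.ts -- sketch: scope a rule to a single section of the site;
// "/drafts" is a hypothetical path used only for illustration
export default defineAppConfig({
  robots: [
    {
      UserAgent: "*",
      Disallow: ["/drafts"],
    },
  ],
})
```

After changing any of these settings, you can verify the result by opening /robots.txt on your running site.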