I have a robots.txt file in my site's root directory with the following content:
User-agent: *
Disallow: /
From what I've read, this tells crawlers such as Google not to crawl any of my site's content, and it's supposedly good for security because bad crawlers can't see the site's directories. But when I search for the site on Google, the result shows:
A description for this result is not available because of this site's robots.txt – learn more.
So my question is: how can a search engine know the site's description if its crawler isn't allowed to access the directory where all my files are?
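For what it's worth, you can check how a standards-compliant crawler interprets those two lines with Python's built-in `urllib.robotparser`. This is just a sketch; `example.com` is a placeholder for your actual domain:

```python
from urllib import robotparser

rp = robotparser.RobotFileParser()
# Feed the parser the exact rules from the question instead of
# fetching a live robots.txt file
rp.parse([
    "User-agent: *",
    "Disallow: /",
])

# With "Disallow: /" every path is off-limits to every crawler
# that honors robots.txt
print(rp.can_fetch("Googlebot", "https://example.com/"))          # False
print(rp.can_fetch("Googlebot", "https://example.com/any/page"))  # False
```

So, as far as I can tell, no obedient crawler should fetch any page at all, which makes Google's behavior here even more confusing to me.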