Robots exclusion standard
The robots exclusion standard, also known as the robots exclusion protocol or robots.txt protocol, is a convention used to ask web crawlers and other robots to refrain from accessing all or part of a website that is otherwise publicly viewable, so that unwanted pages do not end up in search results. Robots are frequently used by search engines to categorize and index websites, or by webmasters to audit or filter source code.
The robots.txt file
A robots.txt file on a website acts as a request that specified robots ignore certain files or directories when crawling the site. This may be done, for example, to keep internal search results out of indexes, or because the content of the selected directories could be misleading or irrelevant to the site as a whole.
To use it, create a plain-text file named /robots.txt at the root of the website and add the desired rules.
In the following example, access is denied only to the Linguee robot:
User-agent: Linguee Bot
Disallow: /
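As a sketch of how a crawler interprets such rules, Python's standard urllib.robotparser module can check a URL against these directives. The file contents match the example above; the paths and the second bot name are illustrative:

```python
from urllib.robotparser import RobotFileParser

# Parse the same rules shown above, supplied as a list of lines
# instead of being fetched from a live site.
rp = RobotFileParser()
rp.parse([
    "User-agent: Linguee Bot",
    "Disallow: /",
])

# The named robot is blocked from the whole site...
print(rp.can_fetch("Linguee Bot", "/private/page.html"))  # False

# ...while a robot with no matching rule is allowed.
print(rp.can_fetch("SomeOtherBot", "/private/page.html"))  # True
```

Note that robots.txt is purely advisory: well-behaved crawlers perform a check like this before fetching a page, but nothing technically prevents a robot from ignoring the file.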
To target a different robot, replace the User-agent value with that robot's name.
Several robots can be named within the same file, allowing or denying each of them access to different directories of the site.
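For instance, a robots.txt along these lines (the bot name and directory paths are hypothetical) applies one set of rules to a specific robot and another to all remaining robots:

```
User-agent: ExampleBot
Disallow: /private/
Disallow: /drafts/

User-agent: *
Disallow: /drafts/
```

A blank line separates each group of rules; the `*` wildcard in User-agent matches any robot not named in an earlier group.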