Web crawlers are what search engines use to index websites.
They literally crawl the web by following links, reading pages, and reporting the content back to their operators; that's how Google manages to index the internet.
You can (supposedly) stop web crawlers by placing a file called robots.txt at the root of your web server. Since some crawlers ignore that file, you can also regularly review your web server logs and ban traffic from crawler IP addresses or address ranges.
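As a sketch, a minimal robots.txt served from the web root (e.g. https://example.com/robots.txt) might look like this; the /private/ path and the "BadBot" name are just illustrative:

```
# Ask all crawlers to stay out of /private/, allow everything else
User-agent: *
Disallow: /private/

# Ask one specific crawler (hypothetical name) to stay out entirely
User-agent: BadBot
Disallow: /
```

Note that this is purely advisory: well-behaved crawlers like Googlebot honor it, but nothing forces a crawler to comply, which is why log review and IP bans are the fallback.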
If you mean worms, then firewalls can stop those.