By blocking robots from spidering its website, Hacker News prevents Google (et al.) from indexing its content, meaning that if you search for it, you will not find it. Bad news? Not for Hacker News.
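For reference, the kind of robots.txt that shuts out all well-behaved crawlers from an entire site is only two lines (this is the standard Robots Exclusion Protocol form, not necessarily the exact file HN used):

```
User-agent: *
Disallow: /
```

`User-agent: *` matches every crawler, and `Disallow: /` tells them to stay away from every path on the site.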
Hacker News has long been a tight-knit community, and not one driven by page views. They care little whether they grow to be as large as Digg, and this move is a strong one to keep their community unique and cohesive. However, some HN users are not happy:
Proof from Hacker News itself:
What do you think?
Paul Graham has spoken and it’s apparently all one misunderstanding:
“Don’t worry, it doesn’t mean anything. The software for ranking applications runs on the same server, and it is horribly inefficient (something 4 people use every 6 months doesn’t tend to get optimized much). This weekend all of us were reading applications at the same time, and the system was getting so slow that I banned crawlers for a bit to buy us some margin. (Traffic from crawlers is much more expensive for us than traffic from human users, because it interacts badly with lazy item loading.) We only finished reading applications an hour before I had to leave for SXSW, so I forgot to set robots.txt back to the normal one, but I just did now”