Tag: Web Crawler

  • Google Patent – Duplicate Document Detection

    On 12/1/2009, Google was granted a patent by the US Patent Office detailing how duplicate documents are detected in a web crawler system. The patent describes how Google detects duplicate documents and then filters them, determining which document is the “more important” version so that search results remain unique.
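The patent's exact mechanism isn't reproduced here, but the general idea of grouping duplicates by a content fingerprint and keeping the "more important" copy can be sketched as follows. The `importance` score is a hypothetical stand-in (e.g., some link-based authority measure), not anything from the patent itself:

```python
import hashlib

def fingerprint(text: str) -> str:
    # Normalize whitespace and case so trivially different copies match.
    normalized = " ".join(text.lower().split())
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

def pick_canonical(documents):
    # documents: list of (url, text, importance) tuples; importance is a
    # hypothetical score standing in for whatever signal the crawler uses.
    groups = {}
    for url, text, importance in documents:
        groups.setdefault(fingerprint(text), []).append((url, importance))
    # Keep only the highest-importance URL from each duplicate group.
    return [max(group, key=lambda pair: pair[1])[0]
            for group in groups.values()]

docs = [
    ("http://a.example/page", "Hello   World", 0.9),
    ("http://b.example/copy", "hello world", 0.4),
    ("http://c.example/other", "Something else", 0.7),
]
# pick_canonical(docs) returns one URL per duplicate group,
# dropping the lower-scored copy at b.example.
```

In a real crawler the fingerprint would typically be a near-duplicate signature (e.g., shingling or simhash) rather than an exact hash, since duplicated pages often differ in boilerplate.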

  • How Search Engines Rank Pages

    You may be wondering how search engines select the top pages from among millions of others. The selection is driven by calculations called algorithms, and you have to work with them to get your site onto page one. The great part is that all of the things search engines look for when ranking your site are beneficial to…