On 1 May 2007, Slashdot featured a Forbes article about this: http://www.forbes.com/home/technology/2007/04/29/sanar-google-skyfacet-tech-cx_ag_0430googhell.html Rwehr 16:25, 1 May 2007 (UTC)
Quality of this article
This article reads like a propaganda page for Google and needs to be rewritten, at a minimum to explain that the Google Supplemental Results Index is not nearly as benign as the article makes it out to be.
It may be that the inaccuracies in the article are due only to the contributors' collective lack of knowledge of the subject. For example, Supplemental Results rank in search results only if there are NO Main Web Index pages to show for the query. Google arbitrarily shows less relevant content from the Main Web Index in preference to showing Supplemental Results pages.
New Web content -- perfectly unique, valuable, and informative -- most often goes directly into the Supplemental Results Index simply because it lacks sufficient (internal) PageRank to be included in the Main Web Index.
Duplicate content is NOT a cause of pages being included in the Supplemental Results. More than one Google employee has attempted to debunk that myth over the past year. The only connection between duplicate content and the Supplemental Results is that a site which duplicates its content across multiple URLs may attract links to the various URLs and thus split its PageRank across too many pages. The classic example is a blog which maintains multiple archives of the same posts.
Most people who believe that duplicate content is automatically thrown into the Supplemental Results Index are confused about the significance of Google's "Omitted Results", which is a sign that a filter has been applied to search results. Matt Cutts confirmed in a discussion on SEOmoz that "Omitted Results" -- usually triggered by duplicated titles or meta descriptions -- are NOT necessarily Supplemental Results pages and that it should not be assumed there is any correlation between the two.
The "Lack of Trust" paragraph refers to an old Matt Cutts post explaining the Bigdaddy update (early 2006), which has since been superseded by the May 2007 Searchology Update -- an update that introduced an entirely new, completely rewritten Google search engine (variously called "Google 2.0" and "Google 3.0").
The "High Page Count" paragraph is misleading because it fails to take into consideration that a large content site may, in fact, attract sufficient inbound links to promote much of its content into the Main Web Index. There is no direct correlation between site size and the likelihood that pages will fall into the Supplemental Results Index.
And page freshness has not been shown to be a factor. In fact, many very old, very stale pages remain firmly entrenched in the Main Web Index, despite having undergone no changes, simply because they have many links pointing to them.
The opinions that people express at conferences do not constitute substantiating "facts" and should not be used to shore up the points presented in this article. In fact, a review of SEO opinions is not constructive. What is known is that the Supplemental Results Index exists, that its contents are not indexed as thoroughly as the pages in the Main Web Index, and that Main Web Index pages are given preference over Supplemental Results pages (in search results) even when the Supplemental pages are more relevant. Michael Martinez (talk) 22:27, 26 November 2007 (UTC)
- Overall, I agree completely with your comments. You cannot have a quality article that uses statements made by Google and Googlers as its major sources of information. This article has major problems. -- John Gohde (talk) 20:57, 22 December 2007 (UTC)
Links in the resources section lead to affiliate/vendor sites, blogs, articles, etc. Wikipedia should not be used for profit but for information alone. —Preceding unsigned comment added by 220.127.116.11 (talk) 10:59, 14 July 2010 (UTC)