Link rot

From Wikipedia, the free encyclopedia

Link rot (or linkrot) is an informal term for the process by which, either on individual websites or the Internet in general, increasing numbers of links point to web pages, servers or other resources that have become permanently unavailable. The phrase also describes the effects of failing to update out-of-date web pages that clutter search engine results. A link that no longer works is called a broken link, dead link or dangling link.

Because broken links are annoying to many readers, generally disruptive to the user experience, and can persist for many years, sites containing them are regarded as unprofessional.[citation needed]

Causes

A link may become broken for several reasons. The most common symptom of a dead link is a 404 error, which indicates that the web server responded but the specific page could not be found.

Some news sites contribute to the link rot problem by keeping only recent news articles online at their original URLs, where they are freely accessible, and then removing them or moving them to a paid subscription area. This breaks many supporting links on sites that discuss newsworthy events and use news sites as references.[citation needed]

Another type of dead link occurs when the server that hosts the target page stops working or relocates to a new domain name. In this case the browser may return a DNS error, or it may display a site unrelated to the content sought. The latter can occur when a domain name is allowed to lapse and is subsequently re-registered by another party. Domain names acquired in this manner are attractive to those who wish to exploit the stream of unsuspecting visitors to inflate hit counters and PageRank.

A link might also be broken because of some form of blocking, such as content filters or firewalls. Dead links can also originate on the authoring side, when website content is assembled, copied, or deployed without properly verifying the link targets, or is simply not kept up to date.
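
These failure modes can be told apart programmatically. The following is a minimal Python sketch, using only the standard library, that distinguishes an HTTP 404 returned by a live server from a DNS failure for a lapsed or relocated domain; the example URL and the labels are illustrative only.

    # Rough classification of why a link appears broken: HTTP error code,
    # DNS resolution failure, or some other connection-level problem.
    import socket
    import urllib.error
    import urllib.request

    def classify_link(url, timeout=10):
        """Return a rough label describing whether and why a link is broken."""
        try:
            with urllib.request.urlopen(url, timeout=timeout) as response:
                return f"alive (HTTP {response.status})"
        except urllib.error.HTTPError as err:            # server answered with an error code
            return f"broken: HTTP {err.code}"             # e.g. 404 Not Found
        except urllib.error.URLError as err:
            if isinstance(err.reason, socket.gaierror):   # host name could not be resolved
                return "broken: DNS error"
            return f"broken: connection failed ({err.reason})"

    print(classify_link("http://example.com/"))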

Links deliberately crafted not to resolve, as a type of meme, are known as Zangelding, which roughly translates from German as "tangle thing". A zangelding is essentially a list of self-referencing broken links.

Prevalence

The 404 "not found" response is familiar to even the occasional Web user. A number of studies have examined the prevalence of link rot on the Web, in academic literature, and in digital libraries. In a 2003 experiment, Fetterly et al. discovered that about one link out of every 200 disappeared from the Web each week. McCown et al. (2005) discovered that half of the URLs cited in D-Lib Magazine articles were no longer accessible 10 years after publication, and other studies have shown link rot in academic literature to be even worse (Spinellis, 2003; Lawrence et al., 2001). Nelson and Allen (2002) examined link rot in digital libraries and found that about 3% of the objects were no longer accessible after one year.

Discovering

Detecting link rot for a given URL is difficult using automated methods. If a URL is accessed and returns an HTTP 200 (OK) response, it may be considered accessible, but the contents of the page may have changed and may no longer be relevant. Some web servers also return a soft 404: a page served with a 200 (OK) response even though the requested URL no longer exists, instead of the 404 that would indicate the URL is inaccessible. Bar-Yossef et al. (2004) developed a heuristic for automatically discovering soft 404s. One of the most widely used link checkers is Xenu's Link Sleuth.[citation needed][neutrality is disputed]
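
The general idea behind a soft-404 heuristic can be sketched briefly: request a URL on the same site that almost certainly does not exist, and if the server still answers 200 (OK), its 200 responses cannot be trusted as proof that a page exists. The sketch below is a simplified Python illustration of that idea, not the procedure actually published by Bar-Yossef et al.; the example URL is hypothetical.

    # Probe a site with a made-up sibling URL to see whether it issues soft 404s.
    import uuid
    import urllib.error
    import urllib.parse
    import urllib.request

    def status_of(url, timeout=10):
        """Return the HTTP status code the server gives for a URL."""
        try:
            with urllib.request.urlopen(url, timeout=timeout) as response:
                return response.status
        except urllib.error.HTTPError as err:
            return err.code

    def looks_like_soft_404(url):
        """True if the server answers 200 even for a random, surely-missing page."""
        if status_of(url) != 200:
            return False                                      # a hard error code, not a soft 404
        probe = urllib.parse.urljoin(url, uuid.uuid4().hex)   # random sibling URL
        return status_of(probe) == 200

    print(looks_like_soft_404("http://example.com/some/page.html"))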

Combating

Because dead links reflect poorly on both the linking site and the site linked to, a number of tools and practices have been developed to combat them: some work to prevent broken links in the first place, while others try to resolve them once they have occurred.

Server side

  • Avoiding unmanaged hyperlink collections
  • Avoiding links to pages deep in a website ("deep linking")
  • Using redirection mechanisms (e.g. "301: Moved Permanently") to automatically refer browsers and crawlers to the new location of a URL (a minimal sketch of this mechanism follows this list)
  • Content management systems often offer built-in link management, e.g. links are updated automatically when content is changed or moved on the site.
  • WordPress guards against link rot by replacing non-canonical URLs with their canonical versions.[1]
  • IBM's Peridot attempts to automatically fix broken links.
  • Permalinking stops broken links by guaranteeing that the content will never move. Another form of permalinking is linking to a permalink that then redirects to the actual content, so that links pointing to the resource stay intact even if the real content is moved.
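
As a minimal illustration of the "301: Moved Permanently" redirection mechanism mentioned in the list above, the following Python sketch runs a tiny HTTP server that forwards one old path to its new location; the path mapping and port are hypothetical.

    # Answer requests for a relocated page with a 301 redirect to its new URL,
    # so that browsers and crawlers are sent to the current location.
    from http.server import BaseHTTPRequestHandler, HTTPServer

    OLD_TO_NEW = {"/old-article.html": "/articles/new-location.html"}  # hypothetical mapping

    class RedirectHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            if self.path in OLD_TO_NEW:
                self.send_response(301)                        # Moved Permanently
                self.send_header("Location", OLD_TO_NEW[self.path])
                self.end_headers()
            else:
                self.send_response(404)                        # genuinely unknown path
                self.end_headers()

    if __name__ == "__main__":
        HTTPServer(("localhost", 8080), RedirectHandler).serve_forever()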

User side

  • The Linkgraph widget gets the URL of the correct page, based upon the old broken URL, by using historical location information.
  • The Google 404 Widget employs Google technology to 'guess' the correct URL, and also provides the user a Google search box to find the correct page.
  • When a user receives a 404 response, the Google Toolbar attempts to assist the user in finding the missing page.[2]
  • DeadURL.com gathers and ranks alternate URLs for a broken link using Google Cache, the Internet Archive, and user submissions. Typing deadurl.com/ to the left of a broken link in the browser's address bar and pressing Enter loads a ranked list of alternate URLs, or (depending on user preference) immediately forwards to the best one.

Web archiving

To combat link rot, web archivists are actively engaged in collecting the Web, or particular portions of it, and ensuring the collection is preserved in an archive, such as an archive site, for future researchers, historians, and the public. The largest such effort is the Internet Archive's Wayback Machine, which strives to maintain an archive of the entire Web, taking periodic snapshots of pages that can then be accessed for free and without registration many years later simply by typing in the URL. National libraries, national archives and various consortia of organizations are also involved in archiving culturally important Web content.
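
Archived copies can also be located programmatically. The following Python sketch assumes the Internet Archive's Wayback Machine availability API at https://archive.org/wayback/available, an interface not described in this article, and looks up the closest archived snapshot of a given URL; the example URL is hypothetical.

    # Ask the Wayback Machine for the closest archived snapshot of a URL.
    import json
    import urllib.parse
    import urllib.request

    def closest_snapshot(url):
        """Return the URL of the closest archived snapshot, or None if none is known."""
        query = urllib.parse.urlencode({"url": url})
        with urllib.request.urlopen(f"https://archive.org/wayback/available?{query}") as resp:
            data = json.load(resp)
        snapshot = data.get("archived_snapshots", {}).get("closest")
        return snapshot["url"] if snapshot and snapshot.get("available") else None

    print(closest_snapshot("http://example.com/vanished-page.html"))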

Individuals may also use a number of tools that allow them to archive web resources that may go missing in the future:

  • WebCite, a tool specifically for scholarly authors, journal editors and publishers to permanently archive and retrieve cited Internet references on demand (Eysenbach and Trudel, 2005).
  • Archive-It, a subscription service that allows institutions to build, manage and search their own web archive
  • Some social bookmarking websites, such as Furl, make private copies of web pages bookmarked by their users.
  • Google keeps a text-based cache (backup copy) of the pages it has crawled, which can be used to read the information of recently removed pages. However, unlike in other archiving services, cached pages are not stored permanently.

Authors citing URLs

A number of studies have shown how widespread link rot is in academic literature (see below). Authors of scholarly publications have also developed best practices for combating link rot in their work.

Further reading

  • Ziv Bar-Yossef, Andrei Z. Broder, Ravi Kumar, and Andrew Tomkins (2004). "Sic transit gloria telae: towards an understanding of the Web's decay". Proceedings of the 13th International Conference on World Wide Web. pp. 328–337. doi:10.1145/988672.988716.
  • Tim Berners-Lee (1998). "Cool URIs Don't Change".
  • Gunther Eysenbach and Mathieu Trudel (2005). "Going, going, still there: using the WebCite service to permanently archive cited web pages". Journal of Medical Internet Research. 7 (5): e60. doi:10.2196/jmir.7.5.e60. PMC 1550686. PMID 16403724.
  • Dennis Fetterly, Mark Manasse, Marc Najork, and Janet Wiener (2003). "A large-scale study of the evolution of web pages". Proceedings of the 12th International Conference on World Wide Web.
  • Wallace Koehler (2004). "A longitudinal study of web pages continued: A consideration of document persistence". Information Research. 9 (2).
  • John Markwell and David W. Brooks (2002). "Broken Links: The Ephemeral Nature of Educational WWW Hyperlinks". Journal of Science Education and Technology. 11 (2): 105–108. doi:10.1023/A:1014627511641.


References

  1. Rønn-Jensen, Jesper (2007-10-05). "Software Eliminates User Errors And Linkrot". Justaddwater.dk. Retrieved 2007-10-05.
  2. Mueller, John (2007-12-14). "FYI on Google Toolbar's Latest Features". Google Webmaster Central Blog. Retrieved 2008-07-09.