Deep linking

From Wikipedia, the free encyclopedia
Revision as of 13:10, 20 April 2011

On the World Wide Web, deep linking is the practice of creating a hyperlink that points to a specific page or image on a website, rather than to that website's main or home page. Such links are called deep links.

Example

This link: http://en.wikipedia.org/wiki/Deep_linking is an example of a deep link. The URL contains all the information needed to point to a particular item, in this case the English Wikipedia article on deep linking, instead of the Wikipedia home page at http://www.wikipedia.org/.
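A deep link is simply an ordinary URL whose path component identifies a resource below the site root. As an illustration (not part of the original article), the example URL above can be broken into its components with Python's standard `urllib.parse` module:

```python
from urllib.parse import urlparse

# Parse the deep link from the example above.
parts = urlparse("http://en.wikipedia.org/wiki/Deep_linking")

print(parts.scheme)   # the protocol: "http"
print(parts.netloc)   # the host: "en.wikipedia.org"
print(parts.path)     # the path naming the specific page: "/wiki/Deep_linking"

# A link to the home page differs only in having a bare "/" path.
home = urlparse("http://www.wikipedia.org/")
print(home.path)      # "/"
```

It is the non-trivial path (`/wiki/Deep_linking`) that makes the link "deep"; nothing else about the URL is special.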

Deep linking and HTTP

The technology behind the World Wide Web, the Hypertext Transfer Protocol (HTTP), does not actually make any distinction between "deep" links and any other links—all links are functionally equal. This is intentional; one of the design purposes of the Web is to allow authors to link to any published document on another site. The possibility of so-called "deep" linking is therefore built into the Web technology of HTTP and URLs by default—while a site can attempt to restrict deep links, to do so requires extra effort. According to the World Wide Web Consortium Technical Architecture Group, "any attempt to forbid the practice of deep linking is based on a misunderstanding of the technology, and threatens to undermine the functioning of the Web as a whole".[1]
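The point that HTTP treats all links alike can be made concrete. The minimal sketch below builds the HTTP/1.1 request line a client would send for a "deep" URL and for a home-page URL; the two requests are structurally identical, differing only in the path:

```python
from urllib.parse import urlparse

def request_line(url: str) -> str:
    """Build the HTTP/1.1 request a client would send for this URL."""
    parts = urlparse(url)
    path = parts.path or "/"
    return f"GET {path} HTTP/1.1\r\nHost: {parts.netloc}\r\n\r\n"

# A "deep" link and a home-page link produce the same kind of request;
# only the path component differs. HTTP itself draws no distinction.
print(request_line("http://en.wikipedia.org/wiki/Deep_linking"))
print(request_line("http://www.wikipedia.org/"))
```

Any server-side restriction on deep linking must therefore be layered on top of HTTP (for example, by inspecting the `Referer` header), which is the "extra effort" described above.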

Usage

Some commercial websites object to other sites making deep links into their content, either because such links bypass advertising on their main pages, because they pass the content off as the linker's own, or, as with The Wall Street Journal, because the site charges users for permanently valid links.

Sometimes, deep linking has led to legal action such as in the 1997 case of Ticketmaster versus Microsoft, where Microsoft deep-linked to Ticketmaster's site from its Sidewalk service. This case was settled when Microsoft and Ticketmaster arranged a licensing agreement.

Ticketmaster later filed a similar case against Tickets.com, and the judge in this case ruled that such linking was legal as long as it was clear to whom the linked pages belonged.[2] The court also concluded that URLs themselves were not copyrightable, writing: "A URL is simply an address, open to the public, like the street address of a building, which, if known, can enable the user to reach the building. There is nothing sufficiently original to make the URL a copyrightable item, especially the way it is used. There appear to be no cases holding the URLs to be subject to copyright. On principle, they should not be."

Deep linking and web technologies

Websites which are built on web technologies such as Adobe Flash and AJAX often do not support deep linking. This can result in usability problems for people visiting such websites. For example, visitors to these websites may be unable to save bookmarks to individual pages or states of the site, web browser forward and back buttons may not work as expected, and use of the browser's refresh button may return the user to the initial page.

However, this is not a fundamental limitation of these technologies. Well-known techniques, and libraries such as SWFAddress and History Keeper, now exist that website creators using Flash or AJAX can employ to provide deep linking to pages within their sites.[3][4][5][6]
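The common technique behind such libraries is to mirror application state in the URL's fragment identifier (the part after `#`), which browsers update without reloading the page. The routing logic itself is plain string handling; the sketch below is purely illustrative (a real Flash or AJAX site would implement it in ActionScript or JavaScript against the browser's `location.hash` and History APIs, and the function name here is invented):

```python
def state_from_fragment(fragment: str, default: str = "home") -> str:
    """Map a URL fragment like '#/gallery/3' to an application state.

    A single-page site restores this state on load, so a bookmarked or
    shared deep link lands on the right view instead of the initial page.
    """
    path = fragment.lstrip("#").strip("/")
    return path if path else default

print(state_from_fragment("#/gallery/3"))  # "gallery/3"
print(state_from_fragment("#"))            # falls back to "home"
```

Because the fragment is part of the URL, bookmarks, the back/forward buttons, and the refresh button all behave as they would on a conventional multi-page site.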

Court rulings

Probably the earliest legal case arising out of deep linking was the 1996 Scottish case of Shetland Times v. Shetland News, in which the Times accused the News of appropriating stories on the Times' website as its own.[7]

In early 2006, in a case between the search engine Bixee.com and the job site Naukri.com, the Delhi High Court in India prohibited Bixee.com from deep linking to Naukri.com.[8]

In December 2006, a Texas court ruled that linking by a motocross website to videos on a Texas-based motocross video production website did not constitute fair use. The court subsequently issued an injunction.[9] This case, SFX Motor Sports, Inc. v. Davis, was not published in official reports, but is available at 2006 WL 3616983.

In a February 2006 ruling, the Danish Maritime and Commercial Court (Copenhagen) found that systematic crawling, indexing and deep linking by the portal site ofir.dk of the real estate site Home.dk did not conflict with Danish law or the database directive of the European Union. The court even stated that search engines are desirable for the functioning of today's Internet, and that anyone publishing information on the Internet must assume, and accept, that search engines deep link to individual pages of their website.[10]

Opt out

Web site owners wishing to prevent search engines from deep linking into their content can use the existing Robots Exclusion Standard (the /robots.txt file) to specify whether they want their content indexed. Some[who?] feel that content owners who fail to provide a /robots.txt file imply that they do not object to deep linking, whether by search engines or by others who might link to their content. Others[who?] believe that content owners may be unaware of the Robots Exclusion Standard, or may not use robots.txt for other reasons. Deep linking is also practiced outside the search-engine context, so some participants in this debate question the relevance of the Robots Exclusion Standard to controversies about deep linking. In any case, the Robots Exclusion Standard does not programmatically enforce its directives, so it cannot prevent search engines, or others who ignore the convention, from deep linking.
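Python's standard library includes a parser for the Robots Exclusion Standard, which makes the opt-out mechanism easy to demonstrate. In this sketch the robots.txt contents and the example.com URLs are invented for illustration:

```python
from urllib.robotparser import RobotFileParser

# A hypothetical robots.txt opting one directory out of crawling.
robots_txt = """\
User-agent: *
Disallow: /members/
"""

rp = RobotFileParser()
rp.parse(robots_txt.splitlines())

# A crawler that honours the standard will skip the disallowed path...
print(rp.can_fetch("*", "http://example.com/members/page1"))   # False
# ...but may deep link anywhere else on the site.
print(rp.can_fetch("*", "http://example.com/articles/page1"))  # True
```

Note that `can_fetch` only reports the site owner's stated preference; as the text above explains, nothing in HTTP prevents a client that ignores the convention from fetching or linking to the disallowed pages anyway.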

See also

References

  1. ^ Bray, Tim (September 11, 2003). ""Deep Linking" in the World Wide Web". W3. Retrieved May 30, 2007.
  2. ^ Finley, Michelle (March 30, 2000). "Attention Editors: Deep Link Away". Wired News.
  3. ^ "Deep-linking to frames in Flash websites".
  4. ^ "Deep Linking for Flash and Ajax".
  5. ^ "History Keeper – Deep Linking in Flash & JavaScript".
  6. ^ "Deep Linking for AJAX".
  7. ^ For a more extended discussion, see generally the Wikipedia article Copyright aspects of hyperlinking and framing.
  8. ^ "High Court Critical On Deeplinking". EFYtimes.com. December 29, 2005. Retrieved May 30, 2007.
  9. ^ McCullagh, Declan. "Judge: Can't link to Webcast if copyright owner objects". News.com. Retrieved May 30, 2007.
  10. ^ "Udskrift af SØ- & Handelsrettens Dombog" [Transcript of the Maritime and Commercial Court record] (PDF) (in Danish). bvhd.dk. February 24, 2006. Retrieved May 30, 2007.