Copyright © 2003 W3C® (MIT, ERCIM, Keio), All Rights Reserved. W3C liability, trademark, document use, and software licensing rules apply.
The community of Web users has been engaged in discussion and litigation concerning the practice of "deep linking." This document is designed to provide input to this discussion based on the architecture of the underlying technology.
This document has been developed for discussion by the W3C Technical Architecture Group. This finding addresses issue deepLinking-25.
The only change in the 11 Sep 2003 finding is the addition of a reference to a court decision in Germany that relates to deep linking. The TAG decided to add this to the finding at its July face-to-face meeting.
This finding was first accepted by the TAG at its 7 February 2003 face-to-face meeting, and then reconfirmed (with a small change) at its 17 February 2003 teleconference.
Publication of this finding does not imply endorsement by the W3C Membership. This is a draft document and may be updated, replaced or obsoleted by other documents at any time.
Additional TAG findings, both approved and in draft state, may also be available. The TAG expects to incorporate this and other findings into a Web Architecture Document that will be published according to the process of the W3C Recommendation Track.
Please send comments on this finding to the publicly archived TAG mailing list www-tag@w3.org (archive).
1 Introduction and summary
2 Deep linking background
3 The Uniform Resource Identifier and Web Architecture
4 Access Control and Accountability on the Web
5 Deep Linking by Analogy
6 Resource Access is an Issue of Public Policy
7 Conclusion
8 References
9 Policies
This finding discusses the issues raised by the controversies around deep linking. This discussion includes a survey of the usage and meaning of Web addresses and the mechanisms available to control access to resources on the Web.
The conclusion is that any attempt to forbid the practice of deep linking is based on a misunderstanding of the technology, and threatens to undermine the functioning of the Web as a whole. The two chief reasons for this are:
A Web Address ("URI," or "URL") is just an identifier. There is a clear distinction between identifying a resource on the Web and accessing it; suppressing the use of identifiers is not logically consistent.
It is entirely reasonable for owners of Web resources to control access to them. The Web provides several mechanisms for doing this, none of which rely on hiding or suppressing identifiers for those resources.
People engaged in delivering information or services via the World Wide Web typically speak in terms of "Web sites" which have "home pages" or "portal pages." Deep linking is the practice of publishing a hyperlink from a page on one site to a page "inside" another site, bypassing the "home" or "portal" page.
Certain Web publishers wish to prevent or control deep linking into their site, and wish to establish a right to exercise such control as a matter of public policy, i.e., through litigation based on existing law or by instituting new legislation.
This issue centers around the use of hyperlinks. The central feature of a hyperlink, and indeed a central feature of Web architecture, is the notion of a "Uniform Resource Identifier" (URI), often called a "Uniform Resource Locator" or URL, or in everyday speech a "Web address." Every object on the Web must have a URI, which is simply a string of characters that may be typed into a Web browser, read over the phone, or painted on the side of a vehicle.
The only purpose of a URI is to identify a Web resource. It is basic to the architecture of the Web that URIs may be freely interchanged, and that once one knows a URI, one may pass it on to others, publish it, and attempt to access whatever resource it identifies. There is a clear distinction between identifying a resource and accessing it. It is entirely reasonable to control access to a resource, but entirely futile to prevent it being identified.
The formal definition of the URI, on which all of the software that successfully drives the Web is built, is in [RFC2396]. This formal definition has no notion of a "home" or "portal" page, nor does any of the vast amount of software deployed to process URIs. Thus, from the point of view of the underlying technology, all links are deep links.
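To make this concrete, the following sketch (in Python, using an invented example.com address) splits a URI into the generic components defined by [RFC2396]; nothing in that decomposition marks a page as a "home," "portal," or "deep" page.

    from urllib.parse import urlparse

    # An invented "deep" URI, used purely for illustration.
    uri = "http://www.example.com/catalogue/2003/widgets.html?colour=blue"

    parts = urlparse(uri)

    # RFC 2396 breaks a URI into generic components; none of them marks a
    # page as a "home", "portal", or "deep" page.
    print(parts.scheme)  # 'http'
    print(parts.netloc)  # 'www.example.com'
    print(parts.path)    # '/catalogue/2003/widgets.html'
    print(parts.query)   # 'colour=blue'

Software that follows links operates only on these components; a link to a site's front page and a link to a page several levels "inside" the site are processed identically.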
While the Web does not limit anyone's ability to refer to any resource, it offers a rich suite of access-control facilities. The procedures by which resources may be accessed over the Web are those of the Hypertext Transfer Protocol (HTTP), which is formally defined in [RFC2616]. When any piece of software attempts to access a resource via its URI, it sends a request which typically contains a variety of information, including the following (a sketch of such a request appears after this list):
The identity of the software (for example, Microsoft Internet Explorer or the Google indexing engine).
The URI of the resource that contained the link being followed, known as "the Referer" [sic]. This field is not compulsory, but is widely provided by popular user agents such as Web browsers.
Optionally, a user identification and password for the resource being accessed.
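The sketch below (in Python, with invented URIs, user agent name, and credentials) assembles a request carrying these three pieces of information. It illustrates the general shape of such a request rather than the behaviour of any particular piece of software.

    import base64
    import urllib.request

    # Invented URIs, user agent name, and credentials, for illustration only.
    target = "http://www.example.com/catalogue/2003/widgets.html"
    referring_page = "http://www.example.org/reviews/widgets.html"

    request = urllib.request.Request(target)

    # The identity of the requesting software.
    request.add_header("User-Agent", "ExampleBrowser/1.0")

    # The URI of the page that contained the link being followed.
    request.add_header("Referer", referring_page)

    # Optionally, credentials for the resource being accessed
    # (HTTP Basic authentication with an invented username and password).
    credentials = base64.b64encode(b"alice:secret").decode("ascii")
    request.add_header("Authorization", "Basic " + credentials)

    # urllib.request.urlopen(request)  # uncommenting this would actually send the request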
When such a request is received, it may succeed or it may fail. It may fail because there is no resource identified by the URI (the well-known "404 Not Found") or because the server refuses, based on the information available, to grant access (for example, "401 Unauthorized"). A server can be programmed to deny access to any resource for a variety of reasons, including the following (a sketch of such server-side rules appears after this list):
Resource access requires a username and password, and none was provided, or the username was not recognized, or the password was wrong.
The server has a policy about which pages are allowed to link to this page, and the Referer field in the request was not in the approved list or was not provided.
The software requesting access was not of an approved type; for example, some sites limit access to particular Web browsers. The TAG emphasizes that the practice of denying access to content to someone because of their choice of user agent is generally counter-productive (e.g., one is likely to exclude an entire class of user agents such as those running on millions of mobile devices) and harmful (e.g., because it may deny access to users with specialized user agents, including some users with a disability).
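The sketch below (in Python, with an invented resource table, credentials, and user agent string) illustrates how rules of this kind might be expressed on the server side; it is not the behaviour of any particular server product.

    # Invented resource table, credentials, and user agent string, for illustration only.
    RESOURCES = {"/catalogue/2003/widgets.html": "widget data"}
    VALID_CREDENTIALS = {("alice", "secret")}
    BLOCKED_AGENT_PREFIX = "UnapprovedBrowser"

    def decide(path, username, password, user_agent):
        """Return an HTTP status code and reason phrase for a request."""
        if path not in RESOURCES:
            return 404, "Not Found"        # no resource identified by this URI
        if (username, password) not in VALID_CREDENTIALS:
            return 401, "Unauthorized"     # missing or wrong credentials
        if user_agent.startswith(BLOCKED_AGENT_PREFIX):
            return 403, "Forbidden"        # user agent not of an approved type
        return 200, "OK"

    print(decide("/missing.html", "alice", "secret", "ExampleBrowser/1.0"))                 # (404, 'Not Found')
    print(decide("/catalogue/2003/widgets.html", "alice", "wrong", "ExampleBrowser/1.0"))   # (401, 'Unauthorized')
    print(decide("/catalogue/2003/widgets.html", "alice", "secret", "ExampleBrowser/1.0"))  # (200, 'OK')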
Two analogies have been proposed to help illuminate the question of deep linking through parallels in the real world.
The first analogy is with buildings, which typically have a number of doors. A building might have a policy that the public may only enter via the main front door, and only during normal working hours. People employed in the building and in making deliveries to it might use other doors as appropriate. Such a policy would be enforced by a combination of security personnel and mechanical devices such as locks and pass-cards. One would not enforce this policy by hiding some of the building entrances, nor by requesting legislation requiring the use of the front door and forbidding anyone to reveal the fact that there are other doors to the building.
The second analogy is with a library, which has a well-known street address. Each book on the shelves of this library also has an identifier, composed of its title, author, call number, shelf location, and so on. The library certainly will exercise access control to the individual books; but it would be counterproductive to do so by forbidding the publication of their identities.
These analogies are compelling in the context of the deep linking issue. A provider of Web resources who does not make use of the built-in facilities of the Web to control access to a resource is unlikely to achieve either justice or a good business outcome by attempting to suppress information about the existence of the resource.
The Web's structure includes facilities to implement nearly any imaginable set of business policies as regards access control. For example, access policies based on the "Referer" field could restrict access to links from a "home page."
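As a minimal sketch of such a policy (in Python, assuming an invented site whose protected pages may only be reached by following a link from its home page):

    # An invented home-page URI; the policy is that protected pages may only be
    # reached by following a link from this page.
    HOME_PAGE = "http://www.example.com/"

    def allow_access(referer_header):
        """Grant access only to requests whose Referer is the site's home page."""
        return referer_header == HOME_PAGE

    print(allow_access("http://www.example.com/"))          # True: followed from the home page
    print(allow_access("http://www.example.org/reviews/"))  # False: a "deep link" from elsewhere
    print(allow_access(None))                                # False: no Referer field supplied

A check of this kind depends entirely on the Referer field being reported honestly, which leads directly to the point below.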
Unethical parties could, of course, attempt to circumvent such policies, for example by programming software to transmit false values in various request fields, or by stealing passwords, or any number of other nefarious practices. Such a situation has clearly passed from the domain of technology to that of policy. Public policy may need to be developed as to the seriousness of such attempts to subvert the system, the nature of proof required to establish a transgression, the appropriate penalties for transgressors, and so on.
Attempts at the public-policy level to limit the usage, transmission, and publication of URIs are inappropriate and based on a misunderstanding of the Web's architecture. Attempts to control access to the resources identified by URIs are entirely appropriate and well supported by the Web technology.
This issue is important because attempts to limit deep linking are in fact risky for two reasons:
The policy is at risk of failure. The Web is so large that any policy enforcement requires considerable automated support from software to be practical. Since a deep link looks like any other link to Web software, such automated support is not practical.
The Web is at risk of damage. The hypertext architecture of the Web has brought substantial benefits to the world at large. The onset of legislation and litigation based on confusion between identification and access has the potential to impair the future development of the Web.
Below is an incomplete list of policies related to deep linking.