Automated Content Access Protocol ("ACAP") was proposed in 2006 as a method of providing machine-readable permissions information for content, in the hope that it would allow automated processes (such as search-engine web crawling) to comply with publishers' policies without the need for human interpretation of legal terms. ACAP was developed by organisations that claimed to represent sections of the publishing industry (World Association of Newspapers, European Publishers Council, International Publishers Association). It was intended to support more sophisticated online publishing business models, but was criticised for being biased towards the fears of publishers who see search and aggregation as a threat rather than as a source of traffic and new readers.
ACAP completed a feasibility stage and was formally announced at the Frankfurt Book Fair on 6 October 2006. A pilot project involving a group of major publishers and media groups, working alongside search engines and other technical partners, commenced in January 2007 and was formally launched, with its participants announced, by February 2007. By April 2007 the participants and technical partners had undertaken to specify and agree various use cases for ACAP to address, and a technical workshop, attended by the participants and invited experts, was held in London to discuss the use cases and agree next steps.
In November 2007 ACAP announced that the first version of the standard was ready. No non-ACAP members, whether publishers or search engines, have adopted it. A Google spokesman appeared to rule out adoption, and in March 2008 Google's CEO Eric Schmidt stated that "At present it does not fit with the way our systems operate". No progress has been announced since those remarks, and Google, along with Yahoo! and MSN, have since reaffirmed their commitment to the use of robots.txt and sitemaps.
ACAP and search engines
It has been suggested that ACAP is unnecessary, since the robots.txt protocol already exists for the purpose of managing search engine access to websites. However, others support ACAP's view that robots.txt is no longer sufficient. ACAP argues that robots.txt was devised at a time when both search engines and online publishing were in their infancy and is therefore insufficiently nuanced to support today's much more sophisticated business models of search and online publishing. ACAP aims to make it possible to express more complex permissions than the simple binary choice of "inclusion" or "exclusion".
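To make the contrast concrete: a conventional robots.txt record offers only a wholesale grant or refusal of crawling, along these lines:

    User-agent: *
    Disallow: /archive/

ACAP version 1 was expressed as extension fields layered on top of the robots.txt syntax, attaching separate permissions to separate uses of the content (crawling, indexing, preservation and so on). The field names below follow the general flavour of the ACAP 1.0 drafts but should be read as illustrative rather than as a definitive rendering of the specification:

    ACAP-crawler: *
    ACAP-allow-crawl: /news/
    ACAP-disallow-index: /news/premium/
    ACAP-disallow-preserve: /news/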
As an early priority, ACAP is intended to provide a practical and consensual solution to some of the rights-related issues which in some cases have led to litigation between publishers and search engines.
The Robots Exclusion Standard has always been implemented voluntarily by both content providers and search engines, and ACAP implementation is similarly voluntary for both parties. However, Beth Noveck has expressed concern that the emphasis on communicating access permissions in legal terms will lead to lawsuits if search engines do not comply with ACAP permissions.
No public search engine recognises ACAP. Only one, Exalead, ever confirmed that it would adopt the standard, and it has since ceased functioning as a search portal to focus on the software side of its business.
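In practice this means that ACAP fields placed in a robots.txt file are invisible to crawlers that implement only the Robots Exclusion Protocol. The following minimal sketch uses Python's standard urllib.robotparser to show this; example.com is a placeholder, and the ACAP-* field names are illustrative, as above:

    # A minimal sketch (not official ACAP tooling): Python's standard
    # urllib.robotparser shows how a crawler that speaks only the Robots
    # Exclusion Protocol treats a file carrying ACAP extension fields.
    from urllib.robotparser import RobotFileParser

    ROBOTS_TXT = """\
    User-agent: *
    Disallow: /archive/

    ACAP-crawler: *
    ACAP-allow-crawl: /news/
    ACAP-disallow-crawl: /archive/
    """

    parser = RobotFileParser()
    parser.parse(ROBOTS_TXT.splitlines())

    # Standard robots.txt semantics: a binary allow/deny per path.
    print(parser.can_fetch("*", "https://example.com/news/story.html"))    # True
    print(parser.can_fetch("*", "https://example.com/archive/2007.html"))  # False

    # The ACAP-* fields above (names modelled on the ACAP 1.0 drafts,
    # illustrative only) are unknown to the parser and carry no weight.

Unknown fields are simply skipped, so publishing ACAP permissions alongside standard rules breaks nothing; without adopting search engines, however, they are also inert.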
Comment and debate
Commentators on ACAP have raised two recurring points:
- that keeping the specification simple will be critical to its successful implementation, and
- that the aims of the project are focussed on the needs of publishers rather than of readers, which many have seen as a flaw.
References
- ACAP FAQ: Who is the driving force behind ACAP?
- Douglas, Ian (3 December 2007). "Acap: a shot in the foot for publishing". The Daily Telegraph. Archived from the original on 14 November 2009. Retrieved 3 May 2012.
- Search Engine Watch report of Rob Jonas' comments on ACAP Archived 18 March 2008 at the Wayback Machine
- Corner, Stuart (18 March 2008). "ACAP content protection protocol "doesn't work" says Google CEO". iTWire. Retrieved 11 March 2018.
- Improving on Robots Exclusion Protocol: Official Google Webmaster Central Blog
- IPTC Media Release: News syndication version of ACAP ready for launch and management handed over to the IPTC Archived 15 July 2011 at the Wayback Machine
- Official ACAP press release announcing project launch Archived 10 June 2007 at the Wayback Machine
- News Publishers Want Full Control of the Search Results
- "Why you should care about Automated Content Access Protocol". yelvington.com. 16 October 2006. Archived from the original on 11 November 2006. Retrieved 11 March 2018.
- "FAQ: What about existing technology, robots.txt and why?". ACAP. Archived from the original on 8 March 2018. Retrieved 11 March 2018.
- "Is Google Legal?" OutLaw article about Copiepresse litigation
- Guardian article about Google's failed appeal in Copiepresse case
- Paul, Ryan (14 January 2008). "A skeptical look at the Automated Content Access Protocol". Ars Technica. Retrieved 9 January 2018.
- Noveck, Beth Simone (1 December 2007). "Automated Content Access Protocol". Cairns Blog. Retrieved 9 January 2018.
- Exalead Joins Pilot Project on Automated Content Access
- Search Engine Watch article Archived 27 January 2007 at the Wayback Machine
- Shore.com article about ACAP Archived 21 October 2006 at the Wayback Machine
- IP Watch article about ACAP
- Douglas, Ian (23 December 2007). "Acap shoots back". The Daily Telegraph. Archived from the original on 7 September 2008.
External links
- Official website
- Google's hunger for the news in The Guardian newspaper
- Automated Content Access Protocol: Why? – Wildly Appropriate
- Acap: flawed and broken from the start – Martin Belam
- Automated Content Access Progress
- WAN calls on Google to embrace Acap – Editor and Publisher