The difference between Google's existing Knowledge Graph and its Knowledge Vault is the way facts are accumulated. The Knowledge Graph pulls in information from structured, human-curated sources such as Freebase and Wikipedia, both of which are crowdsourced initiatives. The Knowledge Vault instead accumulates facts from across the entire web: a mix of high-confidence results and low-confidence or ‘dirty’ ones, which are ranked using machine learning.
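The accompanying paper describes combining the confidence scores produced by web extractors with prior probabilities derived from the existing knowledge graph. The sketch below illustrates the general idea of confidence-weighted ranking only; the weights, threshold, and function names are illustrative assumptions, not Google's actual implementation.

```python
from dataclasses import dataclass

@dataclass
class CandidateFact:
    subject: str
    predicate: str
    obj: str
    extractor_confidence: float  # score from a web extractor, 0.0 - 1.0
    prior_probability: float     # score from a link-prediction prior, 0.0 - 1.0

def fused_score(fact: CandidateFact,
                extractor_weight: float = 0.7,
                prior_weight: float = 0.3) -> float:
    """Combine extractor and prior signals into a single confidence.

    The fixed weights here are a simplification; the real system learns
    how to combine its signals rather than using hand-set coefficients.
    """
    return (extractor_weight * fact.extractor_confidence
            + prior_weight * fact.prior_probability)

def rank_facts(facts, high_confidence_threshold: float = 0.9):
    """Sort candidate facts by fused confidence (highest first) and flag
    which ones clear a hypothetical 'high-confidence' threshold."""
    ranked = sorted(facts, key=fused_score, reverse=True)
    return [(f, fused_score(f), fused_score(f) >= high_confidence_threshold)
            for f in ranked]

if __name__ == "__main__":
    candidates = [
        CandidateFact("Barack Obama", "born_in", "Honolulu", 0.95, 0.90),
        CandidateFact("Barack Obama", "born_in", "Kenya", 0.40, 0.05),
    ]
    for fact, score, is_high_confidence in rank_facts(candidates):
        print(f"{fact.subject} {fact.predicate} {fact.obj}: "
              f"{score:.2f} (high confidence: {is_high_confidence})")
```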
The concept behind the Knowledge Vault was presented in a paper authored by Xin Luna Dong, Evgeniy Gabrilovich, Geremy Heitz, Wilko Horn, Ni Lao, Kevin Murphy, Thomas Strohmann, Shaohua Sun and Wei Zhang, all of them from Google Research.
Google has tested the approach in other search and web products. The official blog post announcing the Knowledge Graph and the shift from "strings" to "things" says that the Knowledge Graph isn't just rooted in public sources such as Freebase, Wikipedia and the CIA World Factbook, but is "also augmented at a much larger scale" because Google is "focused on comprehensive breadth and depth".
One of the earliest examples was the Google Q&A service, which used artificial intelligence and a large corpus of data to provide direct answers to questions; the approach was described in a presentation by Google's Peter Norvig. The Q&A service was discontinued in July 2014.
- Hal Hodson (20 August 2014), "Google's fact-checking bots build vast knowledge bank", New Scientist.
- Xin Luna Dong, Evgeniy Gabrilovich, Geremy Heitz, Wilko Horn, Ni Lao, Kevin Murphy, Thomas Strohmann, Shaohua Sun and Wei Zhang (2014), "Knowledge Vault: A Web-scale Approach to Probabilistic Knowledge Fusion".
- "Introducing the Knowledge Graph: things, not strings" (16 May 2012), Google Official Blog.
- "Peter Norvig presentation on fact mining across the web".