Domain authority

From Wikipedia, the free encyclopedia

The domain authority (also referred to as thought leadership) of a website describes its relevance for a specific subject area or industry. This relevance has a direct impact on the website's ranking by search engines, which try to assess domain authority through automated analytic algorithms. The importance of domain authority for website listings in the search engine results pages (SERPs) has given rise to a whole industry of Black Hat SEO providers who attempt to feign an increased level of domain authority.[1] The rankings computed by major search engines, e.g., Google's PageRank, are agnostic of specific industries or subject areas and assess a website in the context of the totality of websites on the Internet.[2] The results on a SERP set the PageRank in the context of a specific keyword. In a less competitive subject area, even websites with a low PageRank can achieve high visibility, because the highest-ranked sites matching the specific search words occupy the first positions in the SERPs.

Dimensions

Domain authority can be described through four dimensions:

  1. prestige of a website and its authors
  2. quality of the information presented
  3. information and website centrality
  4. competitive situation around a subject

The weight of these factors varies with the ranking body. When individuals judge domain authority, decisive factors can include the prestige of a website, the prestige of the contributing authors in a specific domain, the quality and relevance of the information on a website, and the novelty of the content, but also the competitive situation around the discussed subject area and the quality of the outgoing links.[3] Several search engines (e.g., Bing, Google, Yahoo) have developed automated analyses and ranking algorithms for domain authority. Lacking the "human reasoning" that would allow them to judge quality directly, they rely on complementary parameters such as information or website prestige and centrality from a graph-theoretical perspective, manifested in the quantity and quality of inbound links. The Software as a Service company Moz has developed an algorithm and weighted level metric, branded as "Domain Authority", which predicts a website's performance in search engine rankings on a scale from 0 to 100.[4][5]
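As a purely hypothetical illustration of how a weighted level metric over the four dimensions above might work: no search engine or vendor discloses its actual weights or formula, so the weights, names and 0–100 scaling below are assumptions chosen for the sketch.

```python
# Illustrative only: hypothetical weights over the four dimensions
# described above. Real ranking algorithms do not publish such weights.
DIMENSION_WEIGHTS = {
    "prestige": 0.40,             # prestige of the website and its authors
    "information_quality": 0.30,  # quality of the information presented
    "centrality": 0.20,           # information and website centrality
    "competition": 0.10,          # competitive situation around the subject
}

def authority_score(scores):
    """Combine per-dimension scores in [0, 1] into a single 0-100 value."""
    combined = sum(DIMENSION_WEIGHTS[d] * scores[d] for d in DIMENSION_WEIGHTS)
    return round(100 * combined, 1)
```

A site rated highly on prestige but poorly on the other dimensions would still land mid-scale under these example weights; the point is only that a weighted combination can compress several dimensions into one comparable number.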

Prestige of website and authors

Prestige identifies the prominent actors in a qualitative and quantitative manner on the basis of graph theory. A website is considered a node. Its prestige is defined by the quantity of nodes that have directed edges pointing to the website and by the quality of those nodes. The nodes' quality is in turn defined by their own prestige. This definition ensures that a prestigious website is not only pointed at by many other websites but that those pointing websites are prestigious themselves.[6] Similarly to the prestige of a website, the contributing authors' prestige is taken into consideration in those cases where the authors are named and identified (e.g., with their Twitter or Google Plus profile). In this case, prestige is measured by the prestige of the authors who quote or refer to them and by the quantity of referrals these authors receive.[3] Search engines use additional factors to scrutinize a website's prestige. Google's PageRank, for example, looks at factors like link diversification and link dynamics: when too many links come from the same domain or webmaster, there is a risk of Black Hat SEO, and when backlinks grow rapidly, this nourishes the suspicion of spam or Black Hat SEO as the origin. In addition, Google looks at factors like the public availability of the WHOIS information of the domain owner, the use of global top-level domains, domain age and volatility of ownership to assess a site's apparent prestige. Lastly, search engines look at the traffic and the volume of organic searches for a site, as the amount of traffic should be congruent with the level of prestige that a website has in a certain domain.[3]
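The recursive definition above — a node's prestige depends on the prestige of the nodes linking to it — can be computed by power iteration, as in the PageRank family of algorithms. The sketch below is not Google's actual implementation, and the link graph is invented for illustration; it only shows how the recursion converges to stable scores.

```python
# A PageRank-style prestige sketch: each site's score depends recursively
# on the scores of the sites linking to it. Hypothetical link graph.

def prestige(links, damping=0.85, iterations=50):
    """links maps each site to the set of sites it points to."""
    sites = set(links) | {t for targets in links.values() for t in targets}
    n = len(sites)
    score = {s: 1.0 / n for s in sites}          # uniform starting scores
    for _ in range(iterations):
        # every node keeps a small baseline score (the "damping" term)
        new = {s: (1.0 - damping) / n for s in sites}
        for source, targets in links.items():
            if targets:
                # a site passes its prestige, split evenly, to its link targets
                share = damping * score[source] / len(targets)
                for target in targets:
                    new[target] += share
        score = new
    return score

# "hub.com" is linked by two sites and so ends up more prestigious than
# "b.com", which nobody links to.
link_graph = {
    "a.com": {"hub.com"},
    "b.com": {"hub.com"},
    "hub.com": {"a.com"},
}
scores = prestige(link_graph)
```

Note how the definition's second clause also holds: "a.com" outranks "b.com" even though both receive no more links than each other's targets, because its single inbound link comes from the prestigious "hub.com".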

Information quality

Information quality describes the value which information provides to the reader. Wang and Strong categorize the assessable dimensions of information as intrinsic (accuracy, objectivity, believability, reputation), contextual (relevancy, value-added/authenticity, timeliness, completeness, quantity), representational (interpretability, format, coherence, compatibility) and accessibility-related (accessibility and access security).[7] Humans can judge quality from experience in assessing content, style and grammatical correctness. Information systems like search engines need indirect means that allow conclusions about the quality of information. In 2015, Google's ranking algorithm incorporated approximately 200 ranking factors into a learning algorithm to assess information quality.[citation needed]
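The Wang and Strong taxonomy cited above can be encoded as a simple checklist. Averaging ratings per category, as done here, is an illustrative addition of this sketch, not part of the original framework.

```python
# The four Wang & Strong categories and their dimensions, as listed above.
WANG_STRONG_DIMENSIONS = {
    "intrinsic": ["accuracy", "objectivity", "believability", "reputation"],
    "contextual": ["relevancy", "value-added", "timeliness",
                   "completeness", "quantity"],
    "representational": ["interpretability", "format",
                         "coherence", "compatibility"],
    "accessibility": ["accessibility", "access security"],
}

def quality_profile(ratings):
    """Average the rated dimensions (each in [0, 1]) within each category.

    Categories with no rated dimensions come back as None rather than a
    fabricated score.
    """
    profile = {}
    for category, dims in WANG_STRONG_DIMENSIONS.items():
        rated = [ratings[d] for d in dims if d in ratings]
        profile[category] = sum(rated) / len(rated) if rated else None
    return profile
```

Such a per-category profile mirrors how a human reviewer might score a page; a search engine, as the text notes, must approximate these judgments indirectly.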

Centrality of a website

Prominent actors have extensive and living relationships with other (prominent) actors. This makes them more visible and their content more relevant, interlinked and useful.[6] Centrality, from a graph-theoretical perspective, describes undirected relationships, making no distinction between receiving and sending information. From this point of view, it includes the inbound links considered in the definition of "prestige", complemented by outgoing links. Another difference between prestige and centrality is that prestige is measured for a complete website or an author, whereas centrality can be considered at a more granular level, such as an individual blog post. Search engines look at various factors to judge the quality of outgoing links, i.e., at link centrality, which describes the quality, quantity and relevance of outgoing links and the prestige of their destinations. They also look at the frequency of new content publication ("freshness of information") to verify that the website is still an active player in the community.[3]
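The distinction drawn above can be made concrete with degree centrality, the simplest graph-theoretic centrality measure: unlike the prestige computation, a link counts for both of its endpoints, and the nodes can be individual pages rather than whole sites. The page names are hypothetical.

```python
from collections import defaultdict

def degree_centrality(edges):
    """Degree centrality on an undirected view of a link graph.

    edges: iterable of (source_page, target_page) links. Each link raises
    the degree of both endpoints - inbound and outbound are not
    distinguished, matching the graph-theoretic notion described above.
    """
    degree = defaultdict(int)
    for src, dst in edges:
        degree[src] += 1
        degree[dst] += 1
    n = len(degree)
    # normalise by the maximum possible degree, n - 1
    return {page: d / (n - 1) for page, d in degree.items()}

# A hub page linked to three others is maximally central here,
# regardless of which direction the links point.
links = [("hub/post-1", "a/page"), ("hub/post-1", "b/page"),
         ("c/page", "hub/post-1")]
centrality = degree_centrality(links)
```

Because direction is ignored, "hub/post-1" scores the same whether it sends or receives those three links; a prestige measure would score the two cases very differently.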

Competitive situation around a subject

The domain authority that a website attains is not the only factor which defines its positioning in the SERPs of search engines. The second important factor is the competitiveness of a specific sector. Subjects like SEO are very competitive. A website needs to outperform the prestige of competing websites to attain domain authority. This prestige, relative to other websites, can be defined as “relative domain authority.”[citation needed]

References

  1. ^ Ntoulas, Alexandros; Najork, Marc; Manasse, Mark; Fetterly, Dennis (May 23–26, 2006). "Detecting Spam Web Pages through Content Analysis" (PDF). International World Wide Web Conference. WWW 2006.
  2. ^ Brin, Sergey; Page, Larry (January 29, 1998). "The PageRank Citation Ranking: Bringing Order to the Web" (PDF). Stanford University InfoLab Publication Server.
  3. ^ a b c d Scholten, Ulrich (Nov 29, 2015). "What Is Domain Authority and How Do I Build It?". VentureSkies.
  4. ^ Zilincan, Jakub; Kryvinska, Natalia (May 28, 2015). "Improving Rank of a Website in Search Results – Experimental Approach". International Conference at Brno University of Technology: Perspectives of Business and Entrepreneurship Development - System Engineering Track. 15.
  5. ^ Orduna-Malea, Enrique; Aytac, Selenay (May 9, 2015). "Revealing the online network between university and industry: the case of Turkey". Scientometrics. 105 (3): 1849–1866. arXiv:1506.03012. doi:10.1007/s11192-015-1596-4.
  6. ^ a b Wasserman, Stanley; Faust, Katherine (1994). Social Network Analysis: Methods and Applications (Structural Analysis in the Social Sciences). New York, USA: Cambridge University Press. ISBN 0-521-38707-8.
  7. ^ Wang, Richard Y.; Strong, Diane M. (October 26, 2013). "Beyond Accuracy: What Data Quality Means to Data Consumers". Journal of Management Information Systems. 12 (4): 5–33. JSTOR 40398176.