Blog scraping


Blog scraping is the process of scanning through a large number of blogs, usually through the use of automated software, searching for and copying content. The software and the individuals who run the software are sometimes referred to as blog scrapers.
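The automated scanning-and-copying step described above can be sketched in a few lines. The snippet below is a minimal, hypothetical illustration using only Python's standard-library HTML parser; the `<article>` tag and the sample page are assumptions for demonstration, and a real scraper would fetch many pages over HTTP rather than parse a hard-coded string.

```python
from html.parser import HTMLParser

class PostExtractor(HTMLParser):
    """Collects the text found inside <article> tags, a common
    container for a blog post's content (an assumption here)."""

    def __init__(self):
        super().__init__()
        self.in_article = False
        self.chunks = []

    def handle_starttag(self, tag, attrs):
        if tag == "article":
            self.in_article = True

    def handle_endtag(self, tag):
        if tag == "article":
            self.in_article = False

    def handle_data(self, data):
        # Keep only non-whitespace text that appears inside the article.
        if self.in_article and data.strip():
            self.chunks.append(data.strip())

# A hypothetical fetched page; a real scraper would download this over HTTP.
page = ("<html><body><nav>Home</nav>"
        "<article><h1>My Post</h1><p>Original content.</p></article>"
        "</body></html>")

parser = PostExtractor()
parser.feed(page)
scraped = " ".join(parser.chunks)
# scraped now holds the post text, with the surrounding page chrome discarded
```

Note that the navigation text ("Home") is discarded: a scraper typically targets only the post body, not the page's surrounding layout.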

Blog scraping is the copying of a blog, or of blog content, that is not owned by the individual initiating the scraping process. If the material is copyrighted, copying it is considered copyright infringement unless a license relaxes the copyright or the country has fair-use or private-use laws. The scraped content is often reused on spam blogs, or splogs; sites built from such copied content are called scraper sites.


A blog scraper who gathers copyrighted material may be in violation of the law, depending on the case, the use of the data, and the country. Blog scraping can create problems for the individual or business that owns the blog, and it is particularly worrisome for business owners and business bloggers. A scraper can copy an entire post from an independent or business blog, in which case the duplicated content includes the author's tag and a link back to the author's site (if that link appears in the author's tag). Most blog scrapers, however, copy only the portion of the content that is keyword-relevant to their splog's topic. This increases the keyword relevancy of the scraper's site, and, because the entire post is not scraped, the author's outbound links are eliminated, so the scraper's search engine ranking is not reduced.
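The partial-copying behavior described above, keeping only keyword-relevant sentences and thereby dropping the author's outbound links, can be sketched as follows. This is a hypothetical illustration; the sample post, the keyword set, and the sentence-splitting heuristic are all assumptions, not a description of any particular scraper.

```python
def keyword_sentences(text, keywords):
    """Return only the sentences that contain at least one target keyword.

    A crude model of selective scraping: sentences without the splog's
    keywords (including any that carry the author's outbound links) are
    simply dropped.
    """
    sentences = [s.strip() for s in text.split(".") if s.strip()]
    return [s for s in sentences
            if any(word.lower().strip(",;") in keywords
                   for word in s.split())]

# Hypothetical scraped post and splog keyword set.
post = ("Widgets are great. Visit my other site for more. "
        "Blue widgets sell best in spring.")
keywords = {"widgets", "widget"}

excerpt = keyword_sentences(post, keywords)
# Only the two widget-related sentences survive; the sentence pointing
# readers to the author's other site is dropped.
```

The design point this illustrates is the one made in the text: by filtering rather than copying wholesale, the scraper raises the keyword density of its own pages while discarding links that would otherwise pass attention (and ranking signals) back to the original author.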

Additionally, scraped content can appear on virtually any type of splog or RSS-fed spam site. An unsuspecting individual could therefore find their creative or copyrighted material copied onto a site promoting pornography or similar content that may be offensive to the original author and their audience, which can damage the original author's reputation.

