Spam in blogs

From Wikipedia, the free encyclopedia
{{Dablink|For blogs that are built only for spamming, see [[Spam blog]].}}
{{Redirect|Spam blacklist|Wikipedia's internal spam-blocking mechanism|Wikipedia:Spam blacklist}}

'''Spam in blogs''' (also called simply '''blog spam''' or '''comment spam''') is a form of [[spamdexing]]. (Note that ''blogspam'' has another, more common meaning: a blogger's post that adds no value of its own and exists only to be submitted to other sites.) It is done by automatically posting random comments or promoting commercial services to [[weblog|blogs]], [[wiki]]s, [[guestbook]]s, and other publicly accessible online [[discussion board]]s. Any web application that accepts and displays [[hyperlinks]] submitted by visitors may be a target.

Adding links that point to the spammer's web site artificially increases the site's search engine ranking{{Fact|date=May 2012}}. An increased ranking often results in the spammer's commercial site being listed ahead of other sites for certain searches, increasing the number of potential visitors and paying customers.

==History==
This type of spam originally appeared in internet [[guestbook]]s, where spammers would repeatedly fill a guestbook with links to their own site, and with no relevant comment, in order to increase search engine rankings. When an actual comment is given, it is often just "cool page", "nice website", or keywords from the spammed link.

In 2003, spammers began to take advantage of the open nature of comments in the [[weblog|blogging]] software like [[Movable Type]] by repeatedly placing comments to various blog posts that provided nothing more than a link to the spammer's commercial web site. Jay Allen created a free plugin, called MT-BlackList,<ref>{{cite web|url=http://www.jayallen.org/projects/mt-blacklist/ |title=MT-Blacklist - A Movable Type Anti-spam Plugin |publisher=Jayallen.org |date= |accessdate=2012-01-09}}</ref> for the Movable Type weblog tool (versions prior to 3.2) that attempted to alleviate this problem. Many blogging packages now have methods of preventing or reducing the effect of blog spam, although spammers have developed tools to circumvent them. Many spammers use special blog spamming tools like [[trackback submitter]] to bypass comment spam protection on popular blogging systems like Movable Type, Wordpress, and others.

Other spam comments may consist of text stolen from other comments, generic praise such as "nice article", remarks about imaginary friends, passages lifted from books, unfinished sentences, nonsense words, or the same comment posted repeatedly.

==Possible solutions==
===Disallowing multiple consecutive submissions===
It is rare for a user to reply to their own comment, yet spammers typically do.<ref name="blogx.co.uk">{{cite web|url=http://blogx.co.uk/Comments.asp?Entry=757 |title=Matthew1471's ASP BlogX - 5 things you probably did not know about the spammers who spam your website |publisher=Blogx.co.uk |date=2008-08-14 |accessdate=2012-01-09}}</ref> Rejecting a reply whose IP address matches that of the comment it replies to will significantly reduce flooding. This, however, proves problematic when multiple users behind the same proxy wish to comment on the same entry. Blog-spamming software may get around this restriction by faking IP addresses, posting similar spam from many different addresses.<ref name="http://iisforinclude.org/Romanasblog/?p=250">[http://iisforinclude.org/Romanasblog/?p=250 IIsForInclude.org - Blog Spam gets ramped up.]</ref>
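The same-IP reply check described above can be sketched as follows. This is a minimal illustration, not code from any blogging package; the <code>Comment</code> structure and its field names are assumed for the example.

```python
# Sketch: reject a comment when its IP address matches the IP of the
# comment it replies to. The Comment type and field names are hypothetical.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Comment:
    id: int
    ip: str
    parent_id: Optional[int] = None  # None for top-level comments

def is_self_reply(new: Comment, existing: dict) -> bool:
    """True if the new comment replies to a comment from the same IP."""
    if new.parent_id is None:
        return False
    parent = existing.get(new.parent_id)
    return parent is not None and parent.ip == new.ip

# Usage: a reply from the same address as its parent is flagged
existing = {1: Comment(id=1, ip="203.0.113.5")}
print(is_self_reply(Comment(id=2, ip="203.0.113.5", parent_id=1), existing))  # True
```

As the article notes, this heuristic misfires for distinct users sharing one proxy address, so it is best treated as one signal among several.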

===Blocking by keyword===
Blocking specific words from posts is one of the simplest and most effective ways to reduce spam. Much spam can be blocked simply by banning names of popular pharmaceuticals and casino games.

This is a good long-term solution, because spammers gain little by obfuscating keywords as "vi@gra" or similar: to be effective, the keywords must remain readable and indexable by search engine bots.

Unsophisticated implementations of this may lead to examples of the [[Scunthorpe Problem]].
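A minimal keyword filter might look like the following sketch. The word list is illustrative only; note that matching whole words rather than substrings avoids the simplest form of the Scunthorpe problem mentioned above.

```python
# Minimal keyword-blocking sketch. The blocked-word list is illustrative,
# not drawn from any particular blogging package.
import re

BLOCKED_KEYWORDS = {"viagra", "casino", "poker"}

def contains_blocked_keyword(text: str) -> bool:
    # Tokenize into whole words, so "Scunthorpe" does not match a
    # substring the way a naive `in` check would.
    words = re.findall(r"[a-z]+", text.lower())
    return any(word in BLOCKED_KEYWORDS for word in words)

print(contains_blocked_keyword("Cheap VIAGRA here!"))     # True
print(contains_blocked_keyword("Great article, thanks"))  # False
```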

===nofollow===<!-- This section is linked from [[PageRank]] -->
{{Main|nofollow}}
Google announced in early 2005 that hyperlinks with <code>rel="nofollow"</code> attribute<ref>{{cite web|url=http://www.w3.org/TR/REC-html40/struct/links.html#adef-rel |title=Links in HTML documents |publisher=W3.org |date= |accessdate=2012-01-09}}</ref> would not be crawled or influence the link target's ranking in the search engine's index. The Yahoo and MSN search engines also respect this tag.<ref>{{cite web|url=http://googleblog.blogspot.com/2005/01/preventing-comment-spam.html |title=Official Google Blog: Preventing comment spam |publisher=Googleblog.blogspot.com |date=2005-01-18 |accessdate=2012-01-09}}</ref>

Using <code>rel="nofollow"</code> is a much easier solution that makes the improvised techniques above largely irrelevant. Most weblog software now marks reader-submitted links this way by default (often with no option to disable it without code modification). More sophisticated server software could omit the nofollow for links submitted by [[trust management|trusted users]], such as those registered for a long time, on a [[whitelist]], or with high [[karma (Slashdot)|karma]]. Some server software adds <code>rel="nofollow"</code> to pages that have been recently edited but omits it from stable pages, on the theory that stable pages will have had offending links removed by human editors.
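Marking reader-submitted links might be sketched as a rewrite pass over the comment HTML before rendering. This regex-based version handles only simple <code>&lt;a&gt;</code> tags; a production system would use a proper HTML sanitizer rather than regular expressions.

```python
# Sketch: add rel="nofollow" to anchor tags in reader-submitted HTML.
# Regex HTML rewriting is fragile; this is an illustration only.
import re

def add_nofollow(html: str) -> str:
    # Insert rel="nofollow" into <a ...> tags that lack a rel attribute.
    return re.sub(r'<a\s+(?![^>]*\brel=)', '<a rel="nofollow" ', html)

comment = 'Nice post! <a href="http://example.com/">my site</a>'
print(add_nofollow(comment))
# Nice post! <a rel="nofollow" href="http://example.com/">my site</a>
```

A trusted-user exemption, as described above, would simply skip this pass for whitelisted or high-karma posters.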

Some weblog authors object to the use of <code>rel="nofollow"</code>, arguing, for example,<ref>Michael Hampton (May 23, 2005), [http://www.homelandstupidity.us/2005/05/23/nofollow-revisited/ Nofollow revisited], ''HomelandStupidity.us'', retrieved November 2, 2007</ref> that
* Link spammers will continue to spam everyone to reach the sites that do not use <code>rel="nofollow"</code>.
* Link spammers will continue to place links for surfers to click, even if those links are ignored by search engines.
* Google is advocating the use of <code>rel="nofollow"</code> in order to reduce the effect of heavy inter-blog linking on page ranking.
* Google is advocating the use of <code>rel="nofollow"</code> only to minimize its own filtering effort, deflecting attention from the fact that the attribute might better have been called <code>rel="nopagerank"</code>.
* Nofollow may reduce the value of legitimate comments.<ref>{{cite web|author=Posted by jzawodn at May 30, 2006 06:59 AM |url=http://jeremy.zawodny.com/blog/archives/006800.html |title=Nofollow No Good? (by Jeremy Zawodny) |publisher=Jeremy.zawodny.com |date=2006-05-30 |accessdate=2012-01-09}}</ref>

Other websites like [[Slashdot]], with high user participation, use improvised nofollow implementations like adding <code>rel="nofollow"</code> only for potentially misbehaving users. Potential spammers posting as users can be determined through various heuristics like age of registered account and other factors. Slashdot also uses the poster's karma as a determinant in attaching a nofollow tag to user submitted links.

<code>rel="nofollow"</code> has come to be regarded as a [[microformat]].

===Validation (reverse Turing test)===
A method to block automated spam comments is requiring a [[data validation|validation]] prior to publishing the contents of the reply form. The goal is to verify that the form is being submitted by a real human being and not by a spam tool and has therefore been described as a [[reverse Turing test]]. The test should be of such a nature that a human being can easily pass and an automated tool would most likely fail.

Many forms on websites take advantage of the [[CAPTCHA]] technique, displaying a combination of numbers and letters embedded in an image which must be entered literally into the reply form to pass the test. In order to keep out spam tools with built-in [[text recognition]] the characters in the images are customarily misaligned, distorted, and noisy. A drawback of many older CAPTCHAs is that passwords are usually [[case-sensitive]] while the corresponding images often don't allow a distinction of capital and small letters. This should be taken into account when devising a list of CAPTCHAs. Such systems can also prove problematic to blind people who rely on [[screen readers]]. Some more recent systems allow for this by providing an audio version of the characters. A simple alternative to CAPTCHAs is the validation in the form of a [[password]] question, providing a hint to human visitors that the password is the answer to a simple question like "The Earth revolves around the... [Sun]".
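The password-question alternative mentioned above can be sketched in a few lines. The question and answer pairs here are invented for illustration; note the case-insensitive comparison, which sidesteps the case-sensitivity drawback that affects image CAPTCHAs.

```python
# Sketch of a simple question-based validation, a lightweight CAPTCHA
# alternative. The question/answer pairs are illustrative.
QUESTIONS = {
    "The Earth revolves around the...": "sun",
    "How many legs does a cat have?": "4",
}

def passes_validation(question: str, answer: str) -> bool:
    expected = QUESTIONS.get(question)
    # Case-insensitive, whitespace-tolerant comparison avoids penalizing
    # humans for typing "Sun" instead of "sun".
    return expected is not None and answer.strip().lower() == expected

print(passes_validation("The Earth revolves around the...", "Sun"))  # True
```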

One drawback to be taken into consideration is that any validation required in the form of an additional form field may become a nuisance especially to regular posters. Many bloggers and guestbook owners notice a significant decrease in the number of comments once such a validation is in place.{{Citation needed|date=February 2010}}

===Disallowing links in posts===
There is negligible gain from spam that does not contain links, so currently all spam posts contain an excessive number of links. It is reasonably safe to require a Turing test only for posts that contain links, and to let all other posts through. While this is highly effective, spammers do frequently send gibberish posts (such as "ajliabisadf ljibia aeriqoj") to test the spam filter. These gibberish posts will not be labeled as spam; they do the spammer no good, but they still clog up comment sections.
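The gating logic described above amounts to a single check before deciding whether to demand a validation step. A sketch, with the link-detection patterns chosen for illustration:

```python
# Sketch: require a CAPTCHA or similar test only for posts containing
# links, since linkless spam gains the spammer nothing.
import re

# Match bare URLs, HTML anchors, and BBCode-style [url] tags (illustrative).
LINK_PATTERN = re.compile(r'https?://|<a\s|\[url', re.IGNORECASE)

def needs_human_check(post: str) -> bool:
    return bool(LINK_PATTERN.search(post))

print(needs_human_check("Check out http://spam.example/"))  # True
print(needs_human_check("ajliabisadf ljibia aeriqoj"))      # False
```

As the text notes, gibberish probe posts pass this check; they are harmless to rankings but still need ordinary moderation.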

Garbage submissions may also come from unsophisticated ("level 0") spambots, which do not parse the target's HTML form fields first but instead send generic POST requests against pages. A generic variable such as "content" or "forum_post" may happen to be set and accepted by the blog or forum software, while the bot's "uri" or other incorrect URL field name matches nothing, so no spam link is saved.

===Redirects===
Instead of displaying a direct hyperlink submitted by a visitor, a web application can display a link to a script on its own website that redirects to the correct [[Uniform Resource Locator|URL]]. This will not prevent all spam, since spammers do not always check for link redirection, but it effectively prevents them from gaining [[PageRank]], much as <code>rel="nofollow"</code> does. An added benefit is that the redirection script can count how many people visit external URLs, although this increases the load on the site.

Redirects should be [[server-side]] to avoid accessibility issues related to client-side redirects. This can be done via the [[.htaccess|.htaccess file]] in [[apache server|Apache]].
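A server-side redirect endpoint of this kind can be sketched with the Python standard library. The <code>/out</code> path and <code>url</code> parameter are assumptions for the example; the handler issues an HTTP 302 after counting the click, as described above.

```python
# Sketch: outbound links point at /out?url=..., and the server answers
# with an HTTP 302 server-side redirect. Paths and storage are assumed.
from collections import Counter
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import parse_qs, urlparse

click_counts = Counter()  # side benefit: outbound click tracking

def redirect_target(path: str):
    """Extract the external URL from a /out?url=... request path, or None."""
    parsed = urlparse(path)
    if parsed.path != "/out":
        return None
    target = parse_qs(parsed.query).get("url", [""])[0]
    # Only redirect to http(s) URLs, never javascript: or other schemes.
    return target if target.startswith(("http://", "https://")) else None

class RedirectHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        target = redirect_target(self.path)
        if target is None:
            self.send_error(404)
            return
        click_counts[target] += 1
        self.send_response(302)               # server-side redirect
        self.send_header("Location", target)
        self.end_headers()

# To run: HTTPServer(("", 8000), RedirectHandler).serve_forever()
```

Because the redirect happens in the HTTP response rather than in client-side script, it avoids the accessibility issues of client-side redirects noted above.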

Another way of preventing [[PageRank]] leakage is to make use of public [[URL redirection|redirection]] or [[HTTP referer|dereferral]] services such as [[TinyURL]]. For example,

<nowiki><a href="http://my-own.net/alias_of_target" rel="nofollow" >Link</a></nowiki>

where 'alias_of_target' is the alias of the target address.

Note, however, that this prevents users from viewing the target of a link before clicking it, interfering with their ability to avoid websites they know to be spam. [[TinyURL]] now offers a preview feature to help avoid this situation.

===Distributed approaches===
Distributed approaches to link spam are very new. One shortcoming of per-site link-spam filters is that each site typically receives only one link from any given spam campaign's domain. If the spammer also varies IP addresses, little or no distinguishable pattern is left on the vandalized site. The pattern is, however, visible across the thousands of sites that were hit in quick succession with the same links.

A distributed approach, like the free [[LinkSleeve]]<ref>{{cite web|url=http://www.linksleeve.org/ |title=SLV : Spam Link Verification |publisher=LinkSleeve |date= |accessdate=2012-01-09}}</ref> uses [[XML-RPC]] to communicate between the various server applications (such as blogs, guestbooks, forums, and wikis) and the filter server, in this case LinkSleeve. The posted data is stripped of urls and each url is checked against recently submitted urls across the web. If a threshold is exceeded, a "reject" response is returned, thus deleting the comment, message, or posting. Otherwise, an "accept" message is sent.
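The core of such a filter server is URL-frequency counting across all reporting sites. The following sketch mirrors that idea only; it is not LinkSleeve's actual API, and the threshold value is invented for illustration.

```python
# Sketch of distributed filter-server logic: count how often each URL has
# been submitted across participating sites, and reject posts whose links
# exceed a threshold. This mirrors the idea, not LinkSleeve's real API.
import re
from collections import Counter

recent_urls = Counter()  # in reality, aggregated across many sites
THRESHOLD = 10           # illustrative cutoff

def check_post(text: str) -> str:
    urls = re.findall(r'https?://\S+', text)
    for url in urls:
        recent_urls[url] += 1
    if any(recent_urls[u] > THRESHOLD for u in urls):
        return "reject"
    return "accept"

# Usage: the same link submitted across many "sites" eventually trips it
for _ in range(11):
    verdict = check_post("buy now http://spam.example/pills")
print(verdict)  # "reject" once the URL has been seen more than 10 times
```

In a real deployment the counter lives on the central server, and client sites submit stripped-out URLs over XML-RPC and act on the accept/reject response.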

A more robust distributed approach is [[Akismet]], which works similarly to LinkSleeve but uses API keys to assign trust to nodes and has wider distribution as a result of being bundled with the 2.0 release of [[WordPress]].<ref>{{cite web|url=http://wordpress.org/development/2005/12/wp2/ |title=WordPress › Blog » WordPress 2 |publisher=Wordpress.org |date= |accessdate=2012-01-09}}</ref> Akismet's operators claim over 140,000 blogs contribute to the system. [[Akismet]] libraries have been implemented for Java, Python, Ruby, and PHP, but its adoption may be hindered by its commercial-use restrictions. In 2008, [[Six Apart]] therefore released a [[beta version]] of its [[TypePad AntiSpam]] software, which is compatible with Akismet but free of the latter's commercial-use restrictions.

[[Project Honey Pot]] has also begun tracking comment spammers. The Project uses its vast network of thousands of traps installed in over one hundred countries around the world in order to watch what comment spamming web robots are posting to blogs and forums. Data is then published on the top countries for comment spamming, as well as the top keywords and URLs being promoted. The Project's data is then made available to block known comment spammers through '''{{sic|http:BL|hide=yes}}'''. Various plugins have been developed to take advantage of the http:BL API.

===Application-specific anti-spam methods===
Particularly popular software products such as [[Movable Type]] and [[MediaWiki]] have developed their own custom anti-spam measures, as spammers focus more attention on targeting those platforms. Whitelists and blacklists that prevent certain IPs from posting, or that prevent people from posting content matching certain filters, are common defenses{{Citation needed|date=December 2010}}. More advanced [[access control list]]s require various forms of validation before users can contribute anything resembling linkspam.

The goal in every case is to allow good users to continue to add links to their comments, as that is considered by some to be a valuable aspect of any comments section.

====RSS feed monitoring====
Some wikis provide an RSS feed of recent changes or comments. Adding that feed to a news reader, together with a saved search for common spam terms (usually [[viagra]] and other drug names), allows offending spam to be identified and removed quickly.

====Response tokens====
Another filter available to webmasters is a hidden form field containing a [[session token]] that uniquely identifies that instance of the comment form. The primary protection afforded by this mechanism is a one-to-one correspondence between each request to fetch the form and each request to submit it. This is impossible to enforce with IP addresses alone, since addresses are shared by users behind a proxy, firewall, or NAT (e.g., multiple users in the same internet cafe, library, senior citizens' center, managed-care home, or club) and may change frequently, even between related requests (e.g., AOL and other enterprise-scale proxies, or anonymizing services such as [[Tor (anonymity network)|Tor]]). When the form is eventually submitted, the server can use the token to validate the post. If the token is unrecognized, the server can send back the form along with a new token, requiring resubmission. A duplicate token with duplicate content can safely be silently discarded. Additionally, spammers may not actually load the comment form for an entry; inserting a unique code into each served form and verifying it on receipt of the HTTP POST significantly increases the number of steps required to spam multiple entries.<ref name="blogx.co.uk"/>
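The token mechanism can be sketched as follows. Storage is an in-memory set here for illustration; a real site would keep issued tokens in its session store, and the function names are assumptions of this example.

```python
# Sketch of the session-token mechanism: each rendered comment form gets a
# single-use token, and submissions without a live token are bounced.
import secrets

issued_tokens = set()  # a real site would use its session store

def new_form_token() -> str:
    """Called when the comment form is rendered; embed in a hidden field."""
    token = secrets.token_hex(16)
    issued_tokens.add(token)
    return token

def validate_submission(token: str) -> bool:
    """Called on POST; each token is valid exactly once."""
    if token in issued_tokens:
        issued_tokens.discard(token)   # enforce one GET per POST
        return True
    return False                       # unknown or replayed token

t = new_form_token()
print(validate_submission(t))   # True  (first use)
print(validate_submission(t))   # False (replayed token is rejected)
```

This enforces exactly the one-to-one GET/POST correspondence described above: a bot that never loads the form has no valid token to return.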

Given a valid token, the server can then flag as suspicious, for example, postings that use different IP addresses for loading and submitting the comment form, many postings all using the same IP address, or postings that took unusually short or long periods of time to compose. These can then be subjected to additional scrutiny, such as challenging the poster with a [[captcha]], queuing the post for human review, or rejecting it outright.

This method is effective against spammers who [[IP address spoofing|spoof their IP addresses]] in an attempt to conceal their identities or to appear to be many more distinct users than the number of IP addresses under their control, since the token can only be returned if it was received by the spammer in the first place. It has been suggested that flagging posts based on changing IP addresses is effective against spammers abusing the distributed anonymous proxy [[Tor (anonymity network)|Tor]].<ref name="blogx.co.uk"/>

===Ajax===

Some blog software such as [[Typo (content management system)|Typo]] allow the blog administrator to allow only comments submitted via [[Ajax (programming)|Ajax]] [[XMLHttpRequest]]s, and discard regular form POST requests. This causes accessibility problems typical to Ajax-only applications.

Although this technique has largely prevented spam so far, it is a form of [[security by obscurity]] and can easily be defeated if it becomes popular enough, since it is essentially just a different encoding of the same data.

==See also==
* [[Adversarial information retrieval]]
* [[Social networking spam]]

==References==
{{reflist}}

==External links==
* [http://www.projecthoneypot.org/list_of_ips.php?t=p Project Honeypot Directory of Content Spammers]
* [http://sixapart.com/pronet/comment_spam.html Six Apart Comment Spam Guide], fairly broad overview from [[Movable Type]]'s authors.
* Gilad Mishne, David Carmel and Ronny Lempel: [http://airweb.cse.lehigh.edu/2005/mishne.pdf Blocking Blog Spam with Language Model Disagreement], PDF. From the First International Workshop on Adversarial Information Retrieval (AIRWeb'05) Chiba, Japan, 2005.
{{spamming}}
{{Blog topics}}

{{DEFAULTSORT:Spam In Blogs}}
[[Category:Spamming]]
[[Category:Black hat search engine optimization]]

[[ar:إزعاج في مدونات]]
[[de:Spam#Index-, Link-, Blog-, Social-Bookmark- und Wikispam]]
[[pl:Link spam]]
[[sv:Kommentarspam]]
[[vi:Blog spam]]

Revision as of 19:09, 29 May 2012