Spam in blogs

From Wikipedia, the free encyclopedia
Revision as of 15:10, 17 August 2007

Spam in blogs (also called simply blog spam or comment spam) is a form of spamdexing. It is done by automatically posting random comments or promoting commercial services to blogs, wikis, guestbooks, or other publicly accessible online discussion boards. Any web application that accepts and displays hyperlinks submitted by visitors may be a target.

Adding links that point to the spammer's web site artificially increases the site's search engine ranking. An increased ranking often results in the spammer's commercial site being listed ahead of other sites for certain searches, increasing the number of potential visitors and paying customers.

History

This type of spam originally appeared in Internet guestbooks, where spammers repeatedly filled a guestbook with links to their own site, with no relevant comment, to increase search engine rankings. When an actual comment is given, it is often just "cool page", "nice website", or keywords of the spammed link.

In 2003, spammers began to take advantage of the open nature of comments in blogging software such as Movable Type by repeatedly placing comments on various blog posts that provided nothing more than a link to the spammer's commercial web site. Jay Allen created a free plugin, called MT-BlackList,[1] for the Movable Type weblog tool (versions prior to 3.2) that attempted to alleviate this problem. Many current blogging packages now have methods of preventing or reducing the effect of blog spam, but spammers have grown more sophisticated as well: many use specialized blog spamming tools such as Trackback Submitter to bypass comment spam protection on popular blogging systems like Movable Type, WordPress, and others.

Possible solutions

Blocking by keyword

This is the simplest form of blocking and yields good results: because comment spam is aimed at search engine crawlers rather than human readers, the spammed keywords must remain machine-readable. A large share of spam can be blocked simply by banning the names of popular pharmaceuticals and casino games.

The main problem with this approach is that spammers constantly find new ways to spell or hawk their goods, so the keyword list requires constant updating. For example, blocking "viagra" would cut out a large share of spam, but spammers respond with variants such as "vi@gra", "v1agr@", or "vigra". There is also a practically unbounded range of goods that spammers try to sell, making such a list difficult to keep current.
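A keyword filter along these lines can be sketched in a few lines of Python. The blocklist and the substitution table here are illustrative assumptions, not a recommended production list; normalizing common character substitutions before matching catches simple obfuscations like "v1agr@", though deliberate misspellings such as "vigra" still slip through, as noted above.

```python
import re

# Hypothetical blocklist; real deployments maintain a much longer,
# frequently updated list.
BLOCKED_TERMS = {"viagra", "casino", "poker"}

# Map common character substitutions ("vi@gra", "v1agr@") back to letters
# before matching, so trivial obfuscation does not bypass the filter.
LEET_MAP = str.maketrans({"@": "a", "1": "i", "0": "o", "$": "s", "3": "e"})

def is_keyword_spam(comment: str) -> bool:
    normalized = comment.lower().translate(LEET_MAP)
    # Collapse punctuation inserted between letters ("v-i-a-g-r-a").
    collapsed = re.sub(r"[^a-z0-9]", "", normalized)
    return any(term in collapsed for term in BLOCKED_TERMS)
```

Note that this only blunts the arms race; misspellings and newly spammed product names still require updating the list by hand.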

rel="nofollow"

In early 2005, Google announced that hyperlinks with the rel="nofollow" attribute[2] would not influence the link target's ranking in the search engine's index. The Yahoo and MSN search engines also respect this attribute.[3]

nofollow is a misnomer in this case since it actually tells a search engine "Don't score this link" rather than "Don't follow this link." This differs from the meaning of nofollow as used within a robots meta tag, which does tell a search engine: "Do not follow any of the hyperlinks in the body of this document."

Using rel="nofollow" is a much easier solution and makes improvised techniques such as keyword blocking less necessary. Most weblog software now marks reader-submitted links this way by default (often with no option to disable it short of modifying the code). More sophisticated server software can omit the nofollow for links submitted by trusted users, such as those registered for a long time, on a whitelist, or with high karma. Some server software adds rel="nofollow" to pages that have been recently edited but omits it from stable pages, on the theory that stable pages will have had offending links removed by human editors.

Some weblog authors object to the use of rel="nofollow", arguing, for example,[4] that

  • Link spammers will continue to spam everyone to reach the sites that do not use rel="nofollow".
  • Link spammers will continue to place links for clicking (by surfers), even if those links are ignored by search engines.
  • Google is advocating the use of rel="nofollow" in order to reduce the effect of heavy inter-blog linking on page ranking.
  • Google is advocating the use of rel="nofollow" only to minimize its own filtering efforts, and the attribute would more accurately have been named rel="nopagerank".

Jeremy Zawodny has stated on his blog [5] that

Worse, nofollow has another, more pernicious effect, which is that it reduces the value of legitimate comments.

Other websites with high user participation, such as Slashdot, use selective nofollow implementations, adding rel="nofollow" only for potentially misbehaving users. Potential spammers posing as users can be identified through heuristics such as the age of the registered account. Slashdot also uses the poster's karma as a determinant in attaching a nofollow tag to user-submitted links.

rel="nofollow" has come to be regarded as a microformat.

Validation (reverse Turing test)

A method to block automated spam comments is requiring a validation prior to publishing the contents of the reply form. The goal is to verify that the form is being submitted by a real human being and not by a spam tool, and has therefore been described as a reverse Turing test. The test should be of such a nature that a human being can easily pass, whereas an automated tool would most likely fail.

Many forms on websites take advantage of the CAPTCHA technique, displaying a combination of numbers and letters embedded in an image, which must be entered literally into the reply form to pass the test. To keep out spam tools with built-in text recognition, the characters in the images are customarily misaligned, distorted, and noisy. A drawback of many older CAPTCHAs is that the expected answer is case-sensitive, while the distorted image often makes it hard to distinguish capital from small letters; this should be taken into account when devising the set of challenges.

A simple alternative to CAPTCHAs is the validation in the form of a password question, providing a hint to human visitors that the password is the answer to a simple question like "The Earth revolves around the... [Sun]".

One drawback to be taken into consideration is that any validation required in the form of an additional form field may become a nuisance especially to regular posters. Bloggers and guestbook owners may notice a significant decrease in the number of comments once such a validation is in place.

There is negligible gain from spam that does not contain links, so virtually all spam posts contain an excessive number of links. It is therefore safe to require passing a Turing test only when a post contains links, and to let all other posts through. While this is highly effective, spammers do frequently send gibberish posts (such as "ajliabisadf ljibia aeriqoj") to test the spam filter. These gibberish posts are not labeled as spam; they do the spammer no good, but they still clog up comment sections.
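The link-gating decision described above can be sketched as a single predicate; the URL pattern here is a simplified assumption (real link detection also has to catch bare domains, BBCode, and similar markup):

```python
import re

# Simplified link pattern: http(s) URLs or bare "www." hostnames.
URL_RE = re.compile(r"https?://|\bwww\.", re.IGNORECASE)

def needs_turing_test(post: str) -> bool:
    """Require a CAPTCHA or password question only when the post contains
    a link; linkless posts (even gibberish probe posts) are let through,
    since they earn the spammer nothing."""
    return bool(URL_RE.search(post))
```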

Garbage submissions may, however, also come from "level 0" spambots, which do not parse the target's HTML form fields first but simply send generic POST requests at pages. In that case a "content" or "forum_post" POST variable may be set and received by the blog or forum software, while the spammer's link, sent under a wrong field name such as "uri", is not accepted and thus never saved as a spam link.

Redirects

Instead of displaying a direct hyperlink submitted by a visitor, a web application can display a link to a script on its own website that redirects to the intended URL. This will not prevent all spam, since spammers do not always check for link redirection, but it effectively prevents them from increasing their PageRank, much as rel="nofollow" does. An added benefit is that the redirection script can count how many people visit external URLs, although this increases the load on the site.

Redirects should be server-side to avoid accessibility issues related to client-side redirects. This can be done via the .htaccess file in Apache.
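As a sketch of such a server-side redirect endpoint, here is a minimal WSGI application in Python; the alias table, the query-parameter name, and the click counter are all illustrative assumptions. Looking aliases up in a server-side table, rather than redirecting to an arbitrary URL taken from the request, also avoids turning the script into an open redirector.

```python
from urllib.parse import parse_qs

ALLOWED_TARGETS = {"abc123": "http://example.com/"}  # hypothetical alias -> URL table
click_counts = {}  # alias -> number of visits, the counting benefit noted above

def redirect_app(environ, start_response):
    """Minimal WSGI redirect endpoint: a request like /go?to=<alias>
    issues an HTTP 302 to the stored target URL and counts the visit."""
    alias = parse_qs(environ.get("QUERY_STRING", "")).get("to", [""])[0]
    target = ALLOWED_TARGETS.get(alias)
    if target is None:
        start_response("404 Not Found", [("Content-Type", "text/plain")])
        return [b"unknown link"]
    click_counts[alias] = click_counts.get(alias, 0) + 1
    start_response("302 Found", [("Location", target)])
    return [b""]
```

Because the redirect is issued server-side (an HTTP 302 with a Location header), it avoids the accessibility problems of client-side redirects mentioned above.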

Another way of preventing PageRank leakage is to make use of public redirection services such as TinyURL or My-Own.Net. For example,

<a href="http://my-own.net/alias_of_target" rel="nofollow">Link</a>

where 'alias_of_target' is the alias of the target address.

Services such as POW7.com offer a public redirection without the need to configure an alias. An example of a link to http://wikipedia.org/ on POW7 would be:

<a href="http://pow7.com/pr/http://wikipedia.org/">http://wikipedia.org/</a>

Again, the issue with this method is that while it removes the benefit the spammer is seeking, the users of this method are still left with a very high volume of spam that they must clean up or leave behind.

Distributed approaches

Distributed filtering is a relatively new approach to addressing link spam. One of the shortcomings of link spam filters is that an individual site typically receives only one link from each domain running a spam campaign. If the spammer varies IP addresses, there is little to no distinguishable pattern left on the vandalized site. The pattern, however, is visible across the thousands of sites that were hit in quick succession with the same links.

A distributed approach, such as the free LinkSleeve,[6] uses XML-RPC to communicate between the various server applications (such as blogs, guestbooks, forums, and wikis) and the filter server, in this case LinkSleeve. The posted data is stripped of URLs, and each URL is checked against URLs recently submitted across the web. If a threshold is exceeded, a "reject" response is returned and the comment, message, or posting is discarded; otherwise, an "accept" message is sent.
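The core idea, stripping URLs from a post and rejecting those seen too often across the network, can be sketched as follows. This is not LinkSleeve's actual protocol: the in-memory counter stands in for the shared filter server's database (which a real deployment would query over XML-RPC), and the threshold value is an arbitrary assumption.

```python
import re
from collections import Counter

# Stand-in for the shared filter server's store; in a real distributed
# setup these counts would be aggregated across many participating sites.
recent_url_counts = Counter()
THRESHOLD = 5  # hypothetical: reject URLs seen more often than this recently

def check_post(post: str) -> str:
    """Return "accept" or "reject" for a submitted post, based on how often
    its URLs have recently been seen across the network."""
    urls = re.findall(r"https?://\S+", post)
    for url in urls:
        recent_url_counts[url] += 1
        if recent_url_counts[url] > THRESHOLD:
            return "reject"
    return "accept"
```

The strength of the approach is exactly what the paragraph above describes: a single site sees each spammed URL once, but the aggregated counter sees it thousands of times.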

A more robust distributed approach is Akismet, which works similarly to LinkSleeve but uses API keys to assign trust to nodes and has wider distribution as a result of being bundled with the 2.0 release of WordPress.[7] They claim over 140,000 blogs contributing to their system. Akismet libraries have been implemented for Java, Python, Ruby, and PHP, but its adoption may be hindered by the requirement of an API key and its commercial use restrictions. No such restrictions are in place for LinkSleeve.

Project Honey Pot has also begun tracking comment spammers. The Project uses its network of thousands of traps installed in over one hundred countries around the world to watch what comment-spamming web robots post to blogs and forums. Data is then published on the top countries for comment spamming, as well as the top keywords and URLs being promoted. The Project's data is made available for blocking known comment spammers through http:BL, and various plugins have been developed to take advantage of the http:BL API.

Application-specific anti-spam methods

Particularly popular software products such as Movable Type and MediaWiki have developed their own custom anti-spam measures, as spammers focus more attention on targeting those platforms. Whitelists and blacklists that prevent certain IPs from posting, or that prevent people from posting content that matches certain filters, are common defenses. More advanced access control lists require various forms of validation before users can contribute anything like linkspam.

The goal in every case is to allow good users to continue to add links to their comments, as that is considered by some to be a valuable aspect of any comments section.

RSS feed monitoring

Some wikis provide an RSS feed of recent changes or comments. Adding that feed to a news reader with a saved search for common spam terms (usually "viagra" and other drug names) makes it possible to quickly identify and remove the offending spam.
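The same saved-search idea can be automated with a short script; the watch list here is an illustrative assumption, and the sketch assumes a plain RSS 2.0 feed with `item`/`title`/`description` elements.

```python
import xml.etree.ElementTree as ET

SPAM_TERMS = ("viagra", "casino")  # hypothetical watch list

def flag_spam_entries(rss_xml: str) -> list:
    """Scan an RSS 2.0 recent-changes feed and return the titles of items
    whose title or description mentions a watched spam term."""
    root = ET.fromstring(rss_xml)
    flagged = []
    for item in root.iter("item"):
        text = " ".join(
            (item.findtext(tag) or "") for tag in ("title", "description")
        ).lower()
        if any(term in text for term in SPAM_TERMS):
            flagged.append(item.findtext("title"))
    return flagged
```

Run periodically against the wiki's recent-changes feed, this surfaces spammed pages for manual cleanup without requiring any change to the wiki software itself.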

Response tokens

Another filter available to webmasters is adding a hidden session token or hash to the comment form. When a comment is submitted, data recorded with the posting, such as the IP address and time of posting, can be compared against the data stored with the session token or hash generated when the user loaded the comment form. Postings that use different IP addresses for loading the comment form and submitting it, or that took an unusually short or long time to compose, can be filtered out. This method is particularly effective against spammers who spoof their IP address in an attempt to conceal their identities.
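One way to implement such a token, sketched here under stated assumptions (the secret key, the 5-second/1-hour composition window, and the token layout are all illustrative), is to sign the viewer's IP address and the form-load time with an HMAC and embed the result in a hidden form field:

```python
import hashlib
import hmac

SECRET = b"server-side secret"        # hypothetical per-site key
MIN_SECONDS, MAX_SECONDS = 5, 3600    # plausible time to compose a comment

def issue_token(ip: str, now: float) -> str:
    """Sign the viewer's IP and form-load time for a hidden form field."""
    payload = f"{ip}|{now}"
    sig = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}|{sig}"

def verify_token(token: str, ip: str, now: float) -> bool:
    """Reject posts whose IP changed between load and submit, or that
    were composed implausibly quickly or slowly."""
    try:
        token_ip, issued, sig = token.rsplit("|", 2)
    except ValueError:
        return False
    payload = f"{token_ip}|{issued}"
    expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False
    elapsed = now - float(issued)
    return token_ip == ip and MIN_SECONDS <= elapsed <= MAX_SECONDS
```

Because the token is signed server-side, a spam tool cannot forge a plausible load time or a different IP without knowing the secret key.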

Ajax

Some blog software, such as Typo, allows the blog administrator to accept only comments submitted via Ajax XMLHttpRequests and to discard regular form POST requests. This causes the accessibility problems typical of Ajax-only applications.

Although this technique prevents spam so far, it is a form of security by obscurity and will probably be defeated if it becomes popular enough.

Switching off comments

An increasing number of bloggers have chosen to turn off comments entirely because of the volume of spam comments.

Buying Blog Comments

Websites have appeared where spammers can purchase blog comments from legitimate writers: the writers post comments using the spammer's chosen keyword as the username (and thus the anchor text) and the spammer's site as the URL. One such site is Buy Blog Comments, and similar services have begun appearing elsewhere.

See also

References