Wikipedia:Reference desk/Archives/Computing/2012 January 23

From Wikipedia, the free encyclopedia
Computing desk
Welcome to the Wikipedia Computing Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.


January 23[edit]

Sharing big files[edit]

How can I share legal files with people? I need a service without silly ads. — Preceding unsigned comment added by 88.9.209.157 (talk) 00:01, 23 January 2012 (UTC)

How big? Dropbox (minimum 2GB), SugarSync (5GB), Windows Live SkyDrive (25GB). See also comparison of file hosting services and comparison of online backup services. If the files are really big and other people are willing to help spread them you may want to use torrent. Von Restorff (talk) 00:05, 23 January 2012 (UTC)
SkyDrive does not allow for more than 50 MB, which is kind of too little. SugarSync is not free and Dropbox seems to require installing some software. Are there more options? BTW, it could be with ads, but not silly ads (like find a Russian bride or 18-yo wants to meet you). 88.9.209.157 (talk) 00:18, 23 January 2012 (UTC)
If you do not want to install software and a 2GB maximum is acceptable then you can use something like wetransfer.com. SugarSync has a free 5GB plan, but they try to hide it a bit. Go here. It says: "5 GB Free Plan - Not a trial but a free account with no credit cards and no monthly payment" (maybe you need to scroll down a bit). I am not 100% sure but it is likely you will need to install some software. No need to worry about that, it is not malware, and once you've uploaded the files you can uninstall the software. Another option with 5GB free is Ubuntu One but their Windows client sucks imho. Von Restorff (talk) 00:34, 23 January 2012 (UTC)
Our article says 100 MB although that's still very small. Nil Einne (talk) 13:33, 23 January 2012 (UTC)

Sendoid is pretty convenient, but I wouldn't use it for really large files. Most instant messaging clients and protocols support file transfer, too. ¦ Reisio (talk) 01:05, 23 January 2012 (UTC)

Sendoid's web interface limits total transfer size based on the resource availability of your local machine. This tends to be somewhere between 600MB and 1GB. Von Restorff (talk) 01:11, 23 January 2012 (UTC)
DropBox is pretty convenient. You can use it without installing the software. It also has a web interface. --Mr.98 (talk) 01:08, 23 January 2012 (UTC)
I <3 DB. I am not a fan of uploading big files via the website, but it certainly is possible. Von Restorff (talk) 01:09, 23 January 2012 (UTC)
What about the original File Transfer Protocol? And it's free!--Aspro (talk) 01:30, 23 January 2012 (UTC)
Oh, yeah, I almost forgot that still existed ;) A couple of years ago I had an FTP server running at home but I replaced it with the combination of torrents with webseeds and Dropbox. Von Restorff (talk) 01:39, 23 January 2012 (UTC)
What would you do without us old fogies? :) --Aspro (talk) 01:55, 23 January 2012 (UTC)
Google Docs doesn't require special software, although it only gives you 1GB for free per account. There is effectively no file size limit for files Google Docs doesn't interpret, since the limit is 10GB [1]. If you want more options, I suggest you check out the article linked to by VR, since I'm pretty sure it includes most of what's been discussed and more. Nil Einne (talk) 13:33, 23 January 2012 (UTC)
Files that you upload but don’t convert to Google Docs format can be up to 10GB each. This upload limit is larger than the free storage space given to each Google Docs user. Every user is given 1GB of free storage space for files, and can purchase additional Google Docs storage to upload larger files. Von Restorff (talk) 17:31, 23 January 2012 (UTC)
Isn't that what I said? Well I didn't specifically mention that you can purchase more space, but I thought that would be obvious. Nil Einne (talk) 23:29, 23 January 2012 (UTC)
Now that you mention it, yes, it is. Meth is a hell of a drug. Von Restorff (talk) 01:16, 24 January 2012 (UTC)

seizing websites: how does it technically work?[edit]

In light of the Megaupload incident, there is something I've always wondered: just how exactly does the U.S. government seize domain names?

I'm under the impression that this is done through court orders, which would require the domain registrars to hand over the domain names to the appropriate agency. It would be natural to think that this would take some time because the court has to deliver the order to a registrar's legal department, which would then forward the instructions to the technical department. Yet the shutdowns usually happen pretty quickly. Is it safe to assume that registrars have a very short timeframe in which to comply or face penalties? Or is there another means for the .gov to shut down websites?

This brings me to my second question: a domain name merely resolves to an IP address. However, very few websites have come back online quickly under a different domain after being shut down (Megaupload certainly doesn't seem to be one of them). In this case, does the government do anything other than seize a domain name, such as ordering the hosting provider to suspend access to the actual resources? I do know that authorities can seize the physical servers on which an infringing website is located, but this isn't always the case, especially for people that use overseas servers. --Ixfd64 (talk) 01:52, 23 January 2012 (UTC)

Most large companies have on-staff legal departments that receive government and court instructions and know how to respond to them (not all such proceedings are necessarily mandatory, and a company may wish to appeal). If those people believe their company should act, they have access to internal processes and people to get stuff done quickly. Generally a court will require them to act with reasonable diligence - if they do so in a way that's consistent with how they'd handle their own urgent business then a court will likely be satisfied with that. As for a shutdown - indeed, seizing only a domain allows the service to continue under another domain (and some things like p2p networks don't rely on DNS to work). They'll want the servers themselves physically seized, because these have evidence of the suspect's alleged wrongdoings (e.g. logs, email traffic) and if a suspect knew he was under investigation he'd be likely to wipe his whole system. So for maximum effectiveness (bearing in mind they want a solid prosecution, not just to shut down a given service and have a workalike pop up a few days later) law enforcement wants to coordinate its efforts into one swift strike. Doing that, particularly over cross-jurisdictional and cross-country boundaries is difficult and time consuming (and will have required other parties to consult their own legal teams). The MegaUpload raids will have been weeks in the planning; they didn't just brainstorm this idea one day and execute it the next. While this kind of thing is unusual for technology stuff, such synchronised operations are par for the course for organised crime, drugs, people smuggling, and terrorism cases. 87.115.127.51 (talk) 02:27, 23 January 2012 (UTC)
The process is also something of a freight train; once they've got it started, it's hard to stop. Lots of people in the US and NZ authorities (and probably other jurisdictions like HK) will have been involved in the logistics, and once they've fixed a date they'll have essentially "booked" the time of police resources like officers' time and helicopters. The coincidence of the MegaUpload raids and the planned voting dates for PIPA and SOPA seems too uncanny to ignore - but even with those kicked seemingly into the long grass, the raids would either have to go ahead anyway or news of their cancellation would come out (too many rank-and-file people knew stuff by then). 87.115.127.51 (talk) 02:33, 23 January 2012 (UTC)
Thanks for the detailed explanation, 87.115.127.51. That makes a lot of sense. Cheers! --Ixfd64 (talk) 02:48, 23 January 2012 (UTC)
As two additional points, the date of the raids was selected for a reason. There was a birthday party for one of the accused planned for the time, which meant several of the accused would be present. [2]. Also while the US government took over the domain fairly fast, it wasn't as soon as the raids happened. For several hours Megaupload was unavailable, generally believed to be because their US based servers were seized. It was only later that the seizure notice came up. Nil Einne (talk) 14:10, 23 January 2012 (UTC)

Hot CPU?[edit]

I run a CPU intensive task in the background all of the time on an i7. The CPU temperature is about 70C, which SpeedFan says is way too hot. Is this bad for the CPU? Bubba73 You talkin' to me? 04:49, 23 January 2012 (UTC)

PS, Speccy says that it is 75C and still in the OK range. Bubba73 You talkin' to me? 04:54, 23 January 2012 (UTC)
Pushing it I reckon. See http://communities.intel.com/thread/21563 for other opinions (if you like). CPU cooling systems exist for good reason. fredgandt 05:09, 23 January 2012 (UTC)
I'd say you need to be in the 30C to 50C range. Things like CoolerMaster get you there. High temperatures not only lower the lifetime of your components but can cause random problems like hanging etc. Sandman30s (talk) 11:56, 23 January 2012 (UTC)
Since I posted that, I found some blog that said that 72.5C is the recommended max temp for my CPU. Speccy isn't bothered by it being 74C. The motherboard and GPU are 45 to 47C - only the CPU is hot. Bubba73 You talkin' to me? 17:55, 23 January 2012 (UTC)
Are you sure that the sensors are correctly labeled? If your CPU temp reaches 70-75 degrees it is time to clean out the dust, double-check that the fans are working correctly and buy extra cooling equipment, but I think it is the GPU. Von Restorff (talk) 21:07, 23 January 2012 (UTC)
I am not sure they are labeled correctly, but that is what both Speccy and SpeedFan say. This is a quad-core CPU with 100% CPU use. Bubba73 You talkin' to me? 23:30, 23 January 2012 (UTC)
My guess would be that the sensor that reports the highest temperature after playing a modern game like Battlefield 3 for a while is measuring the GPU's temperature. Von Restorff (talk) 23:54, 23 January 2012 (UTC)
I don't play any such games, but actually I'm pretty sure that it is labeled correctly because if I turn off the program that is using the CPU heavily, the CPU temperature quickly drops to about 51C and in a few more seconds, to around 48C, according to Speccy. Bubba73 You talkin' to me?
75 C seems high to me, even for a high-end i7 on air cooling. What's the TDP? I have a substantially overclocked i5-2500k (TDP 95 W) with a basic closed-loop water cooler (Corsair H50) and it runs about 58-60 C at 100% load. Start by cleaning the dust out of the radiator, checking to make sure that the cooler fan operates at full speed, and checking the overall airflow inside the case. (I suspect that you are having problems with the airflow - 45 C for a motherboard is also high.) Add a couple of 120mm fans. Try keeping the case open and see if that helps.--Itinerant1 (talk) 06:04, 25 January 2012 (UTC)

I don't know what TDP is, but I'll check for dust and check the airflow. Bubba73 You talkin' to me? 02:11, 28 January 2012 (UTC)

Referrer spam possible in non-bot traffic?[edit]

According to this article, the best way to respond to referrer spam in Google Analytics is to filter out traffic with known spammy referrers. This makes sense to me if the traffic is all generated by bots, since it doesn't indicate actual readership, but is it possible that a visitor may have malware that simply spoofs the referrer header on their legitimate traffic (since that would be harder to detect on their firewall or modem lights, compared to creating traffic when they weren't browsing)? NeonMerlin 08:59, 23 January 2012 (UTC)
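For what it's worth, a Referer header is just client-supplied text in the HTTP request, so spoofing it requires no special access at all; any software on the machine that can make a request can claim any referrer it likes. A toy sketch (the domain spam.example is a made-up placeholder):

```shell
# A spoofed Referer is just one line of text in the raw HTTP request;
# nothing stops a client (or malware on it) from writing any value here.
printf 'GET / HTTP/1.1\r\nHost: example.com\r\nReferer: http://spam.example/\r\n\r\n' > request.txt
grep -i '^referer:' request.txt
```

This is why referrer data in analytics is treated as untrusted input rather than evidence of a real visit.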

It's certainly possible in theory... but if malware had that level of control over your machine, why would it bother spoofing referrers - which might give only a 1-in-1000 chance of someone looking through the referrer logs and following them - instead of simply redirecting your browser traffic, forcing you to visit the sites it sponsors? Unilynx (talk) 18:53, 23 January 2012 (UTC)
The idea behind referrer spam is not that sysadmins will visit the site, but that the logs will get posted somewhere or be publicly available, and thus boost pagerank and things like that. --Mr.98 (talk) 19:56, 23 January 2012 (UTC)
My experience with referrer spam is that Google Analytics automatically filters out 99% of it. (I've come to this conclusion by comparing my Google Analytics listings with my raw server statistics; the latter have so much referrer spam that it's nearly impossible to sort the signal from the noise, while Google Analytics has removed almost all of it.) That doesn't really answer the question, though, other than to suggest that Google already has clever enough algorithms in place that doing it manually isn't worth the time. It's possible that legit users could have wacko referrers, but that's not the current situation at all, where the vast majority of referrer spammers are clearly bots. --Mr.98 (talk) 19:56, 23 January 2012 (UTC)
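The manual filtering approach described above boils down to a pattern match against known spammy referrer domains. A hypothetical sketch on a combined-format access log (the spam domains are made up for illustration):

```shell
# Build a tiny sample log: one hit with a spammy referrer, one legitimate.
cat > access.log <<'EOF'
1.2.3.4 - - [23/Jan/2012:00:00:01 +0000] "GET / HTTP/1.1" 200 512 "http://buy-cheap-pills.example/" "SpamBot/1.0"
5.6.7.8 - - [23/Jan/2012:00:00:02 +0000] "GET /page HTTP/1.1" 200 1024 "http://www.google.com/search?q=foo" "Mozilla/5.0"
EOF
# Drop any line whose referrer matches the (illustrative) blacklist.
grep -v -E 'buy-cheap-pills\.example|casino-spam\.example' access.log > clean.log
cat clean.log
```

The same idea scales to a blacklist file with `grep -v -f spammers.txt`, though as noted it is a losing battle compared to letting the analytics provider's filters do the work.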

"expert" user-generated content[edit]

Wikipedia's own article on "user-generated content" doesn't really deal with one species that I often encountered a few months ago but (whether because I'm googling for different subjects or because Google has downranked the sites) I seldom encounter these days -- websites that invite individuals to post their own factual (?) and helpful (?) complete articles on particular subjects. (Unlike Wikipedia, individual pages don't invite collaboration or improvement by others.) Some of the content on these sites looks at least moderately good at first glance, and the sites tend to have sober designs, presumably in the hope of both attracting potential writers and impressing potential readers.

(I'd also often encounter links to these sites as would-be references in Wikipedia articles, and would zap them [as unreliable] when I found them; I haven't seen such links recently [whether because I spend less time looking for them or because most are now in the spam blacklist].)

I have a fleeting and "academic" interest in such websites. Question, then: Is there a name for this kind of website? If not particularly (as I'd guess), can somebody jog my memory with the domain names of a few examples? (Better not attempt to link to them.) -- Hoary (talk) 09:21, 23 January 2012 (UTC)

I presume you mean things such as eHow and Associated Content. My experience with both is that their content is generally not particularly good. I have occasionally provided them as refs on the RD, either because there was nothing better or as an additional ref, but often don't bother to even read them when I come across them in searches. Both are blacklisted on Wikipedia. I'm not surprised, because writers are usually paid at least partly based on how many hits they get (or, more accurately, the advertising revenue from their article), so there's a strong incentive to spam from people who have little to lose. And the sites are unlikely to have much legitimate use here, since they aren't likely to be considered RS. Nil Einne (talk) 13:45, 23 January 2012 (UTC)
P.S. Other examples would be About.com, HubPages, Helium.com and Google Knol (which is destined for death) Nil Einne (talk) 14:05, 23 January 2012 (UTC)
Yes! Helium.com was one of them. (I did know about About.com, but this site is an odd mixture indeed, containing worthwhile material among the dross. And the dross includes a lot of stuff by people who are identified and who presumably were paid.)
Armed with the name helium.com, I soon found writing.com (for "literature" only?), suite101.com, voices.yahoo.com (previously associatedcontent.com), ... Ugh.
(Fear not: I have no intention of pressing for recognition of websites such as helium.com as reliable sources. And if somebody were to point out that this or that page within it was of excellent quality, this might surprise me but it wouldn't change my opinion. After all, the occasional undergraduate term paper that's posted to the web is of excellent quality, but we don't accept even masters' theses, and nor should we.) -- Hoary (talk) 02:20, 24 January 2012 (UTC)

Some files missing - TestDisk didn't help (but then it did)[edit]

Resolved

I use a standalone hd-recorder to record from tv, for which I use an external hd. When I unplugged the hd during a recording to view on my computer (Linux Mint), all the files (and directories) in the directory to which it recorded were missing (on my old hd this was never a problem). Recordings in another directory on the same partition were still there. When I plugged it back in the recorder that did show some recordings, but only recent ones (not the ones I had already renamed on my computer).
So I analysed it with TestDisk, which reported

ntfs_device_testdisk_io_ioctl() unimplemented
NTFS Volume is dirty.

'List files' showed all the files. Not really knowing what I was doing, I tried 'Write' (and reboot) and when that didn't help 'Load backup', which didn't help either. So I tried Disk Utility's 'check filesystem', which reported 'File system is clean'. But TestDisk still reports it to be dirty. So now I'm stuck.
I used PhotoRec successfully before, but this is a 2 TB partition, which is going to take forever. And I'd have to buy another hd of that size to write to.
Any idea what I might try? DirkvdM (talk) 14:43, 23 January 2012 (UTC)

Aha, that annoying "dirty bit" problem. Sorry, I am a Wind0wsn00b, but I would do something like this. Von Restorff (talk) 15:44, 23 January 2012 (UTC)
Does Linux have something like chkdsk? I don't have msWindows. (Btw, I've read several warnings not to use chkdsk, because it can really screw things up.) DirkvdM (talk) 18:35, 23 January 2012 (UTC)
Oh, and before anyone asks, I use NTFS because that's what the recorder wants. DirkvdM (talk) 18:36, 23 January 2012 (UTC)
Make a backup! fsck is a chkdsk equivalent. Or install the ntfsprogs package and use the tool in that. Von Restorff (talk) 20:54, 23 January 2012 (UTC)
Ntfsprogs is already installed. I ran ntfsfix (after unmounting the partition), which says "Failed to startup volume. Permission denied." and then "Volume is corrupt. You should run chkdsk." But I don't have msWindows. So I tried fsck, but I get "fsck.ntfs: not found". Which I already anticipated - it's only for Linux/Unix filesystems, I assume. DirkvdM (talk) 09:48, 24 January 2012 (UTC)
Well, get a Windows CD/DVD (in a legal or illegal way), and use it as a livecd so you do not have to install it. Von Restorff (talk) 10:04, 24 January 2012 (UTC)
Hold on. TestDisk does help!
Before trying your suggestion I wanted to make a complete overview of all the files, using TestDisk, when I coincidentally noticed an option 'copy'. So I tried that and indeed it copied the file (or directory). I have now copied everything and all seems to be well. It worked at normal copying speed.
For a complete description: open a console, go to the dir where you want to copy the 'lost' files to (and where the log file will be put), and type 'testdisk'. Then (in my case) Create new log file > give root password > select partition > Intel partition table type (the default is usually correct) > Analyse > Quick search > Vista? ('No' in my case). And then type 'p' to browse the partition with the arrow keys. Select any file or directory you want to copy (works one at a time) and press 'c'.
I don't know if any of the earlier actions ('Write' and "Load backup') were necessary for this to work, but they don't seem to have hurt. :) DirkvdM (talk) 11:13, 26 January 2012 (UTC)
Oh lol, well that probably saved you some time. Von Restorff (talk) 12:24, 27 January 2012 (UTC)

service providers and passwords[edit]

Do mail service providers like Gmail or Yahoo save passwords in retrievable form? Do they handover passwords to law enforcing agencies? --117.253.198.143 (talk) 16:36, 23 January 2012 (UTC)

Would you consider a salted hash to be retrievable? It is more likely they just make a copy of the contents of the account. Von Restorff (talk) 16:46, 23 January 2012 (UTC)
One would have to know how those services store passwords (and it is unlikely that they would reveal specifics) but 99% of the time websites do not store the actual password for security purposes. Rather, when the user creates the account the service generates a hash of the password, often salted. When the user tries to log in later, the same hashing and salting function is used on the password entered, and the two hashes are compared. By doing this, the service does not know your actual password. TheGrimme (talk) 17:19, 23 January 2012 (UTC)
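The hash-and-salt scheme described above can be sketched in a few shell commands. This is illustrative only: real services use slow, purpose-built KDFs such as bcrypt or PBKDF2 rather than a single SHA-256 round, and the password here is a made-up example.

```shell
# Toy sketch of salted password hashing. At signup: pick a random salt,
# store hash(salt + password) alongside the salt, never the password itself.
salt=$(head -c 8 /dev/urandom | od -An -tx1 | tr -d ' \n')
password='hunter2'   # hypothetical user password
stored_hash=$(printf '%s%s' "$salt" "$password" | sha256sum | cut -d' ' -f1)
# At login: hash the submitted password with the stored salt and compare.
login_hash=$(printf '%s%s' "$salt" "$password" | sha256sum | cut -d' ' -f1)
[ "$stored_hash" = "$login_hash" ] && echo "login ok"
```

Because only the hash is stored, the service can verify a login without ever being able to hand over the original password.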
And just to follow up on your question: while these services might not give your password to law enforcement agencies, they can easily provide either alternative access or an exported view of your account. Email services will likely cooperate with law enforcement if the agency has a subpoena. TheGrimme (talk) 17:24, 23 January 2012 (UTC)
Bear in mind the very important distinction between two very different things:
  • An account, like Google Mail, that uses secure transport, such as https, to provide cryptographically-secure access to a Google-owned server that stores your unencrypted data
  • A server that encrypts your user-data (or, a user-side client that encrypts the data so that the server cannot meaningfully interpret it).
When you connect to GMail over a secure channel (like HTTPS or IMAP/TLS), your use of industrial-strength cryptography "guarantees" only one thing: you and Google, but nobody else, can read the private contents of your emails. No third party can intercept the data while it is in transit.
Google still has totally unimpeded access to your data. If you are uncomfortable with this, you have several options:
  • Manually encrypt your data for end-to-end transmission, before you enter it into the email client, so that the server never sees your private data in plain-text. This can be fairly inconvenient, but it's much more secure. At the same time, this only encrypts the data, but the mail headers (including the recipients' addresses) must still be readable by the server. Essentially, this means that whoever received your email will receive "junk data" and must have the expertise, willingness, and software tools, to decrypt the contents back to human-readable form.
  • Avoid using Google (or Hotmail, or any other untrusted service-provider). If you don't own and operate the mail server, you don't know how the owner/operator is using your data. Even still, unless you own and operate every element of the network between you and your destination, even encrypted traffic is still subject to traffic analysis; so you should be sure to send as much white-noise as possible. Nimur (talk) 19:40, 23 January 2012 (UTC)
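The manual-encryption option above can be sketched with OpenSSL's symmetric mode. This is a simplified illustration with a shared passphrase; a real setup would more likely use the recipient's public key (e.g. PGP/GPG), and the filenames and passphrase are placeholders.

```shell
# Encrypt the message body client-side, before it ever reaches a mail
# provider. The provider only ever sees the ciphertext (msg.enc).
echo 'meet at noon' > msg.txt
openssl enc -aes-256-cbc -salt -pass pass:'correct horse' \
  -in msg.txt -out msg.enc
# The recipient decrypts with the same shared passphrase:
openssl enc -d -aes-256-cbc -pass pass:'correct horse' \
  -in msg.enc -out msg.dec
cat msg.dec
```

As noted above, this protects only the body: headers such as the recipient address must stay readable by the servers that route the mail.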
Simple test: if a site lets you reset your password after you forget it, that means the data isn't stored in a securely-encrypted fashion. --Colapeninsula (talk) 11:03, 24 January 2012 (UTC)
Shouldn't that be: If a site lets you recover your current password? If a site has you reset the password to a new value, that generally means they can't actually retrieve the existing password, and is thus hashed. 66.46.213.4 (talk) 18:01, 24 January 2012 (UTC)
No, ColaPeninsula's example was correct as it was originally written. If the contents were encrypted, it would not make sense to "set a new password." Creating a new password would be irrelevant for accessing the encrypted data, which could only be decrypted using the correct (old) password. Changing read-access to encrypted data doesn't make it useful.
This distinction comes down to what the password is being used for: is it being simply used to gate read/write access, or is it being used as the passphrase for an encryption/decryption key? On most IMAP servers, the password serves the former purpose, which is inherently less secure than the latter purpose. Nimur (talk) 19:38, 24 January 2012 (UTC)
I first thought the same thing as User:66.46.213.4, that although the site administrators don't know your current password, they certainly can either let you change it or change it themselves. But after I read your reply, I understood that a really secure site isn't just using a password to gain access to data, it's using a password to the data itself. JIP | Talk 21:26, 25 January 2012 (UTC)

Sophos and Spyware Quarantines[edit]

I am running Windows 7 with Firefox, and Sophos is my anti-virus program. Twice now in recent days, Sophos has quarantined spyware. The first was MAL/ExpJS-N. Sophos's "delete" and "move" options both failed, but when I rebooted for another reason later in the day, I noticed the quarantine area was then empty. Today, another gizmo shows up as quarantined. This one is named Troj/ExpJS-N. Both "delete" and "move" fail for it, too. I haven't yet rebooted to see what that does. These two spyware nasties seem to be more or less the same thing.

Question one: Is the second one the first one that didn't really go away, or do I have two separate, but similar, viruses?

Question two: Is it safe just to leave these viruses quarantined, or can they still do damage? Bielle (talk) 17:57, 23 January 2012 (UTC)

I do not know. Quarantined viruses are unable to do damage, and you can delete them from the quarantine if you want to. Von Restorff (talk) 19:51, 23 January 2012 (UTC)
That's one of the problems. I cannot delete it. The message comes up: Delete failed. Bielle (talk) 19:54, 23 January 2012 (UTC)
You cannot delete it from the quarantine??? That is weird. Reboot, and check if it is still there. Did you try running malwarebytes? Von Restorff (talk) 20:59, 23 January 2012 (UTC)
According to Sophos, the Troj/ExpJS-N has been around for about two years, and arrives via social websites etc. MAL/ExpJS-N is more recent, and seems to masquerade as anti-spyware, though they both seem to be obfuscated Javascript that directs your browser to dangerous websites. I'm puzzled by the failure of delete and move because Sophos seem to be aware of the malware and able to deal with it. Just in case the malware has somehow slipped through, follow Von Restorff's advice and download (free) Malwarebytes, update it, and run it once, just as a backup check. Dbfirs 09:10, 24 January 2012 (UTC)
I updated Malwarebytes and ran it. Everything came back clean (0). Troj/ExpJS-N was still showing as "quarantined" in Sophos when the Malwarebytes scan finished. Once again, I tried "delete" and when that came back "Delete failed" again, I rebooted. So far (2 hours and several visits to Fb, my email and WP later), "Quarantine" in Sophos remains empty. I'll be back if It returns. Thanks for your help. Bielle (talk) 17:17, 24 January 2012 (UTC)
If it reappears clean it again, then create a new restore point and then delete all but the latest restore points. Von Restorff (talk) 19:54, 24 January 2012 (UTC)

Free certificate authorities?[edit]

It may be possible that we have to install a web application we're delivering to our customer as an SSL application. For this we need a certificate signed by a trusted authority. Verisign seems to ask for about 500 to 1500 € for a certificate valid for about two to three years. It's possible that our customer doesn't want to pay that much. Are there any free trusted certificate authorities? JIP | Talk 20:10, 23 January 2012 (UTC)

You can self-sign and distribute your certificate directly. Nimur (talk) 20:22, 23 January 2012 (UTC)
In other words, you are free, and you are trusted by your customer, and you can make a certificate... so, you are a "free trusted certificate authority." Nimur (talk) 20:35, 23 January 2012 (UTC)
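A self-signed certificate of the kind Nimur describes takes only a couple of OpenSSL commands; the hostname and validity period below are illustrative:

```shell
# Generate a private key and a self-signed certificate in one step.
# -nodes leaves the key unencrypted; adjust -days and the subject CN
# (here a hypothetical hostname) for your own deployment.
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout server.key -out server.crt -days 730 \
  -subj '/CN=myapp.example.com'
# Inspect the result:
openssl x509 -in server.crt -noout -subject
```

Browsers will warn about such a certificate until the user (or their administrator) explicitly imports and trusts it, which is exactly the limitation discussed below.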
I'm afraid I didn't place my question clearly enough. The web application is going to be open to the public. It's a service our customer is providing to the public; we're just coding it. The communication is going to be between our customer and the public, not between us and our customer. The public isn't going to want to go through an extra step of accepting a previously unknown certificate, they will want it to just work. JIP | Talk 20:57, 23 January 2012 (UTC)
Then, you will need to purchase a certificate from one of the "trusted" authorities (or their authorized re-sellers). Certificates issued by these agencies are trusted by default in several major modern web browsers. Here is a list of authorities that are trusted by default in recent builds of Mozilla Firefox. You can buy directly from one of these agencies, or from one of their certificate resellers. A similar list of agencies exists for Internet Explorer, Safari, and so on. For example, Apple publishes its Program Requirements for root certificate authorities. Similarly, Microsoft publishes its Microsoft Root Certificate Program rules. Android devices are pre-programmed to trust root certificate agencies by the hardware vendor, specific to each hardware vendor's variant of the operating system; for example, here is Motorola's ANDROID - Root certificate management policy. If you want the certificate to "just work," you need to pre-purchase "trust" from an agency (or multiple agencies) to cover each platform you care about. Nimur (talk) 22:20, 23 January 2012 (UTC)
Yes; probably the best-known 'free certificate authority' is CAcert.org, which considers itself community-based. However, it hasn't been included in any well-known browsers AFAIK. As our article mentions, there was discussion of including it in Firefox at one stage, but this was abandoned after an audit raised concerns over some of their practices. See also [3]. Nil Einne (talk) 23:24, 23 January 2012 (UTC)
USD 500 for 2-3 years is high. Shop around, I think I paid USD 60 per year last I bought a cert (not telling where I bought it so as not to advertise, and so you can maybe find an even better price.) Captain Hindsight (talk) 16:03, 24 January 2012 (UTC)