Wikipedia:Reference desk/Computing

From Wikipedia, the free encyclopedia


Welcome to the computing section
of the Wikipedia reference desk.
Want a faster answer?

Main page: Help searching Wikipedia


How can I get my question answered?

  • Select the section of the desk that best fits the general topic of your question (see the navigation column to the right).
  • Post your question to only one section, providing a short header that gives the topic of your question.
  • Type '~~~~' (that is, four tilde characters) at the end – this signs and dates your contribution so we know who wrote what and when.
  • Don't post personal contact information – it will be removed. Any answers will be provided here.
  • Please be as specific as possible, and include all relevant context – the usefulness of answers may depend on the context.
  • Note:
    • We don't answer (and may remove) questions that require medical diagnosis or legal advice.
    • We don't answer requests for opinions, predictions or debate.
    • We don't do your homework for you, though we'll help you past the stuck point.
    • We don't conduct original research or provide a free source of ideas, but we'll help you find information you need.



How do I answer a question?

Main page: Wikipedia:Reference desk/Guidelines

  • The best answers address the question directly, and back up facts with wikilinks and links to sources. Do not edit others' comments and do not give any medical or legal advice.

August 14

RSS

Do online RSS readers like Google Reader update feeds even when you're not logged in? So for example, if I added a feed, logged out and logged back in a week later, would Google have saved all the news from that week? 82.44.54.4 (talk) 00:00, 14 August 2010 (UTC)[reply]

In the specific case of Google Reader, it would appear so. I subscribe to BoingBoing there, but I never actually bother looking at Google Reader. When I opened it just now, it shows >700 entries for that feed. But if I look at the actual RSS that BoingBoing syndicates, they're only publishing 30. So Google Reader must have been periodically visiting the feed and keeping its own database updated. That's a sensible thing for it to do, but it's a decision that the Google Reader people have chosen to make. They could quite reasonably have stopped visiting after a while, or kept only so many entries (maybe they have, it's hard to know). Other online readers may work entirely differently. -- Finlay McWalterTalk 00:31, 14 August 2010 (UTC)[reply]
Having gone through a longish (several month) period without checking Google Reader, I can hopefully add some perspective. *Most* of the feeds were maintained in my absence - that is, as far as I can tell, when I checked back in, all the updates that were added while I was gone were there. However, some of the "personal" feeds (specifically "keep me updated on these search terms"-type feeds where I was the only subscriber) *weren't* updated, and it looked like I was missing portions. Here's how I think Google Reader works: Google only keeps a single list of items for each feed address, and that's shared by everyone who subscribes. That's why you can get several years' worth of feed history when you sign up to a feed - Google has the information stored from earlier subscribers. However, Google only updates the feed if *someone* is actively looking at it. It doesn't have to be you; it could be someone else who has subscribed. If it's a popular feed (say the BBC News feed), someone else will probably cause it to be updated while you're gone. However, if you are the only one watching it, Google won't bother to check for updates unless you log in, so you may lose some if you don't log in regularly. That's Google Reader - I don't know about other online feed readers. -- As a final note, RSS feeds usually contain more than just the last entry/last day's entries (some rarely updated ones can even contain the complete history). So even if Google doesn't check the feed for several days/weeks, there is a chance that older posts will still be in the feed itself. -- 174.21.233.249 (talk) 20:27, 15 August 2010 (UTC)[reply]
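
To make the polling model described above concrete, here is a minimal Python sketch of what an aggregator's fetch-and-store loop might look like. The feed URL is a placeholder, and a real service like Google Reader would also handle Atom feeds, conditional GETs and per-feed scheduling:

    import urllib.request
    import xml.etree.ElementTree as ET

    def poll_feed(url, seen):
        # Fetch an RSS 2.0 feed and record any items not seen before.
        with urllib.request.urlopen(url) as response:
            root = ET.parse(response).getroot()
        new = []
        for item in root.iter("item"):
            key = item.findtext("guid") or item.findtext("link")
            if key and key not in seen:
                seen[key] = item.findtext("title")
                new.append(key)
        return new

    # The aggregator's own store outlives any single fetch, which is why
    # old entries stay available after they drop out of the live feed.
    seen_entries = {}
    # poll_feed("http://example.com/rss.xml", seen_entries)  # run on a timer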

Identical

This is probably a silly question, but I just want to clarify anyway; is anything lost or altered when files are copied? For example, say I have a file on my computer. I put it onto a USB drive and gave it to a friend; he/she then uploaded it to Rapidshare, where someone in Australia downloaded it, burned it to a disk, and sent it back to me in the post. Would the file on the computer and the file on the disk be identical? 82.44.54.4 (talk) 00:05, 14 August 2010 (UTC)[reply]

Yes, they should be identical - that is, if the file content was different, that would be considered a defect in the transmission process. There are some (mostly nitpicky, won't matter to you in reality) issues. Firstly, the meta-information (the file's name, modification date and other date info that some computers store about files, and owner info) might be changed or lost - mostly you don't care about that, but sometimes if you're transmitting files where that does matter, artefacts of the transmission process can break things you didn't expect. Secondly, some file systems store additional data with files (NTFS stores "alternate data streams", Mac OS stores "resource forks", and some fancier filesystems store complicated things like revision info); depending on the tools you use to transmit and remotely store such files, these data might also be lost - this too is a rare and esoteric thing, so you'll very rarely care about it. And lastly there are weird doings with text files (truly .txt ASCII files) where transmitting from a DOS/Windows system to Unix (or to Mac) might cause the line termination characters to be translated to suit (see Newline for a painful explanation) - sometimes you'll want this to happen, sometimes you definitely won't, but these days most programs handle everything as binary and you won't see any difference. So, for a simple binary store-and-retrieve thing like Rapidshare, the files should be 100% identical - you and your Australian counterpart can both generate MD5 checksums of the file and you should both get the same code. -- Finlay McWalterTalk 00:17, 14 August 2010 (UTC)[reply]
Depending on your level of interest, you might find the article on Parity bit interesting. Vespine (talk) 00:57, 14 August 2010 (UTC)[reply]
Not sure if you're asking out of pure curiosity or you're planning on copying a lot of files. If it's the latter, I've found data transfer to CDs and Flash media is less than 100% reliable, particularly at fast write speeds for CDs and old or cheap USB flash drives. Generating checksums as Finlay McWalter mentions is a great idea to be sure the transfer went okay, especially with photos and other irreplaceable files. To be clear, digital files don't "degrade" with each transfer like you would expect from a cassette tape, but every once in a while some bytes can be copied wrong and corrupt or distort a file. Checksums alert you when this happens so you can redo the transfer.--el Aprel (facta-facienda) 04:10, 14 August 2010 (UTC)[reply]
While I usually write checksums to optical media myself, note that programs like Nero and Imgburn can verify the written image or files after burning. Nil Einne (talk) 07:29, 14 August 2010 (UTC)[reply]
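
For the checksum comparison suggested above, Python's standard hashlib module is one way to do it (the file names here are made up for illustration):

    import hashlib

    def file_checksum(path, algorithm="md5"):
        # Hash the file in chunks so large files need not fit in memory.
        h = hashlib.new(algorithm)
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    # Matching digests mean the two copies are bit-for-bit identical.
    # print(file_checksum("original.dat") == file_checksum("copy_from_disc.dat"))

The same check works no matter how many hops the file has taken; any single flipped bit along the way changes the digest.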

Activating OpenType Features

How do I activate the various "features" of an OpenType font in a program such as Word 03? (And if I can't, what program can I do it in?) For instance, this page (http://robert-pfeffer.spacequadrat.de/schriftarten/englisch/nachgeladener_rahmen.html?pfeffer_mediaeval.html) mentions things like "activating the 'hist' feature for historical text layout" and "activating stylistic set 'ss01'", but I have no idea how to do that. The "OpenType User Guide" isn't very helpful either. Help would be appreciated. 64.179.155.63 (talk) 06:13, 14 August 2010 (UTC)[reply]

The program in question (e.g. Word) needs to support the features. You can't make it support them if it doesn't already do so. I don't know if Word '03 does. This table shows how some of these features are supported in various programs — Word 2010 does OK, for a word processor. At the bottom of the page it shows what the features look like in a few programs that support them. --Mr.98 (talk) 19:31, 14 August 2010 (UTC)[reply]
But how do I activate those features? Is it usually in the Format->Font window or.. what? No website I've found says anything about how to do such cryptic, specific actions... only about what programs can do them. 64.179.155.63 (talk) 00:35, 15 August 2010 (UTC)[reply]

Defaulting Australian English in Word 2007?

Hi everyone,

does anyone know how to make Australian English the default language in Word 2007, and actually remove the US, so that any US spelling will be highlighted and not slip through with the Australian?

Thanks in advance Adambrowne666 (talk) 10:03, 14 August 2010 (UTC)[reply]

As far as I'm aware Word 2007, as with all versions of Word, should highlight spelling considered incorrect in Australian English (even if okay in US English) by default when the document is set to Australian English. I've tried it myself on Word 2007 and it seems to work. Is your version not doing this? Bear in mind there are some words where it's not clear if an alternative spelling is considered incorrect in Australian English (as with other variants of English), so Word 2007 may have these in its Australian English dictionary and they aren't going to be considered incorrect. For example, recognise/recognize appears to be one of these (as is mum/mom but not sceptic/skeptic). Colour/color is a good test if you're looking for one.
You may not agree with all of Microsoft's views on what's correct/incorrect in Australian English (and some of them may be more 'bugs' than intentional choices anyway), but there isn't AFAIK any way to be 'stricter', so your best bet is to manually remove words you consider incorrect from the Australian English dictionary (well, there's probably some way, I don't know how) and complain to MS. You can also add autocorrections; Word already does this for things like kilometer (but not meter for obvious reasons).
In terms of how to change the default, click on the language shown at the bottom of the screen and you should get a list; select English (Australian) and then click on Default and OK and it should change the default. Note that this does indeed only change the default. If you have any existing documents where the language is set and saved (it usually is with Word documents) it will not affect them. You will have to manually change the language in existing documents, or use a macro or something to modify all existing ones. You could similarly probably make a macro which will automatically change documents you open to be Australian English; I'm not aware that Word has such a feature itself.
BTW, AFAIK, Word follows your OS language when it's installed, so you may want to change your OS language if it's something else to be Australian English for this and other programs.
Nil Einne (talk) 10:46, 14 August 2010 (UTC)[reply]
Thanks for the thorough answer, Nil Einne - a friend also sent me this step-by-step guide. Adambrowne666 (talk) 03:38, 16 August 2010 (UTC)[reply]

McGill Email

Hello. Is it possible to check my McGill email through Windows Live Mail? (Outlook Web Access would be the third party if I checked my email through my web browser.) The second setup window asks for my incoming and outgoing server information. Thanks in advance. --Mayfare (talk) 12:05, 14 August 2010 (UTC)[reply]

A quick search for 'mcgill email' leads me to [1]. Clicking on "Email and Calendaring" then "Email for Students" leads me to [2]. Under the "Accessing McGill email" section there is a "Configure an email client (application) to access McGill Exchange" link which leads to [3]. There is a bunch of stuff there telling you how to configure it for various clients, including Windows Live Mail. If this is the wrong McGill, perhaps it would be wise to specify which McGill you are referring to in the future. Generally speaking, though, the people best placed to tell you whether you can access your e-mail via an e-mail client are the people who provide you the e-mail. They also tend to provide the server settings, and very commonly - as in this case - specific instructions for popular clients. POP and IMAP are the most commonly used protocols for accessing email with standalone clients, so a search for your email provider's POP or IMAP servers is often a useful test. Nil Einne (talk) 14:10, 14 August 2010 (UTC)[reply]

Changing default printer does not stick

I'm running XP Professional - Service Pack 3 with automatic updates fully up to date. When I change the default printer - from a colour inkjet (HP Deskjet 5652) to a laser (HP Laserjet 1020) - the laser only stays the default until I reboot, then it reverts to the inkjet by itself. Both are connected by USB. How can I make sure the change becomes permanent? Roger (talk) 12:56, 14 August 2010 (UTC)[reply]

Random idea - swap the two connectors over in the USB slots? Maybe your PC assigns defaults based on the order they are connected in. Exxolon (talk) 16:46, 14 August 2010 (UTC)[reply]
You could also try deleting the Deskjet, rebooting, and re-adding it. From a quick google search, it doesn't look like other people with the problem have found simple solutions. Hmm. Indeterminate (talk) 20:47, 14 August 2010 (UTC)[reply]
I once used a library computer which was set up so that it created a new user from default settings every time someone logged in and then deleted that user when they logged out. It was good for the library so that any one user didn't mess up the machine's configuration, but if you had that kind of thing going on at home or at work it would be damn irritating if you wanted to change the default printer because that setting wouldn't stick. Astronaut (talk) 07:47, 15 August 2010 (UTC)[reply]
Thanks for all the advice. Here is how I fixed it: 1. Unplugged both printers. 2. Removed the Deskjet from "Printers and Faxes" 3. Set the Laserjet to default. 4. Rebooted. 5. Reconnected both printers. This caused the Deskjet to be re-installed by the "Found new hardware" function. 6. Checked that the Laserjet was still default. 7. Rebooted again. Problem solved! Roger (talk) 15:52, 16 August 2010 (UTC)[reply]

UDP

I read the UDP article but I don't get it. Can UDP be used instead of TCP? What is UDP for? What programs use it? 82.44.54.4 (talk) 14:06, 14 August 2010 (UTC)[reply]

Simply put, TCP and UDP are two different transmission protocols for moving packets across an IP (packet-switched) network. One distinction between TCP and UDP is their connection orientation -- whereas TCP is a connection-oriented protocol, meaning that a session is opened between client and server (and this session is opened, closed, acknowledged, etc.) -- UDP is a connectionless protocol. Another distinction is that TCP is a "reliable" protocol and UDP is not. This means that when a client sends a packet to the server using TCP, it can know for sure whether or not that packet arrived. In UDP, there is no acknowledgment of packet receipt. Here is a good breakdown —Preceding unsigned comment added by Rocketrye12 (talkcontribs) 15:00, 14 August 2010 (UTC)[reply]
(ec)They do rather different things, so mostly you wouldn't use them for the same job. The advantage of TCP is that it's "reliable", which really means that the packets are delivered to you in the order they were sent. So if a packet is lost or damaged, TCP will resend it, and won't deliver to the receiver any subsequent packets until it's got a good copy of the damaged one. So if reliability is what you care about, then you use TCP. Web pages and images are sent (over HTTP) over TCP, because a web page with the middle missing, or an image with the top half missing, is useless. But that reliability has a cost - if a packet has to be retransmitted, all the subsequent packets pile up behind it. While you're waiting for the retransmission, the delivery of packets appears to stall. For stuff where you care about the timely delivery (often more than reliability) using TCP would cause jumps. So things like video-chat use UDP instead - that way a lost packet doesn't hold up all the subsequent ones. Such a protocol thus has to be tolerant of such a fault - so for video it's okay if the picture skips, but it mustn't stall or disintegrate when a packet is lost. So mostly TCP and UDP do different things, and you'd use the one appropriate to your application. A few applications use UDP for reliable transfer (I think edonkey does), but that means they have to build their own retransmission stuff into their application protocol. -- Finlay McWalterTalk 15:04, 14 August 2010 (UTC)[reply]
Strictly, I've conflated "reliable" (you know the packet gets through) with "in order" (the packets arrive in the order they were sent) above; most applications either need both or neither. UDP is neither, TCP is both. P2P protocols like edonkey are unusual in that they have a meaningful use for "reliable, not in order", something that neither protocol can give them (so they essentially implement their own). P2P systems are very fault tolerant and deal with counterparts that only have part of the data (you can download a file from a bunch of sources, where each individual doesn't have a whole copy, so long as each byte of the file is held somewhere by someone in the swarm). So P2P clients are willing to build up a piecewise mosaic of a file, with the pieces arriving in crazy orders from a bunch of places. -- Finlay McWalterTalk 15:10, 14 August 2010 (UTC)[reply]
Agree with all that has been said above. Just to address the OP's original question (in case he/she has been lost in technical jargon): "Can UDP be used instead of TCP?" Well, if you are the user of a program, no; it is very rare for a program to offer a "mode" or user-configurable option to switch between the two. This is because (as has been explained above) the program is designed to use the protocol that best suits its needs; TCP and UDP function differently, and provide a different "contract" regarding data delivery. Just "switching" transmission protocols would be trivial, but the program would probably break in weird ways if it used a scheme other than the one it was designed for, so it would be useless to offer this as a menu option to the user. If you are the designer of a program (e.g. a programmer), you can easily switch between UDP and TCP, as long as you program the system to properly handle the resulting data-transmission behavior. Typically, a programmer does this by selecting either a TCP or UDP socket API in the programming language of their choice; in the special case of C or assembly programming, you can actually roll your own protocol-level data management and "re-implement" either protocol at the device level. Nimur (talk) 05:59, 15 August 2010 (UTC)[reply]
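
As an illustration of the socket-API choice mentioned above, here is a minimal Python sketch of a UDP exchange over the loopback interface (the port number is arbitrary):

    import socket

    # UDP: no connection, no handshake - just individual datagrams.
    receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    receiver.bind(("127.0.0.1", 9999))

    sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sender.sendto(b"hello", ("127.0.0.1", 9999))

    data, addr = receiver.recvfrom(1024)  # on a real network, this may never arrive
    print(data, addr)

    # The TCP equivalent would use SOCK_STREAM plus connect()/accept(),
    # with the kernel providing the ordering and retransmission discussed above.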

Windows Movie Maker

Whenever I try to use this, it tells me I have missing "codecs". What codecs should I download please? And also, what do I do with the codecs once I have downloaded them? I have WinXP Sp3. Thanks 92.28.247.227 (talk) 20:57, 14 August 2010 (UTC)[reply]

What this means is that you are trying to open a movie file in a format that Windows Media Player doesn't support out of the box. Codecs are library files that allow applications to work with more file formats. In this case I would have a look at free-codecs.com for a codec matching the file format you want to work with. Rjwilmsi 21:12, 14 August 2010 (UTC)[reply]

Thanks, but I'd like some more information if I may. Will all and any of the codecs work with Windows Media Player? Are some better than others? Why are there so many different ones? Should I have more than one codec pack installed? Are they all compatible or incompatible? How can I tell what I have installed already? Is there any kind of software program that can tell me what codec packs I should install? What do I do with the codecs once I have downloaded them? Thanks 92.28.248.196 (talk) 20:16, 15 August 2010 (UTC)[reply]

GSpot is a free program that will analyze a given video file and determine whether you have the proper video and audio codecs installed. It is small and quick and helpful. There are countless codecs you could install, but in practice, what you need most is DivX (also free, though be careful what options you choose during installation). Matt Deres (talk) 15:40, 17 August 2010 (UTC)[reply]


August 15

Online backup services

From what I've been reading, online backup services seem to be slightly pointless for anything more than a few GB worth of data. Let's say I have 1TB to back up and a home DSL connection. I haven't done the math but I've seen estimates that say that 3-4 GB can be uploaded in a day. On the slow end, that's nearly a year for the initial backup. So do any of these services offer the ability to physically send a drive full of data to them for the initial "upload"? Dismas|(talk) 05:24, 15 August 2010 (UTC)[reply]

Some quick math yields an estimate of 7.9 gigabytes per day uploaded for a person with a 768 kilobit-per-second upstream DSL connection, which would be 129 days for the initial round-the-clock upload. Yes, that's hilariously slow if you actually have 1 TB to back up and you have that slow an Internet connection. Of course, you could slash the amount of backed-up data by choosing not to back up your system and application directories. I disagree quite a lot with "slightly pointless", though. It's only pointless until the initial backup is complete. Then it's very pointful. Comet Tuttle (talk) 06:10, 15 August 2010 (UTC)[reply]
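
The back-of-the-envelope arithmetic, for anyone who wants to plug in their own connection speed (the exact figures shift a little depending on decimal versus binary gigabytes and on protocol overhead):

    upstream_bits_per_s = 768_000                 # 768 kbit/s DSL upstream
    bytes_per_day = upstream_bits_per_s / 8 * 86_400
    gb_per_day = bytes_per_day / 1e9              # decimal gigabytes
    print(f"{gb_per_day:.1f} GB/day")             # about 8.3 GB/day before overhead

    total_gb = 1000                               # 1 TB to back up
    print(f"{total_gb / gb_per_day:.0f} days")    # roughly four months, round the clock
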
I think I've read something on the RD before where someone mentioned such a service. Actually it may have been something like some sort of hosting service, which apparently does do this sort of thing, or perhaps Amazon Cloud [4], which does as well.
In any case, for things specifically designed as online backups, a quick search suggests [5], [6], [7], [8], [9], (I think) [10], (I think) [11] and [12] have this feature. [13] does as well, but I think they're a photo backup service, not a general-purpose one. [14] may as well (I saw suggestions of it but can't find any mention on their site from a quick look).
Comparison of online backup services has a tab for the opposite (sending you a disk to recover a backup) but does mention physical seeding for at least one provider. I would guess many more have such a feature; this is just the result of a quick search. And do note I'm not recommending any of these - in fact one of them seems to be using free GoDaddy hosting, which doesn't exactly inspire confidence that you should trust their service with your backups.
If you're wondering, I found the first result from discussion of such a feature, found out it was called 'physical media upload' from that page and searched for more with such a feature. During that, I found out it was also called 'physical (media) seeding' (far better name IMHO).
You may notice that many of these are directed more at businesses than home users. While this may seem counterintuitive at first, I'm not really that surprised: most home users likely don't have enough they want to back up to make it worthwhile, particularly given the cost of keeping that much backup online. I presume there's also a charge for physical seeding, in addition to the requirement for an external hard drive and the cost and risk of sending it, which also makes it less worthwhile for home users, who may not mind the wait anyway - unless you live here in NZ or other places with very high data charges, even if the upload takes a few weeks it'll be basically free (well, unless the service charges a bandwidth fee).
Note that a number of them appear to require you to use their hard drives. That makes it easier, I guess, since they don't have to worry about funny drives with compatibility problems; it also ensures you have the necessary packaging, and they'll likely have less hassle if the hard drive is damaged when they get it.
Nil Einne (talk) 06:56, 15 August 2010 (UTC)[reply]
Wow! Thanks for that! I didn't know what to search for because I didn't know what it was called. Thanks again, Dismas|(talk) 07:43, 15 August 2010 (UTC)[reply]

Call to improve password security?

There was again much fuss in the media about supercomputers becoming more affordable, and how that makes brute-forcing passwords much easier. However, there is a very easy way to make brute force completely useless: have a maximum number of allowed tries before blocking the account for some time. This is why PIN codes for credit cards can be only 4 digits long. For online or other personal computer-related use this could let someone block the accounts of others by deliberately entering wrong passwords, but even this can be easily solved: don't block the account, but have a cooldown period between accepting consecutive tries. Even forcing a one-second wait after a wrong password can thwart any brute-force attack: one second is long enough to hinder algorithms based on trying millions of combinations every second, but short enough that it doesn't disturb human users. So why is there always so much panic around passwords, and pressure to make them longer and longer, instead of everyone implementing a system like this? --131.188.3.21 (talk) 09:41, 15 August 2010 (UTC)[reply]

Blocking attempted access after a number of wrong tries is very common. However, brute-force attacks rarely work like this - the normal target of brute-force attacks is downloaded password files. These are stored encrypted, but are susceptible to long-timescale attempts to break them. --Phil Holmes (talk) 09:57, 15 August 2010 (UTC)[reply]
If you mean online passwords (e.g. for Facebook) then the speed and latency intrinsic to the system makes brute force impractical and supercomputers irrelevant. Attackers instead cast a broad net, trying dictionary attacks of common passwords. It's tempting to add a cooldown system that escalates with repeated failures - saying "if you make three bad attempts, the delay before you can log in goes to five minutes, and then 30, and then 300, and so forth" - but that has two major problems. Firstly, the attacker attacks thousands of accounts concurrently, so they can be off attacking all the others while the timer runs on one (so you haven't slowed the attacker down at all). Secondly, such schemes afford a great opportunity for denial of service - my cheapo botnet can occasionally try a random (wrong) password on your Facebook account, meaning you essentially can never again log in to Facebook. Other online schemes, like ssh, already have a cooldown. -- Finlay McWalterTalk 10:02, 15 August 2010 (UTC)[reply]
On your last question, about "panic around passwords", Bruce Schneier generally seems pessimistic about password security; he famously advocates that people should have difficult-to-guess passwords but actually write them down and keep the piece of paper in their wallet, because that's hard to steal. Here is an essay of his about the insecurity of all passwords, and the observation that phishing attacks are the attack of today that avoids the need to ever crack a password. Comet Tuttle (talk) 22:15, 15 August 2010 (UTC)[reply]
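
A sketch of the escalating per-account cooldown discussed above, in Python. This is entirely hypothetical - a real system would persist the state and, as noted, still has to worry about the denial-of-service problem:

    import time

    attempts = {}                          # account -> (failure count, next allowed try)
    DELAYS = [0, 0, 0, 300, 1800, 18000]   # no delay for the first two failures,
                                           # then 5 min, 30 min, 5 h

    def try_login(account, password_ok):
        fails, next_try = attempts.get(account, (0, 0.0))
        now = time.monotonic()
        if now < next_try:
            return "cooling down - try later"
        if password_ok:
            attempts.pop(account, None)    # reset the counter on success
            return "ok"
        fails += 1
        attempts[account] = (fails, now + DELAYS[min(fails, len(DELAYS) - 1)])
        return "wrong password"

    # The flaw noted above: anyone able to *attempt* logins on your account
    # can keep the counter maxed out and lock you out indefinitely.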

wget

I mirrored a lot of pages with the -m -k options, but wget crashed just before it converted them to relative links, and the pages I downloaded are now gone from the site I downloaded them from, so I can't run the download again. Everything has been downloaded, it's just all the html files still point to online locations instead of the downloaded ones. Is there any way to make wget convert the links? Or some other program that might? 82.44.54.4 (talk) 11:17, 15 August 2010 (UTC)[reply]

sed can almost certainly do it - rather easily, if all files are from the same domain. sed -e 's:full_http_server:root_of_the_local_dir:g' infile > outfile. Make sure the locations are properly escaped (using : instead of / saves you from having to escape all /s ;-). I'd make sure outfile is in some mirrored directory so as to leave the original files safe and sound. --Stephan Schulz (talk) 07:12, 16 August 2010 (UTC)[reply]
That will work for some websites, but it's pretty fragile. It will fail if the site uses relative URLs of the form "/foo/bar", and in various other situations. -- BenRG (talk) 05:38, 17 August 2010 (UTC)[reply]
I agree that that's a simple 95% solution, but I don't see why it will fail for the use case you described. Relative links should not be a problem to begin with, as they don't point to "online locations". Do you mean local absolute links? That's indeed not something I even knew existed, though yes, that would break a simple text replacement approach. --Stephan Schulz (talk) 07:56, 17 August 2010 (UTC)[reply]
Thanks I'll try that 82.44.54.4 (talk) 11:24, 16 August 2010 (UTC)[reply]
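
The same textual replacement as the sed one-liner, applied across a whole mirror in Python. The base URL and directory name are placeholders, and it inherits the fragility noted above with root-relative links like "/foo/bar", which would need an extra rule:

    import pathlib

    OLD = "http://example.com/"   # placeholder: the mirrored site's base URL
    NEW = ""                      # make links relative to the local mirror root

    # Work on a copy of the mirror - this rewrites files in place.
    for page in pathlib.Path("mirror").rglob("*.html"):
        text = page.read_text(encoding="utf-8", errors="replace")
        page.write_text(text.replace(OLD, NEW), encoding="utf-8")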

What font are you viewing?

Is there any quick way of telling what font you are viewing in a web browser? (I prefer to allow sites to set their preferred typeface.) When I copy and paste into a word processor it just uses the word processor's default. I know you can view page source and dig into the css...was wondering if there's an easier way. 151.203.20.165 (talk) 13:29, 15 August 2010 (UTC)[reply]

Firebug (web development)'s Inspect feature can do this - if you click on an element, it will list all applied CSS styles, making it easy to pick out the fonts. Other browsers may also have comparable extensions or Developer modes which allow this. Unilynx (talk) 13:36, 15 August 2010 (UTC)[reply]
Thanks; Firebug seems to work quite well. 151.203.20.165 (talk) 14:04, 15 August 2010 (UTC)[reply]

Device drivers and hard disk data recovery

Firstly, after a recent hard disk failure I ordered a new hard drive and installed it in my computer, then proceeded to install Windows XP Professional (Service Pack 2) onto the new hard drive. This got my computer up and running again...mostly. Windows works, and I've been able to get some things functioning...the only problem is that none of my computer's hardware seems to be installed anymore. As far as my computer knows, it has no internal network card and no internal graphics card (or at least not the one installed and working before the crash) or sound card, leaving me with very limited graphics options and no sound or internet options/devices in Control Panel. I'm guessing that this is because the device drivers need to be reinstalled (correct me if I'm wrong). However, the Dell CD labeled "Drivers and Utilities" evidently (as I discovered upon running it) has nothing on it except a computer diagnostic program, and not the "drivers and utilities" I need. Where do I obtain the drivers (again, assuming this is why half my computer hardware doesn't seem to exist) and get all my hardware working again?

After this is done, I would like to recover the data from the non-working hard drive and copy it all (including installed programs, data within those programs, files, pictures, etc.) onto the new hard drive. The hard drive originally failed when I was running Windows Live Messenger, Mozilla Firefox, and GR2Analyst (see previous posts). I had installed the radar program that day, but it was running just fine for the few hours up until the hard disk failure. I'm fairly certain that the old hard drive did not suffer a head crash, because when I plug it into a SATA thing (this thing is the one I have) the hard disk spins without the click of death (it sounds like it's running normally), but I could be wrong considering my inexperience with such issues. Previously when trying to start up the hard drive while it was in the computer, it would start up to the Windows screen where the little blue bars scroll across the screen ([15]), which would sit there for about 5 minutes before displaying a blue error screen that said among other things "UNMOUNTABLE_BOOT_VOLUME", and automated diagnostics on the hard drive gave the error codes "Error 0142 Msg code 2000-0142 Unit 1 Self test status 79" (or something similar), "0F00:0750 Disk_0 Self-test Read Error" and "00F0:0244 Disk_0 Block 6468425 Can't read, replace disk or remove write protection". When I plug the old hard disk into the computer using the SATA cord, the computer recognizes it by saying "your new hardware is installed and ready to use", but it doesn't show up in Windows Explorer. Given all of this information, what might be wrong with the old hard drive, and what can I do to copy the information from the old hard drive to the new hard drive? Sorry about the huge wall of text, and a big thanks to anyone who makes it through it all and answers. There also might just be a barnstar or two in it for the answers that end up working (one for the first paragraph, and another for the second). Ks0stm (TCG) 16:35, 15 August 2010 (UTC)[reply]

Assuming your actual PC is a Dell (based on the fact you have a Dell driver CD), the first part of your question should be quite easy. Just go to Dell's website and click technical support in the top right corner, then home users, and finally drivers and downloads (or just click here!). Choose to enter a service tag and type in the code printed somewhere on your computer (either on the back or under a front flap are the usual places). They'll give you a page with ALL the drivers for your computer for you to download. Unfortunately sometimes the same model of computer would have come with a few hardware choices, so they'll offer multiple network card drivers; unless you know exactly which one is yours it'll be trial and error, but it shouldn't take too long.
The second part of your question is a bit more tricky, but the fact Windows started to boot up previously is promising. When you plug it in and it says "Installed and ready to use", go into Control Panel, then Administrative Tools, and then Disk Management. You should see the new disk as one of the options in the bottom right, and hopefully it'll be as simple as the disk not having a drive letter, so you can right-click it, select "Add/change drive letters" and just give it one. After doing so HOPEFULLY it'll appear in "My Computer"  ZX81  talk 16:55, 15 August 2010 (UTC)[reply]


To see what hardware MS-Windows thinks your computer has got (from memory, might not be correct),
  • Right-click on "My Computer" and select "Properties..." (or press Windows-Break)
  • This should bring up a dialog box with several tabs, select "Hardware"
  • There should be a "Device Manager" button near the top; press it.
  • A new window will open, titled "Device Manager", with a tree view listing all the hardware.
  • Any hardware with a question mark in a yellow circle does not have the correct driver.
  • Right-click on the hardware, and select "Properties..."
  • A new dialog will open, select "Install Drivers..."
  • A file selection dialog box will open, select the .inf file you have downloaded for that device, and follow any instructions displayed.
  • repeat for the other devices.
CS Miller (talk) 18:45, 15 August 2010 (UTC)[reply]

echo time

Resolved

How do you echo the time in a batch file? I tried "echo time /t" and it didn't work. I just want it to display the time, not log it to a file or anything. 82.44.54.4 (talk) 19:58, 15 August 2010 (UTC)[reply]

Try echo . | time /t  ZX81  talk 20:20, 15 August 2010 (UTC)[reply]
echo . | time (without the /t) was the old way of doing it, before the /t switch became available. -- 78.43.71.155 (talk) 21:10, 16 August 2010 (UTC)[reply]
It said "the process tried to write to an non-existent pipe" 82.44.54.4 (talk) 20:22, 15 August 2010 (UTC)[reply]
You don't need to echo anything: you just need to execute the "time" program: time /t in the batch script. Nimur (talk) 20:56, 15 August 2010 (UTC)[reply]
That works 82.44.54.4 (talk) 21:17, 15 August 2010 (UTC)[reply]
Alternatively, you can do echo %time% --Bavi H (talk) 23:30, 15 August 2010 (UTC)[reply]

network cards

Do network cards and routers and such eventually wear out with use, like say a hard drive would? Or could you download at the maximum rate for years without any degradation? 82.44.54.4 (talk) 20:15, 15 August 2010 (UTC)[reply]

Hard drive failures are nearly always mechanical in nature. With finely balanced parts rotating at high speeds and close tolerances between head and disk, it is hardly surprising that hard drives are susceptible to mechanical problems. Network cards and routers are made of solid-state components that, in a perfect world, would last indefinitely. However, manhandling the card, changes in temperature (including that caused by a build-up of dust), power surges, etc. can all damage electronic components. Astronaut (talk) 00:40, 16 August 2010 (UTC)[reply]
Actually, electronic components do wear out as well. Apart from suffering manhandling damage, the moving parts in a circuit (the electrons) cause the components to wear out by a process called electromigration. How fast this happens depends on the current densities used in the circuits and how hot they get - so it's not easy to predict. It could be a few years or many tens of years.--Phil Holmes (talk) 09:37, 17 August 2010 (UTC)[reply]
A common and well-known point of failure of some electronic components in recent years is leaking electrolytic capacitors or, as our article calls it, capacitor plague. This primarily affects motherboards, video cards and other things which deal with reasonable amounts of power, particularly, I think, if that includes DC-DC conversion. It's probably not that likely to occur in a network card or router, I would guess. Nil Einne (talk) 08:37, 19 August 2010 (UTC)[reply]

Weird PDF

This PDF has some weird things going on in it.

For one thing, I can't (on a Mac, with multiple PDF programs) search it at all. It doesn't seem to have any relevant security bits set when I look at its properties in Adobe Reader.

When I try to copy and paste text, I get just gobbledegook as a result. I would paste some but it seems to be killing the Wiki editor with its crazy gremlin characters.

What's going on here? I'm both 1. just curious (I haven't run into this before), and 2. trying to search it for a reference and am frustrated! Is it intentional obfuscation, or is it an artifact of the PDF producing program? (Apparently it was created with GNU Ghostscript 7.05, which I wouldn't think would cause a problem.) --Mr.98 (talk) 23:21, 15 August 2010 (UTC)[reply]

I downloaded it on my Mac using Safari (both current), and it opens fine. You seem to be seeing all of the formatting characters that Acrobat uses to structure text (that's the gobbledygook), which makes me think that (a) you have a corrupt download or (b) your version of Reader is out of date or its plist is corrupt. Try downloading it again. (Though I'll say, from a brief glance at the text it reminds me of a cartoon I once saw: an alert dialog that says "This document has 2,432 spelling errors and is really boring. Print anyway?" - maybe your computer is engaging in civil disobedience...) --Ludwigs2 00:16, 16 August 2010 (UTC)[reply]
Ludwigs2, I believe you're missing the point. Just looking at the pages in a pdf viewer doesn't reveal the abnormality. You have to try to do something that requires the actual text, and not just the appearance of text. This file seems to be using a technique similar to the one described at http://spivey.oriel.ox.ac.uk/corner/Obfuscated_PDF - it contains a bunch of custom fonts, in which each character looks like a completely different character than the one it's supposedly representing. 69.245.226.104 (talk) 00:36, 16 August 2010 (UTC)[reply]
Right, I should clarify. I can read it fine. But I can't copy/paste or search. I, the human, can read it, but the computer cannot. (And it is not a scan.) Anyway, I suspected that something like what 69.245 describes is going on here, but it seems odd to me that this particular document would be obfuscated? --Mr.98 (talk) 00:45, 16 August 2010 (UTC)[reply]
It does say on page 17, "No Derivative Works - You may not alter, transform, or build upon this work." The obfuscation is just typical copyright-fanatic behavior. "People might actually use this instead of passively reading it! Must add obstacles!" 69.245.226.104 (talk) 01:00, 16 August 2010 (UTC)[reply]
It's not just obfuscated on Mac/Safari. I get similar-looking crap in Windows/IE 8. IMHO, it does seem to show an overly paranoid approach (perhaps due to fears of plagiarism). I notice it is a very long document of nearly 500 pages, so if you really need to work with it, why not email Dr Maret and ask her for a clean copy. Astronaut (talk) 01:04, 16 August 2010 (UTC)[reply]
Ah, I see what you mean. Well, it doesn't seem that the document is protected in any way. However, I do notice that it is using custom fonts (I don't know why, since the fonts are not that attractive), and there may be some issue with translating to standard fonts. I don't know if that's accidental or intentional. --Ludwigs2 01:31, 16 August 2010 (UTC)[reply]
It's definitely intentional. I extracted one of those fonts and looked it over. It's too abnormal to be anything but intentional obfuscation. All the glyphs are above U+10000, in an order almost but not quite like ASCII. 69.245.226.104 (talk) 01:35, 16 August 2010 (UTC)[reply]
Yes, and the PDF markup inside it is also mechanically obfuscated (or is the product of some downright whack processing) - rather than the long paragraphs of text interspersed with occasional markup, it has a "move X emit char Y" line for every single character. While this breaks simple text extraction and copy-and-paste, it also breaks any hope of accessibility to the blind and partially sighted, loses any chance of search engines meaningfully indexing it, and breaks the embedded URL on p17. Duh. -- Finlay McWalterTalk 16:26, 16 August 2010 (UTC)[reply]
And all that for a result that any moderately useful OCR software can undo, especially given that the image is available in perfect electronic form... --Stephan Schulz (talk) 16:55, 16 August 2010 (UTC)[reply]
Bizarrely, I was able to find it in Word format, here. Google Scholar, interestingly enough, can seemingly parse the data. --Mr.98 (talk) 22:45, 16 August 2010 (UTC)[reply]


August 16

How much information on web browsing habits is accessible from the main node in a home network?

Without going into too much detail on my particular situation (it's complicated), let's just say that the administrator of the network I use at home has become extremely untrustworthy and has made some threats to me, and I suspect they may attempt blackmail if they can obtain any sensitive or embarrassing information. Setting up a second network is not an option at this time, nor is simply avoiding the network altogether, as I'm a student and have work that needs to be done over the internet while I'm not at the university (where I have to use an insecure wireless network anyway, so it's not suitable for many non-school tasks). And I know what everybody will assume here, but I am a legal adult and the administrator in question is not my parent, so I'm not attempting to subvert any authority with my actions.

My question is, how much information can they intercept or access from the computer that controls the router/wireless access point? I'm pretty sure they can intercept any packets I send, unless I encrypt them, but would they be able to access any lists of web sites I've viewed? I'm not talking about the internet history (though I do clear that), but is there a list of URLs (or server addresses) accessed by my computer saved on the router that they might access? The router in question is an AirPort Extreme, though I'm not certain of any exact model numbers as I didn't set it up. I've done what I can to minimize the possibility of most of the threats made, but I don't know enough about the router options to judge if this one holds any water.

Also, are there any security precautions I should take? I don't run as the root/admin user as a rule, and I've physically unplugged my desktop pc from the network except when I (rarely) need to connect to the internet. My laptop uses the wireless connection though, and I'm not sure of anything else I can do to make it more secure, apart from enabling encryption where I can.

69.243.51.81 (talk) 05:01, 16 August 2010 (UTC)[reply]

1. Obviously, you should move elsewhere. 2. Yes, they can stream to their PC a list of the websites you visit. Comet Tuttle (talk) 05:43, 16 August 2010 (UTC)[reply]
You should consider using a secure tunnel to a trusted proxy server, or a secure tunnel into the Tor network. These will obfuscate your web viewing habits. Note that even if your connection is encrypted, the administrator can see what the destination of that secure tunnel is - that is why you should use a proxy server. The administrator will only be able to tell that you are making encrypted connections to the proxy - they will be unable to trace what the proxy is relaying for you. Your university may host remote-access servers, which you can use as secure proxies. Nimur (talk) 08:16, 16 August 2010 (UTC)[reply]
If you have or can get a Unix shell account from your school or your ISP, you can use the -D option of PuTTY or OpenSSH to turn that into a SOCKS proxy, which you can then use in the same way you'd use Tor (which also runs as a SOCKS proxy). The advantages are that it's much faster, and the proxy (which can see all of your traffic) is administered by your school or your ISP, instead of some random person who happens to be running a Tor exit node. -- BenRG (talk) 19:40, 16 August 2010 (UTC)[reply]
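
Once such a tunnel is running (say, ssh -D 1080 you@university-shell, per the suggestion above), applications still have to be pointed at the local SOCKS port. Here is a Python sketch using the third-party PySocks package; the host names and port are assumptions:

    # Assumes `ssh -D 1080 you@university-shell` is already running locally,
    # and that PySocks is installed (pip install pysocks).
    import socket
    import socks  # PySocks
    import urllib.request

    socks.set_default_proxy(socks.SOCKS5, "127.0.0.1", 1080)
    socket.socket = socks.socksocket   # route new sockets through the tunnel

    # The local admin now sees only an encrypted connection to the SSH host.
    # Caveat: DNS lookups made via getaddrinfo() can still leak locally.
    print(urllib.request.urlopen("https://example.org/").status)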

SAP

What are the advantages of SAP Reporting tool? What are the types of SAP Reports available? Thank you for the answers. —Preceding unsigned comment added by 61.246.57.2 (talk) 05:38, 16 August 2010 (UTC)[reply]

Haskell functions - instances of the Eq class?

I've been told that in general it isn't feasible for function types to be instances of the Eq class, though sometimes it is. Why isn't it feasible in general, and when is it feasible? Surely functions are equal if they return equal values for equal arguments and not equal if they don't? SlakaJ (talk) 07:44, 16 August 2010 (UTC)[reply]

Function equivalence is undecidable in general. You can't compare every return value if the domain is infinite, and even if it's finite, the function might run forever when applied to certain arguments, and you can't (in general) tell whether it will run forever or just slightly longer than you've tried running it so far. You could write an instance like (Data a, Eq b) => Eq (a -> b) that would attempt to prove equivalence or inequivalence by trying every argument in turn, but it would fail (by running forever) in many cases. There are families of functions for which equivalence is decidable—for example, primitive recursive functions with finite domains—but there's no way to express constraints like that in Haskell. -- BenRG (talk) 19:28, 16 August 2010 (UTC)[reply]
Thanks very much SlakaJ (talk) 14:07, 17 August 2010 (UTC)[reply]
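
The point about finite domains can be illustrated outside Haskell too. In this Python sketch, pointwise comparison is only decidable because the domain is finite and both functions halt on it:

    def equal_on(domain, f, g):
        # Decidable only because `domain` is finite and f, g halt on it.
        return all(f(x) == g(x) for x in domain)

    f = lambda n: n % 2
    g = lambda n: n & 1
    print(equal_on(range(256), f, g))   # True on this finite domain

    # Over an infinite domain no such loop can finish, and no general
    # procedure can decide extensional equality of arbitrary functions.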

Maximum # of Cores (i.e. Logical Processors) & Amount of RAM in various Linux Operating-Systems

Hi.

I want to know the maximum number of processing-cores (i.e. logical processors, not physical sockets) and the maximum amount of RAM which each of the following Linux operating-systems can support.

  1. Mandriva Linux One 2010
  2. Gentoo 64-bit Linux
  3. Ubuntu 10.04 Linux 32-bit Server Edition
  4. Ubuntu 10.04 Linux 32-bit Desktop Edition
  5. Ubuntu 10.04 Linux 32-bit Netbook Edition
  6. Ubuntu 10.04 Linux 64-bit Server Edition
  7. Ubuntu 10.04 Linux 64-bit Desktop Edition
  8. Fedora 13 Linux 32-bit GNOME Edition
  9. Fedora 13 Linux 32-bit KDE Edition
  10. Fedora 13 Linux 32-bit LXDE Edition
  11. Fedora 13 Linux 32-bit XFCE Edition
  12. Fedora 13 Linux 64-bit GNOME Edition
  13. Fedora 13 Linux 64-bit KDE Edition
  14. Fedora 13 Linux 64-bit LXDE Edition
  15. Fedora 13 Linux 64-bit XFCE Edition
  16. Debian 5.0.4 Linux 64-bit
  17. Sun Microsystems' OpenSolaris 2009.06
Thank you in advance to all respondents.

    Rocketshiporion
We have articles on all these operating systems (Mandriva Linux, Gentoo Linux, Ubuntu (operating system), Fedora (operating system), Debian & OpenSolaris) but, if system requirements are mentioned at all, it is always to define the minimum requirements and not the maximums. I also took a look at a few of the official sites, but again always the minimum requirements and not the maximums. Most distributions run community forums, so you could try asking there (for example, this post suggests the maximum addressable RAM on 32-bit Ubuntu, without using something called "PAE", is 4GB).
On another subject, it really is not necessary to write your post using HTML markup. Wiki-markup is flexible enough to achieve what you want and shorter to type (for example, simply preceed each line with a # to create a numbered list; no need for all that <ol>...<li>...</li><li>...</li></ol>). A brief guide can be seen on Wikipedia:Cheatsheet. Astronaut (talk) 12:07, 16 August 2010 (UTC)[reply]
PAE is Physical Address Extension. While (absent PAE) a 32-bit OS can address 4 GB of memory, that doesn't mean 4 GB of RAM. That 4 GB address space also has to accommodate all the memory-mapped peripherals, particularly the apertures of PCI devices like the graphics adapter. So, in practice, while you can install 4 GB of RAM in a machine running a 32-bit OS, you'll actually see about 3.3 GB of it. Precisely how much is a function chiefly of the motherboard and the installed adapter cards rather than the OS. For any purpose that needs lots of RAM (where 4 GB these days isn't lots at all) you'd want a 64-bit OS. -- Finlay McWalterTalk 12:23, 16 August 2010 (UTC)[reply]
Number of processors and quantity of RAM is not determined by the distribution - it is determined by the kernel. You can "easily" swap out a different kernel on any of the above systems. All of the above distributions (except OpenSolaris, which uses the Solaris kernel) are default-installed with a Linux kernel version 2.6 (and many will allow you to "easily" substitute a 2.4 kernel if you wanted to). The Linux 2.6 kernel requires SMP support to be enabled if you want to support multiple CPUs, but it can theoretically support an "arbitrary" number of symmetric multiprocessors (if you recompile the kernel, you can specify the maximum number of CPUs you want). The Kernel Configuration Option Reference tells you how to set up SMP (multi-processor) support if you are recompiling your kernel for any of the above. On the other hand, if you are using the kernel distributed with the "default" distribution, make sure that you select an SMP option; the compiled binary will probably have picked a "reasonable" maxcpus parameter. I have several SMP-enabled netbooks running Ubuntu, based on the default "Netbook" distribution - so it's really irrelevant which distribution you pick, if you switch the kernel.
While in theory you can recompile a 2.6 kernel with maxcpus=arbitrarily_large_integer, it is very unusual to see any kernel binary that supports more than 32 logical x86 cores. At a certain point, if you want more than that, you will probably have a custom system-architecture, and should know what you are doing when re-engineering at the kernel level. Here is detailed information for the SMP linux system-designer (almost 10 years old and out of date, based on Kernel 2.2...). The MultiProcessor Specification defines the architecture for x86 cores; there are similar (but usually proprietary) specifications for MIPSes and ARMs and POWERs... and Cell processors. The limiting factor will probably be your hardware - whether your BIOS supports symmetric access to physical memory; whether your CPU architectures have a hardware limitation for their cache-coherency protocol. The Linux kernel will abstract all of this (that is what is meant when the term "SMP" is used); but if the hardware does not support that abstraction, you will need to use a NUMA memory architecture and a multi-operating-system parallelization scheme ("node-level parallelism" - see "why a cluster?") to manage your CPUs, because the actual circuitry does not support true shared-memory programming. With the magic of virtualization, you can make all those operating systems "look" like one unified computer (e.g., Grid Engine and its ilk) - but strictly speaking, these are multiple machines. Though the interface to the programmer is simple and appears to be one giant computer with thousands of CPUs, there is an obvious performance penalty if the programmers choose to pretend that a NUMA-machine is actually a shared-memory machine. Nimur (talk) 16:44, 16 August 2010 (UTC)[reply]
Thank you to Nimur for the information about the maximum cores and RAM being determined by the kernel. Then what is the maximum number of cores and maximum amount of RAM supported by the Linux 2.6.35.2 kernel? And the same in regard to the Solaris kernel? Rocketshiporion Tuesday 7-August-2010, 11:54pm (GMT).
As I mentioned, if you use the default, unmodified SMP kernel distributed with the distributions, the limit is probably 32 CPUs. You can recompile with an arbitrary limit. This will depend on the architecture, too; x86 CPUs use MPS, so 32 seems to be an "upper bound" for the present (2010-ish) system specifications. I suspect that as more Linux kernel hackers learn to love and hate QPI, there will be a major re-engineering effort of the kernel's SMP system (in the next year or two). To learn more about the kernel, consider reading The Linux Kernel, from The Linux Documentation Project (old, but introductory-level); or Basic Linux Kernel Documentation from the folks at kernel.org.
For main memory, x86_64 hardware seems to support up to 44 bits, or 16 terabytes, of physical memory (but good luck finding hardware - motherboards, chipsets, and so on, let alone integrated systems); I've seen sparse reference to any actual hardware systems that support more than 64 GB (recent discussion on WP:RDC has suggested that 96 GB and even 256 GB main-memory servers are on the horizon of availability). This forum (whose reliability I do not vouch for) says that the 64-bit Linux kernels support up to 64 GB with x86_64 and 256 GB with AMD/EMT processors. If you want to dive off the deep end, SGI/Altix supports up to sixteen terabytes of unified main memory in the Altix UV system (at a steep performance penalty). Commercial Solaris ("from Oracle") discusses maximum performance boosts for one to eight CPUs (though does not specify that as a hard upper limit). They also support SPARC, x86, and AMD/EMT; their performance benchmarks make some vague claims about advanced memory technologies for large-memory systems (without specifying a hard upper boundary). OpenSolaris uses an older version of the Solaris kernel; I can't find hard limits on the upper bounds for number of CPUs or RAM (but suspect they're awfully similar to the Linux limitations). Since you're asking, here's why you'd want to use Solaris instead of a Linux kernel: fine-granularity control of SMP utilization. The system administrator can control, to a much greater level than in Linux, the specific processes that bind to specific physical CPUs, and how much time each process may be allocated. The Linux kernel basically allows you to set a priority and a "nice" value for each user process, and then throws them all into a "free-for-all" at the kernel scheduler. Solaris gives you much more control (without going so far as to be a real-time operating system - a trade-off that means a little bit less than 100% control and a whole lot less work on the white-board designing process schedules). I have never personally seen a Solaris machine with more than a gigabyte of RAM (but it's been a long while since I worked with Solaris). Nimur (talk) 07:11, 18 August 2010 (UTC)[reply]
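
None of this reveals the compiled-in limits of a particular kernel binary, but on a running POSIX system (Linux, Solaris) you can at least query what the booted kernel actually exposes - a small Python sketch, not applicable on Windows:

    import os

    cores = os.sysconf("SC_NPROCESSORS_ONLN")    # logical CPUs currently online
    ram = os.sysconf("SC_PAGE_SIZE") * os.sysconf("SC_PHYS_PAGES")
    print(f"{cores} logical CPUs, {ram / 2**30:.1f} GiB RAM visible to this kernel")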

wget

Why is wget v1.12 not available for Windows yet? It was released a year ago. I read something about them not being able to port it; what does that even mean? I'm using v1.11 and it works OK on Windows, but I want the new CSS support in version 1.12 82.44.54.4 (talk) 11:16, 16 August 2010 (UTC)[reply]

There are several posts about the Win32 native port of 1.12 on the wget mailing list. This one seems to explain it best - it seems the core of 1.12 introduced some changes that require individual platforms to adapt, and for Win32 "no one did the work." -- Finlay McWalterTalk 12:14, 16 August 2010 (UTC)[reply]
Adding to what Finlay wrote, if you're really desperate for a Win32 version of 1.12 then you can get a development version here, although as with all pre-compiled binaries, use at your own risk (they do also include the build files, so you could probably compile it yourself if you wish).  ZX81  talk 13:42, 16 August 2010 (UTC)[reply]
Cygwin's wget is version 1.12. -- BenRG (talk) 05:29, 17 August 2010 (UTC)[reply]

Problem with Google Chrome

Every time I type certain Chinese characters in the pinyin input method Google Chrome crashes! Why is that? Kayau Voting IS evil 13:51, 16 August 2010 (UTC)[reply]

It's a computer bug. --Sean 18:15, 16 August 2010 (UTC)[reply]
For more technical information, see Google Chrome's bug-report - Chrome has had a long history of IME problems. It seems that Google Pinyin and Google IME might help. What IME are you using? Nimur (talk) 20:21, 18 August 2010 (UTC)[reply]

RAM

hi all, this is silly but i have a problem in understanding what the RAM actually does. Why do we always prefer a RAM of bigger size? what's the use? what's the difference between the RAM and the processor?```` —Preceding unsigned comment added by Avril6790 (talkcontribs) 14:40, 16 August 2010 (UTC)[reply]

The processor does lots of calculations (everything a computer does is arithmetic once you get down to the lowest levels). The RAM is for storing the instructions for the calculations and the data those calculations are being done on. It is much quicker to read and write information to RAM than to the hard drive, but if there isn't enough RAM to store all the instructions and data that the processor needs or is likely to need in the near future, then it will have to use the hard drive (the "swap file", to be precise) to store the extra, and that slows everything down. --Tango (talk) 15:04, 16 August 2010 (UTC)[reply]
RAM (Random-access memory) is short-term working space. Its size (in gigabytes) is a measure of how much information can be kept easily accessible. (Programs are themselves information, so even if a program isn't manipulating all that much, it can still take up space on its own.) Hard drives are bigger and more permanent, but immensely slower, because they have moving parts.
The CPU (Central processing unit) manipulates the contents of RAM. A faster CPU can perform computations more quickly.
Nothing could happen without either of them. As it happens, these days the performance of personal computers for most tasks is limited by RAM size, because modern applications tend to be quite memory-hungry, people like to do tasks involving a large amount of data, and people like to leave a bunch of applications open at once. CPUs spend a great deal of time twiddling their thumbs, waiting for more information, and if the information is coming from RAM, rather than the hard drive, less time is wasted. Paul (Stansifer) 16:35, 16 August 2010 (UTC)[reply]
The "classical" analogy I use in explaining RAM, processing speed, and hard drives (which are all intertwined in practical usage) to people not very computer literate is as follows: imagine you are working at a desk, and your work consists of reading and writing on lots of paper. The desk has deep drawers that contain all of the stored paper you use. That is your hard drive. To use the paper, though, you have to put it on the surface of the desk. The size of the surface is your RAM. Once it is on the surface, there is a limit to how fast you can read, write, edit, whatever, as you go over the paper. This is your processor speed. It is an imprecise analogy in many ways, but perhaps it will be useful as a very basic approach to it. If the surface of the desk is too small, you're constantly having to use the drawers. This slows you down. If the surface is very large, you can have a lot of paper on top to access whenever you want it. If you yourself are quite slow, everything takes longer. And so on. "Faster" RAM involves you being able to move things around quicker once you have it on the surface of the desk. A "faster" hard drive means you can get things in and out of the drawers quicker. A multiple-core processor is kind of as if you, the worker, had been replaced by two or three people all working simultaneously (the main difficulty being that you can't usually all work on the same part of the same problem at once). --Mr.98 (talk) 22:38, 16 August 2010 (UTC)[reply]
Actually, this is not a bad analogy for someone who is knowledgeable about computers, 98! --Ouro (blah blah) 06:00, 17 August 2010 (UTC)[reply]

I'm trying to fix a problem on a friend's computer. The main symptom is that, on Firefox, links on certain web pages don't work. The most notable of these is Google. Clicking on any result from a google search will cause the tab to say "loading" and the status bar to say "waiting for..." without any result. Apparently this also happens on other sites (but we couldn't find one to replicate this). This is a problem limited to Firefox, since I tried K-meleon and that works. Oddly, there is no IE on this computer; the application seems to have been accidentally deleted somehow? My friend suspects this is somehow relevant, but I doubt it, since I doubt Firefox depends on IE or any of its dlls, or if it does, K-meleon would too. One point that might matter is that the problem existed on an older version of Firefox, and persisted after the update somehow. I tried disabling all add-ons, which didn't help. Any idea what causes this, or what to experiment with? (Supposedly the same problem is causing a general slowing down of browsing, but that might just be confirmation bias.) Card Zero (talk) 16:58, 16 August 2010 (UTC)[reply]

Go to Tools->Options->Network->Settings and try various options in there (if it isn't currently set to auto-detect, try that first; there may also be instructions from whoever supplies your internet connection on what those settings should be). I can't think why problems with the network settings would cause the exact symptoms you describe, but they could explain similar symptoms, and it would certainly explain why it works in one browser but not another. --Tango (talk) 17:09, 16 August 2010 (UTC)[reply]
OK, auto-detect didn't help. I put it back to "use the system proxy settings". I'm not quite sure why this sort of thing would prevent links from working in google, while not preventing browsing as such. One can copy the links and paste them into a new tab, and that works; or perform a google search, quit (while saving the tabs) and restart, and the links on the google page work when it reappears that way. Meanwhile, my friend attempted a new install of Internet Explorer, and Avast has noticed the new file and reported it as a trojan ("Win32:Patched-RG [Trj]") - is that probably a false positive, or should I react to it? Card Zero (talk) 17:33, 16 August 2010 (UTC)[reply]
I'm perplexed by what you mean by "a new install of Internet Explorer". All versions of Windows (for much more than a decade) come with Internet Explorer and it is, essentially, impossible to remove (the most that can be done is to hide it). Some of the later versions of IE are optional downloads, but even then you generally get them using the Windows Update mechanism, or at least as a download from Microsoft's own site. If your friend has done anything else (like type "internet explorer download" into Google and blithely download whatever that finds) then that's sending him off into a vortex of malware and pain. The fact that Google doesn't work in Firefox is also curious, and leads me to wonder whether the system already has malware on it (redirecting search traffic is a common trick malware authors like to do). It sounds like this system needs a thorough spyware/malware/virus cleansing session. -- Finlay McWalterTalk 18:57, 16 August 2010 (UTC)[reply]
Oh, well there was no executable in the IE folder - that part can get deleted, right? - so he sought out an installer from Microsoft and ran it. I doubt it did much more than put the executable back in the folder. It's a fair point that this could in fact have been malware; it's now sequestered in Avast's vault, anyway, since nobody here actually wants to use IE. I'm going to search for rootkits with Rootkit Revealer when Avast has finished a thorough scan. It was run last week, and has found more malware since then, so that's pretty bad. Your advice on further free cleaning tools is welcome. To add insult to injury it appears to be shitty malware that can't even redirect properly. 86.21.204.137 (talk) 20:52, 16 August 2010 (UTC)[reply]

The next day

So I spent most of a day trying to fix that, and came home. All we had achieved was to do a thorough scan with Avast, completely uninstall Firefox, add a couple of alternative browsers, and install Firefox again. Now, apparently, the computer is crashing a lot, and both Opera and the re-installed Firefox are suffering from the same problem with google links, although K-meleon is mysteriously unaffected. Any advice on what to (tell my friends to) try next?
I did attempt to use Rootkit Revealer, and it found five discrepancies, then it refused to save its log file (invalid volumes, or something) and crashed. At least one of the things it found, some googling suggested, was a false positive. The others sounded like harmless things (although, perhaps, harmless hijacked things), and it was after midnight at this point so we said "that's probably fine" and did nothing. What I wonder is: are rootkits actually noted much in the wild, or is this line of investigation probably a wild goose chase? Card Zero (talk) 23:04, 17 August 2010 (UTC)[reply]

Automated text input program

Hi, does anyone know if there is a [freeware] script/software that can input pre-written text into a browser running Javascript? My knowledge is very limited on this topic, so apologies if I'm not making sense to more erudite users. Basically, in a text field, I want to write something, wait for a few seconds, write something else, wait again, and write something else again, but all automated on a continuous loop, obviously. Thanks very much in advance and I will check this section periodically if you require any more clarification. Thank you! 81.105.29.114 (talk) 17:37, 16 August 2010 (UTC)[reply]

You mean something like Google Docs - basically a web-version of MS Office? -- kainaw 17:53, 16 August 2010 (UTC)[reply]
Nope :-) - The thing I'm thinking of is kinda like a macro script for cutting down repetitive manual input, but completely automated, with a setting that allows a few seconds' delay in between each input. But thank you anyway. 81.105.29.114 (talk) 18:06, 16 August 2010 (UTC)[reply]
This would be straightforward to write in Greasemonkey, but I'm not aware of any canned solution to this particular problem. --Sean 18:18, 16 August 2010 (UTC)[reply]
That could work, thanks. Alternatively, is there a simple program that could enter predetermined text into a field, rather than a plug-in running inside the browser? 81.105.29.114 (talk) 18:42, 16 August 2010 (UTC)[reply]
AutoIt maybe? -- 78.43.71.155 (talk) 21:06, 16 August 2010 (UTC)[reply]
I will try that out too, sir, thank you. 81.105.29.114 (talk) 21:13, 16 August 2010 (UTC)[reply]

random number generator problem?

I was bored on a long car trip a few days back and, using a TI-89 calculator, I wrote a program where the calculator would use its built-in random number generator to select either the number 1 or 2. If it selected 1, it would increment a counter. The program would run this loop 10,000 times and give me the value of my counter, thus telling me how many times the random number was 1. The results I got were very interesting, and I was wondering if anyone could tell me why. I ran the program 20 times and the results were as follows...

Test 1, 50.64% #1
Test 2, 52.33% #1
Test 3, 51.73% #1
Test 4, 50.72% #1
Test 5, 51.02% #1
Test 6, 49.97% #1
Test 7, 50.92% #1
Test 8, 51.07% #1
Test 9, 52.02% #1
Test 10, 51.78% #1
Test 11, 50.63% #1
Test 12, 51.00% #1
Test 13, 51.15% #1
Test 14, 50.87% #1
Test 15, 50.25% #1
Test 16, 50.91% #1
Test 17, 50.80% #1
Test 18, 51.23% #1
Test 19, 50.82% #1
Test 20, 51.01% #1

There seems to be a very real bias towards 1 vs 2, in that #2 was selected more only 1 time out of 20, though #1 never ended up being selected more by a very huge margin. Is there a problem with the built-in random number generator, or is this amount of testing not enough to be statistically significant? Googlemeister (talk) 18:37, 16 August 2010 (UTC)[reply]

Exactly what did you use to generate the random number? -- kainaw 18:57, 16 August 2010 (UTC)[reply]
Your calculator won't be generating true random numbers, but rather pseudorandom numbers - basically they seem random, but they're not. However, I really think you need a larger sample to say anything conclusive about the randomness: although you're generating 10,000 random numbers per test, you're only comparing 20 results, and the average of those 20 is 51.0435%, which (to me) is still pretty close to 50%, so I don't think there's anything strange about it (yet).  ZX81  talk 19:07, 16 August 2010 (UTC)[reply]
I am not sure I agree. Only 1 out of 20 on what should be a 50/50 shot is a probability on the order of 2^19, right? 1 in 500,000? Googlemeister (talk) 20:34, 16 August 2010 (UTC)[reply]
I am betting that one of our compatriots at the Math Desk could tell us for sure, using all that fancy statistics for detecting randomness that has been developed from Pearson's chi-square test onward. --Mr.98 (talk) 22:21, 16 August 2010 (UTC)[reply]
I admit it has been a good many years since I've looked at this sort of stuff, but I'm sure I remember that when it comes to randomness, having a big enough sample is very important because it's random (I know this is the worst explanation ever!). Although you would eventually expect the averages to be 50/50 with a big enough sample size, if something is truly random then getting 20 x 1 in a row is just as likely as getting 20 x 2; it's just not what you'd expect, but it is nevertheless random. With a big enough sample size you could expect the results to be more equal, but otherwise... I'm probably not explaining my reasoning very well, am I? Sorry, Mr.98's idea about the Maths desk is probably a better one!  ZX81  talk 22:37, 16 August 2010 (UTC)[reply]
Depending on the way in which the calculator generates its pseudorandom numbers, and depending on how you implemented it, there may in fact be a bias. My understanding (following Knuth and others) is that many of the prepackaged "Rand" functions included in many languages are not very statistically rigorous. I don't know if that applies to the TI-89 though. --Mr.98 (talk) 22:26, 16 August 2010 (UTC)[reply]
It's also possible that the random numbers are perfectly fine (unbiased, or zero mean, and otherwise "statistically valid" random numbers). But Googlemeister's description of his/her algorithm might not be entirely perfect. A tiny systematic bias screams "off-by-one error" to me - are you sure you normalized your values by the correct amount? That is, if your for-loop ran from 0 to 100, did you divide by 100 or 101? (Similar logic-errors could crop up in other ways). Another probable error-source is floating-point roundoff. The Ti-89 is a calculator - so it uses a floating point representation for its internal numeric values. There are known issues related to most representations of floating-point: if you add a small number (like "1" or "2") to a large number (like the current sum in the loop), you may suffer a loss-of-precision error that is a "design feature" of floating-point representations. The Ti-89 uses an 80-bit binary-coded decimal float format; unlike IEEE-754, this format's precision "pitfalls" are less widely-studied (but they certainly exist). Nimur (talk) 01:01, 17 August 2010 (UTC)[reply]
The chance of 200,000 flips of a fair coin deviating more than 1% from 50% heads is less than 1 in 10^18, so yes, this is statistically significant. Did you write rand(100) >= 50, by any chance? That will be true 51% of the time. -- BenRG (talk) 05:51, 17 August 2010 (UTC)[reply]
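To see how easily that off-by-one creeps in, here is a minimal C sketch of the suspected mistake (not the calculator's actual code; it assumes a rand(n)-style function that returns an integer from 1 to n inclusive, which is how the TI-89's rand(n) is usually described):

  #include <stdio.h>
  #include <stdlib.h>

  /* Stand-in for the calculator's rand(n): an integer from 1 to n
     inclusive (an assumption about the TI-89, for illustration only). */
  static int rand1n(int n) {
      return 1 + rand() % n;
  }

  int main(void) {
      long ones = 0;
      const long trials = 200000;
      for (long i = 0; i < trials; i++)
          if (rand1n(100) >= 50)   /* 51 of the 100 values (50..100) pass */
              ones++;
      printf("%.2f%% ones\n", 100.0 * ones / trials);  /* roughly 51% */
      return 0;
  }

For scale: over 200,000 draws, a fair coin's proportion of heads has a standard deviation of sqrt(0.25/200000), about 0.11%, so the observed 51.04% average sits roughly nine standard deviations from 50% - consistent with BenRG's 1-in-10^18 figure.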
Pseudorandom is really a bit of a misnomer: if it's not truly random, then to what degree is it random? That's the question you are answering here. I remember playing a similar game with the VBasic RNG and finding that, depending on how it was seeded, it LOVED the number 4 (for a single-digit rand() call), so anything involving the term pseudo should be taken with a grain of salt. I would think that a slower-moving, more dedicated system like a graphing calculator would have an especially hard time coming up with non-deterministic randomness without a lot of chaotic user input as a seed. --144.191.148.3 (talk) 13:39, 17 August 2010 (UTC)[reply]
I will have to investigate more in depth next time I have significant downtime. Googlemeister (talk) 13:42, 17 August 2010 (UTC)[reply]
I assume you know how to make a fair pseudo-coin from a biased but reliable coin? Throw twice, discard HH and TT, count HT as H, TH as T. --Stephan Schulz (talk) 13:50, 17 August 2010 (UTC)[reply]
Hey, that's clever, thanks. I never knew that. Comet Tuttle (talk) 17:56, 17 August 2010 (UTC)[reply]
You're effectively taking the first derivative of the coin value. This works if the bias is exactly and only at zero frequency (in other words, a preference for heads or a preference for tails, independent of previous results). If there is a systematic higher-order bias (in other words, if the distribution of heads and tails is pathological and has time-history), you won't actually be guaranteeing 50-50 odds! All you did was high-pass-filter the PRNG. For a coin, this is a non-issue - but for a PRNG, it is a serious issue! Nimur (talk) 18:26, 17 August 2010 (UTC) [reply]
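Stephan's trick is easy to sketch in C (a toy illustration - the 60/40 bias and the use of rand() are arbitrary choices here, not anything from the thread):

  #include <stdlib.h>

  /* A biased but independent coin: heads (1) with probability ~0.6. */
  static int biased_flip(void) {
      return rand() < (int)(0.6 * RAND_MAX);
  }

  /* Von Neumann's trick: flip twice, discard HH and TT, map HT->H, TH->T.
     P(HT) = p(1-p) = P(TH), so the output is fair - provided the flips are
     independent with constant bias, which is exactly Nimur's caveat above. */
  static int fair_flip(void) {
      for (;;) {
          int a = biased_flip();
          int b = biased_flip();
          if (a != b)
              return a;   /* HT -> heads, TH -> tails */
      }
  }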

date

In a batch file, how can I make the date display like "2010 - August"? 82.44.54.4 (talk) 19:46, 16 August 2010 (UTC)[reply]

If all you need is the different formatting, and not the name of the month, you could try:
FOR /F "tokens=1-3 delims=/" %%G IN ('echo %DATE%') DO echo %%I - %%H - %%G
Note that you need to replace the / after delims= with the date separator for your locale (run date /t and see which character separates the numbers), and you might have to shuffle %%I, %%H, and %%G around depending on your locale as well (some use MM-DD-YYYY, others DD-MM-YYYY, etc.)
Also, if you want to try it on the command line, you have to use single % signs instead of %%. -- 78.43.71.155 (talk) 21:01, 16 August 2010 (UTC)[reply]
That locale-shuffling behavior is reason enough not to do this: you will have written an unpredictable and non-portable script whose execution depends on users' settings. It would be preferable to design a system that doesn't rely on such assumptions, if you plan to distribute this script, or use it for anything non-trivial. Nimur (talk) 00:58, 17 August 2010 (UTC)[reply]
You could add some findstr nastiness followed by a few if/else constructs triggered by findstr's errorlevel, assuming that MM/DD/YYYY always uses the "/", DD.MM.YYYY always uses the ".", and YYYY-MM-DD always uses the "-" (and don't forget to catch a "no match" situation in case you run into an unexpected setting). Checking if that really is the case, and coding that nasty beast of code is left as an exercise to the reader. ;-) Of course, if Nimur knows of a solution of the kind that he considers preferable (see his post above), I, too, would be interested in seeing it. :-) -- 78.43.71.155 (talk) 09:14, 18 August 2010 (UTC)[reply]
(Sadly, my solution in this case would be to use Linux. date, a standard program, permits you to specify the output-format, and is well-documented. But that is an inappropriate response to the original poster, who specifically requested a batch-script solution!) One can find Windows ports of date in the Cygwin project; I am unaware of standalone versions. Nimur (talk) 20:17, 18 August 2010 (UTC)[reply]

Trying to get iPhone 3G to connect to home WiFi unsuccessfully

When it says "Enter the password for [my network]," isn't that my router's password, i.e., the password I use to get to the router settings? That's the one that gets my laptop to access my network, but the iPhone keeps saying "Unable to join the network '[my network]'." Thanks. 76.27.175.80 (talk) 22:26, 16 August 2010 (UTC)[reply]

No, it's actually asking for your wireless encryption key (called either WEP, WPA or WPA2), but you should be able to get this by logging into the router.  ZX81  talk 22:30, 16 August 2010 (UTC)[reply]
Thank you! 76.27.175.80 (talk) 22:38, 16 August 2010 (UTC)[reply]
My WPA key is printed on the back of my router.--85.211.142.98 (talk) 05:51, 18 August 2010 (UTC)[reply]

Color saturation on television sets.

In additive color (video/film color resolution) the primary colors are Red, Green, and Blue, and the secondary colors are Cyan, Magenta, and Yellow.

For the longest time, TVs only displayed the primary colors (yielding a saturation of approximately 256 thousand colors). With the advent of HDTV in the early 2000s, however, I heard talk of how TVs would soon display both primary and secondary colors (yielding a saturation of approximately 3 trillion colors).

For years since, though, I heard nothing about this. Not only that, but recently a manufacturer announced that it was moving from RGB displays to RGBY displays (by adding in yellow). Does this mean RGBCMY is dead? Pine (talk) 23:14, 16 August 2010 (UTC)[reply]

I don't think that your use of the word saturation is the common meaning. In any event, our eyes (except for those of tetrachromats) can only see primary colors. The way we detect, say, cyan, is by observing the presence of both blue and green. So there's no obvious benefit to a display having elements that can emit blue+green, but not just one of them. I can imagine having a dedicated way to produce cyan could extend the gamut of a display slightly, but only if there are some intrinsic flaws in the display technology, and probably at a great cost to resolution.
The number of distinct (human-distinguishable) colors produced by a display is only limited by the number of distinct brightness levels for each color element. If I recall correctly, current display technology is able to create adjacent colors that we can barely (if at all) distinguish already, so increasing the number of colors displayed isn't very useful. Extending display gamut would be much more useful, but I don't understand the concept very well myself. Paul (Stansifer) 00:18, 17 August 2010 (UTC)[reply]
You might find the articles Quattron and Opponent process interesting. Exxolon (talk) 00:30, 17 August 2010 (UTC)[reply]
(ec) The RGBY displays are called Quattron. I don't think RGBCMY was ever "alive". You can get a full range of colors with just three primaries because there are just three cone types in a normal human eye. Theoretically you can improve color reproduction by adding primaries beyond three, but not by very much (not by nearly as much as you can by going from 1 to 2 or from 2 to 3). According to the WP article, twisted nematic LCD screens only have 64 brightness levels per primary, for a total of 64³ = 262,144 levels. That might be where your 256,000 figure came from. I have no idea where 3 trillion came from. The main point of adding more primaries is to widen the color gamut, not to increase the "number of colors" (though they would no doubt market it based on the number of colors, if that number happened to be higher than the competition's). -- BenRG (talk) 00:38, 17 August 2010 (UTC)[reply]

FIPS

Can a civilian [legally] use the Federal Information Processing Standard (FIPS) for personal use? What are the pros and cons of using FIPS? On Win7, the computer lets me enable "FIPS" for my Netgear WNR1000. --Tyw7  (☎ Contact me! • Contributions)   Changing the world one edit at a time! 23:38, 16 August 2010 (UTC)[reply]

It looks like the WNR1000 implements FIPS 140-2. That's for communication between compliant equipment. So it's not a pro-or-con thing, it's a matter of whether you need to connect to a FIPS 140 compliant counterpart. -- Finlay McWalterTalk 00:01, 17 August 2010 (UTC)[reply]
So if my computer supports "FIPS" I should enable it? --Tyw7  (☎ Contact me! • Contributions)   Changing the world one edit at a time! 00:56, 17 August 2010 (UTC)[reply]
You have no need to enable it. FIPS is a sort of "audit" to automate and accredit that the equipment meets certain federal requirements for information security. In and of itself, FIPS does not secure any information; it just verifies whether your router is capable of meeting certain standards. You can think of it as a "standard test" that complies with a government regulation. Nimur (talk) 23:08, 17 August 2010 (UTC)[reply]

August 17

Hardware Accelerated AES Encryption

How does hardware accelerated encryption work? The processor performs the same algorithm whether it's done in hardware or software, so how is it that hardware acceleration is faster? [EDIT:] I have an Intel Core i5 processor, just so you know. --Yanwen (talk) 00:42, 17 August 2010 (UTC)[reply]

See Parallel computing. By offloading work to a specialized hardware unit, the CPU is free to do other work while a peripheral device performs the encryption. This can actually speed up end-to-end processing for a workflow (even if the actual encryption is slower than it would have been on the CPU). It is also possible that the specialized peripheral uses some hardware, like vector processing or SIMD, to run the cipher in fewer clock-cycles than a general-purpose CPU. In either case, offloading the work from the CPU can also increase the throughput capability of the system (which is a different performance metric than single-job execution time). The exact speedup or throughput improvement depends entirely on the characteristics of the workflow and the load. For a highly-utilized server that performs thousands of encryptions per second, such accelerators are probably a good value; for a personal computer, you would rarely see any worthwhile benefit. Nimur (talk) 00:54, 17 August 2010 (UTC)[reply]
So there is a separate hardware unit just for the AES operations... How is this different from your typical multi-threaded application? Wouldn't you get the same performance boost by using more threads? --Yanwen (talk) 01:24, 17 August 2010 (UTC)[reply]
Threads have to be executed on hardware. Creating more threads does not help unless there is idle hardware available to execute the thread. As for why you would want to use specialized cryptographic hardware instead of just adding more general purpose cores, there could be two reasons. (1) If you know exactly what the hardware is going to be doing, you can make it faster and smaller than general purpose hardware. (2) Cryptographic keys can be stored in tamper-resistant hardware so the unencrypted data goes in, the encrypted data comes out, and the general-purpose processors do not have access to the keys. Jc3s5h (talk) 01:55, 17 August 2010 (UTC)[reply]
Is this feature only available to Core i3 and above? How about Core 2 Duo? Is specialist software needed? --Tyw7  (☎ Contact me! • Contributions)   Changing the world one edit at a time! 03:31, 17 August 2010 (UTC)[reply]
AES instruction set lists the processors that support it. There are no Core 2 Duos or Core i3s on the list, and a couple of Core i5s are excluded also. Software needs to be rewritten to use the new instructions. The article lists some programs that have been updated. -- BenRG (talk) 08:57, 17 August 2010 (UTC)[reply]
Algorithms implemented directly in silicon are faster. Imagine your microprocessor didn't have a built-in instruction for integer addition, and you wanted to perform that operation on the contents of a couple of registers. You would have to implement an adder manually using primitive operations the processor did support. Each time you take the AND or XOR of registers A and B and store the result in register C, the processor has to decode the instruction, check that the previous operations writing to A and B have completed and wait for them if not, dispatch the values to the appropriate execution unit, and deliver the result to any pending instructions waiting for the write to C. The bookkeeping takes orders of magnitude more time than the logical operation itself. You'd probably need ~100 instructions and 50–100 cycles to add two 32-bit registers by this process, and multiplication would be far worse. In contrast, if there are ADD and MUL instructions implemented in silicon, the processor just has to get the source registers once, send the bits to the silicon gates, and get the result from the other end. The intermediate bits flow directly to the gates implementing the next step, without any of the overhead. That's why AES is faster with specialized instructions. AES makes use of operations, such as finite field multiplication, that have to be laboriously simulated on most microprocessors.
Parallelism is not really the issue. Depending on how Intel implemented AES, it might be possible to run an AES computation in parallel with operations like integer multiplication, but few applications are going to bother to do this. Maybe if you were simultaneously encrypting and computing a cryptographic hash of a message you could write a single function that did both and save some time. But this definitely isn't what gives you the speedup on raw benchmarks of hardware vs software AES. -- BenRG (talk) 04:17, 17 August 2010 (UTC)[reply]
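To make BenRG's adder thought experiment concrete, here is a hedged C sketch of addition built only from AND, XOR, and shifts (a compact loop rather than the unrolled ~100 instructions, but the same idea):

  #include <stdint.h>

  /* Add two 32-bit values using only bitwise operations - the kind of
     laborious simulation a CPU without an ADD instruction would need. */
  uint32_t add_by_hand(uint32_t a, uint32_t b) {
      while (b != 0) {
          uint32_t carry = a & b;   /* positions where both bits are 1 */
          a = a ^ b;                /* sum without carries */
          b = carry << 1;           /* carries move one position left */
      }
      return a;                     /* loops at most 32 times */
  }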
Algorithms implemented in silicon are only faster than software if they are faster. (This is a truism, but BenRG's statement above propagates a common misconception that hardware accelerators somehow provide "magic speed-boosts"). There are loads of examples where a hardware-accelerator unit is actually slower than a CPU performing the same task. It depends on the implementation and the task. One problem with specialized hardware units is that they are rarely built on the latest-and-greatest CMOS process technology, so they can't be clocked as fast as an Intel CPU. (See these Discretix AES cards at 220 MHz). In that case, they are only faster at executing tasks if the work done per clock by their specialized instructions outweighs their slower clock rate compared to the CPU. But there are lots of reasons why a "slow" hardware accelerator might still be "better" - it might consume less power; generate less heat in a data center; increase throughput; reduce high-level software network transactions; it might be cheaper to purchase/operate according to a "dollars-per-million-transactions" analysis; and so on. And, in many cases, because it is specialized for a few important operations, a hardware accelerator does actually decrease end-to-end processing time. As far as parallelism, I can think of a perfect example - in fact, probably the most common example - let the CPU process data, and then encrypt it. If the CPU needs to time-share between processing and encrypting, it is slowed down significantly (by an amount calculated with Amdahl's law); but if the CPU can spend 100% of its time processing, and pipe data to an encryption accelerator, you have created a deep pipeline, improving throughput. Nimur (talk) 18:18, 17 August 2010 (UTC)[reply]
I was talking specifically about the AES instructions in newer Intel CPUs. I think the original poster was specifically interested in those instructions, though I see now that the original question you replied to doesn't say anything about that. So I think we're both right. -- BenRG (talk) 19:45, 17 August 2010 (UTC)[reply]
Ah, yeah. I wasn't even thinking about on-chip acceleration; I think the OP clarified to mean specifically the Intel AES instruction set; details are provided at the official Intel site. I have a feeling those extensions will put a lot of encryption-peripheral-manufacturers out of business... Nimur (talk) 21:03, 17 August 2010 (UTC)[reply]
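For flavor, encrypting one block with those instructions looks roughly like the following C sketch, using the compiler intrinsics (assuming gcc or similar with AES-NI support enabled; the 11 round keys are taken as given, since key expansion is a separate step):

  #include <wmmintrin.h>   /* AES-NI intrinsics */

  /* AES-128: encrypt one 16-byte block, given 11 pre-expanded round keys. */
  static void aes128_encrypt_block(const __m128i rk[11],
                                   const unsigned char in[16],
                                   unsigned char out[16]) {
      __m128i b = _mm_loadu_si128((const __m128i *)in);
      b = _mm_xor_si128(b, rk[0]);            /* initial AddRoundKey */
      for (int i = 1; i < 10; i++)
          b = _mm_aesenc_si128(b, rk[i]);     /* one full AES round per instruction */
      b = _mm_aesenclast_si128(b, rk[10]);    /* final round (no MixColumns) */
      _mm_storeu_si128((__m128i *)out, b);
  }

Each _mm_aesenc_si128 is a single instruction doing what would otherwise be a long sequence of table lookups and XORs in software - which is the whole point of the extension.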

Computer -> TV

I have a computer, want a TV, and don't want to buy a DVD player (I used to just watch DVDs on my laptop, but the screen is a tad small). If my laptop is the Apple MacBook Pro from mid-2009 and I buy the adapter to HDMI, can I plug into HDMI on my TV and watch movies and stuff on my TV from my computer?--173.58.234.169 (talk) 03:05, 17 August 2010 (UTC)[reply]

Most computers have an HDMI port. Even my Dell Studio 15 has one. If your computer has that port, just buy an HDMI cable and connect the computer to a TV that ALSO has the port. Note: your TV has to have an HDMI port (found on most HD TVs, not old/cheap ones) for this to work. Even if your computer has an HDMI port or you buy an adapter, as long as your TV doesn't have the port, all your efforts are futile. The easiest way is to get a cheap DVD player (about £10-£20), which works with the standard RGB ports or the extremely old SCART port found on most TVs (even very old ones... or at least most very old ones I've seen). --Tyw7  (☎ Contact me! • Contributions)   Changing the world one edit at a time! 06:55, 17 August 2010 (UTC)[reply]
MacBook Pros do NOT have HDMI ports. I am 100% certain about this. I would need an adapter for whatever the thing is to HDMI and it means I might need to run it through another adapter too if that makes sense. So the question is, is using adapters feasible knowing Apple does not include HDMI ports?--173.58.234.169 (talk) 03:58, 17 August 2010 (UTC)[reply]
Our Macbook Pro article verifies that no MacBook Pro has HDMI, but Mini DisplayPort instead. Googling macbook pro hdmi yields several adapters that seem to cost around US$5. Comet Tuttle (talk) 05:40, 17 August 2010 (UTC)[reply]
You might have to consider how to get the sound to your TV - "...older 2009 line of MacBooks and MacBook Pros are unable to provide an audio signal through the Mini DisplayPort, and only do so over USB, Firewire, or the audio line out port instead (the April 2010 line of MacBook Pro, however, supports this)." Astronaut (talk) 09:32, 17 August 2010 (UTC)[reply]
I don't use HDMI, just the regular VGA (with a MacBook, but really the same difference, albeit with a different converter), and the audio thing is not such a big deal — you just run it out of the Audio Out port. But yeah, all of this is very plausible and super easy to do in my experience. I watch Netflix InstantWatch and so on off of the TV all the time. Just make sure the TV has the right input ports. Most of them these days have VGA as well in my experience. I'm not sure there's any advantage to using HDMI or VGA in this particular situation? The resolution of the monitor mirrors what is on the laptop, which is already high-def. --Mr.98 (talk) 13:33, 17 August 2010 (UTC)[reply]
I am buying the TV in the near future. I also want to use the InstantWatch in addition to DVDs, which is why the need for a cord. I would have assumed that quality is lost with regular VGA though, and do newer TVs even have it if they have HDMI (I have been so focused on HDMI, I haven't even checked)? I find a lot of retailer websites so cryptic about which inputs and outputs a model has that it is difficult. I have to admit going to the store is not a ton better, except you can look. As to audio, what kind of cord is needed for that? Thanks.--173.58.234.169 (talk) 18:53, 17 August 2010 (UTC)[reply]
While most retailer websites are pretty thin on details like exactly which ports are installed (and, I have noticed, they are sometimes wrong), manufacturer websites are a better source of information about their products. In my experience, not many TVs have a separate audio-in, but where I have seen audio-in it has always been via left & right RCA connectors (but maybe that's a European thing). The ideal way would be to find a converter box to take video and audio from your MacBook Pro and send it to the TV over one HDMI cable. This MacWorld article, which discusses many of the issues, recommends some specific products. Astronaut (talk) 09:10, 18 August 2010 (UTC)[reply]
When I use VGA on my TV it looks pretty much identical to the input. If it is worse quality or refresh rate or colors or whatever, I certainly can't tell. But I'm not a hi-fi style buff or anything. The resolution is identical on my TV/laptop combination, which is all I am going for (it's higher than the InstantWatch resolution/refresh itself). I'm pretty sure it's pretty standard these days for at least a VGA port and standard stereo plug (e.g. "headphone") input to be available on newer LCD TVs, at least when I went looking at them. (I ended up just buying one at Costco, after hunting around — they had the best prices, head and shoulders above places like Best Buy.) More tricky in my experience is audio output, which is often an optical plug, which requires you to use compatible speakers or to get some kind of converter. --Mr.98 (talk) 12:53, 18 August 2010 (UTC)[reply]

computer file with pdb extension

I got a computer file with a pdb extension. I don't know how to open it. Any ideas? thank you. 124.43.25.100 (talk) 08:17, 17 August 2010 (UTC)[reply]

Is this any use? --Phil Holmes (talk) 09:26, 17 August 2010 (UTC)[reply]
It would help us if you told us what you thought the file was supposed to be. Is it a document someone sent you? An old set of notes? Something you downloaded from a website? --Mr.98 (talk) 13:35, 17 August 2010 (UTC)[reply]
For what it's worth, the .pdb files on my system exist because I use Microsoft Visual Studio. The .pdb file does nothing by itself. Is it in a folder along with some other software? Comet Tuttle (talk) 17:30, 17 August 2010 (UTC)[reply]
This is why it's important to say where the file came from. All the files on my system with a .pdb extension came from the Protein Data Bank and can be opened with a molecular viewing program. These programs, however, would do nothing with Comet Tuttle's Visual Studio files. -- 140.142.20.229 (talk) 18:30, 17 August 2010 (UTC)[reply]
We have articles or dab pages on most extensions, such as PDB. ---— Gadget850 (Ed) talk 19:54, 17 August 2010 (UTC)[reply]

2 anti virus software running together

Apart from slowing down your computer, why shouldn't you use 2 AV programs together? What happens when you do? Mo ainm~Talk

It can cause problems because the two (or more) programs aren't necessarily going to be aware of each other and can get in each other's way when doing active scanning. Also, when a virus is found it can cause quite a "fight" as to who gets to deal with it. Personally though, I use Symantec Endpoint Protection as well as Microsoft Security Essentials and they work together just fine, even when something is found, so unless you actually have problems running two programs I wouldn't worry about it. But assuming both programs are up-to-date, running more than 2 is probably overkill and will degrade your system performance with no real benefit.  ZX81  talk 18:51, 17 August 2010 (UTC)[reply]
As virus scanners can contain portions of actual virus code for identification purposes, rival AV suites can actually flag each other as 'rogue' programs when they see the code in their own scan. Exxolon (talk) 20:59, 17 August 2010 (UTC)[reply]
If you want to run two AV engines, there are products that offer that feature. So you would be running one AV product, but with two engines from different manufacturers, which avoids the issue of two AV products detecting each other's virus samples and similar problems.
If you want to do it manually, you should find out the names of the program directories and add a scanning exclusion for those (install AV #1, add exclusion, deactivate AV #1, install AV #2, add exclusion, activate AV #1). There's no guarantee that it will work, it's just increasing the chances. On-Access scanners might still get in a fight because they hook into file access routines, and basically have to hook into each other if more than one is installed.
I'd recommend going the several-engines-combined-into-one-product way if you're looking for more protection than one AV can offer. Or you could install the On-Demand components of multiple AV products, add exclusions for them in the one On-Access scanner you run, drop all your "suspicious" files into one folder, and add a batch file that triggers scanning of this folder with all your On-Demand scanners.
If this is for company use, you could use one dedicated AV scanning computer that runs a different AV than the rest of your machines, and make it a company rule that all media entering and leaving the building must be scanned there. Same goes for Web proxies, Mail servers, etc. - run a different AV on them than the desktop machines. -- 78.43.71.155 (talk) 09:04, 18 August 2010 (UTC)[reply]

Need input - writing a basic referrer tracking script

Hi,

I want to learn more about how Google Analytics works by writing a 'basic' version of it which only tracks the referrer (initially) and the page being visited (it's going on a template). I plan to implement the tracker using a 1x1 pixel approach, using javascript like this:

<script type='text/javascript'>
document.write("<img src='//my.tracker.url.here/pixel.php?args=" + someVariablesHere +"' width='1' height='1' alt='' /> ");
</script>

For the PHP side, I came across this PHP code to generate the actual GIF image, with room for me to add the database part. Now some questions:

  1. Which is more reliable for getting the referrer: Javascript (document.referrer) or PHP ($_SERVER["HTTP_REFERER"])?
  2. Aside from the referrer, what other information is commonly collected?
  3. Is the "//url.here.com" syntax for the image source safe in terms of implying http/s, or should I explicitly code it (detecting via window.location.href)?

Open to advice regarding specifics and good practices. TIA PrinzPH (talk) 22:39, 17 August 2010 (UTC)[reply]

You could use a non-expiring tracking cookie with a globally unique id. You should write out the full URL, probably choosing http: vs https: explicitly. PHP is more reliable than javascript (the server is always more reliable than the client). Other information commonly collected includes duration of stay, mouse movements, and client information (browser, OS, screen size, etc.). Smallman12q (talk) 20:43, 18 August 2010 (UTC)[reply]

Wikia's poor quality of service

Dear Wikipedians:

Is it just me, or does anyone else notice that Wikia's quality of service seems really poor?

I started a new wiki today on Wikia, and I noticed these frequent "black-out" periods where the whole Wikia site seems to crash and not respond. I remember Memory Alpha and Uncyclopedia being really reliable in the past, but not anymore.

Anyone else know more about this problem?

Thanks,

70.31.154.183 (talk) 23:38, 17 August 2010 (UTC)[reply]

I stopped going to Wikia sites when they forced that horrible "New Monaco" skin on every wiki and stole Uncyclopedia's domain name, so I don't know what their service is like currently. But from past experience Wikia has always been quite slow, with a lot of timeouts. They probably have outdated servers. 82.44.54.4 (talk) 11:13, 18 August 2010 (UTC)[reply]


Thanks. That about explains it. 70.31.154.183 (talk) 12:31, 18 August 2010 (UTC)[reply]
Resolved

August 18

Surprising SEGV from read(2)

Resolved

Under what circumstances can this code cause a segmentation fault in read()?

#include<unistd.h>
#include<string.h>
int cread(int fd,void *buf,int sz) {
  memset(buf,0,sz);
  return read(fd,buf,sz);
}

Because the prototypes are there, sz will be properly promoted to a size_t. What does read() do (if, say, other memory is corrupted) that could cause it to fail? --Tardis (talk) 00:23, 18 August 2010 (UTC)[reply]

Well, trivially if fd is not a valid file descriptor (depending on the quality of your libc, of course). Can you verify if fd is valid? --Stephan Schulz (talk) 00:31, 18 August 2010 (UTC)[reply]
You're supposed to be able to call read(2) with an invalid descriptor and just get EBADF. fd==20 here, for what that's worth. --Tardis (talk) 01:06, 18 August 2010 (UTC)[reply]
If you're seeing this, I'd suspect that read is doing some pointer arithmetic that memset isn't, and that a vastly wrong value for sz is causing it to overflow or underflow, or perhaps memset has some internal check that's causing it to silently stop before it faults, a check that isn't done in read. I tried some trivially mad values for sz (-1, 0x7fffffff, 0x80000001) and it's memset that faults for me, but your libc, or your sz, may differ. -- Finlay McWalterTalk 00:50, 18 August 2010 (UTC)[reply]
The thing that gets me is that read is a system call! The kernel needs no help from me to put bytes into that buffer, and I've already demonstrated that the buffer is legit. What pointer arithmetic could it possibly need to do? --Tardis (talk) 01:06, 18 August 2010 (UTC)[reply]
You haven't really tested that the buffer is legit - you're assuming that memset will have addressed all that memory, because that's what its contract appears to say it will do. But what does memset really do when you give it a vastly, meaninglessly negative sz? Isn't it within its rights to do nothing at all? If you really want to test that the buffer is legit, you'll write the memory yourself with a loop. -- Finlay McWalterTalk 01:15, 18 August 2010 (UTC)[reply]
True enough: its argument is unsigned, but it could in theory ignore very large values. But I also know that sz==67480 (and that the buffer is actually 512000 B long). --Tardis (talk) 01:51, 18 August 2010 (UTC)[reply]
Oh, and if buf is on the stack, and you trash the stack with an sz that's too big (or -ve), you can segfault in several ways - you can trash cread's stack frame, meaning it'll return to 0 (bang!) or (possibly with a -ve sz) mangle cread's copy of the pointer called buf, so that it no longer points to the actual buffer on the stack, but instead to 0 (bang!). That shouldn't be the case if buf is malloced, or is in .bss or .data, however. If that's the case, commenting out the memset should cause the read to succeed. -- Finlay McWalterTalk 01:07, 18 August 2010 (UTC)[reply]
To clarify, I'm suggesting that memset is trashing the stack, but that you don't notice until the read. -- Finlay McWalterTalk 01:09, 18 August 2010 (UTC)[reply]
buf is obtained from malloc(), but I think you're right about the stack; attaching a debugger shows that the pointer that triggers the SEGV points 440 MB above the stack pointer, and is itself on the stack — but then the SEGV shows up in a different function. When I let it die and look at the core, it says it died inside read() and with a completely different pointer. --Tardis (talk) 01:51, 18 August 2010 (UTC)[reply]
Ah, you're screwed then :) If the stack is corrupt, and you can't use tools to help, you're reduced to putting canaries into the stack (just declaring autos with known odd values like 0x3f5c) and periodically checking them to see when they're intact and when they've been steamrollered. -- Finlay McWalterTalk 02:09, 18 August 2010 (UTC)[reply]
When I've had to do that in the past, I wrote a little library with register_canary(addr, val) and unregister_canary(addr) which stored a little database of canaries. Another thread woke up every 1ms or so and verified all the canaries were intact. If one is missing it segfaults into the debugger deliberately. -- Finlay McWalterTalk 02:14, 18 August 2010 (UTC)[reply]
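In its simplest single-threaded form, the idea is just this (a sketch, not the library described above; do_work is a hypothetical stand-in for the suspect code, and since the compiler is free to lay out locals however it likes, the placement of the canaries is best-effort):

  #include <assert.h>

  #define CANARY 0x3f5cu

  extern void do_work(char *buf, unsigned sz);  /* hypothetical suspect code */

  void check_frame(void) {
      volatile unsigned canary_lo = CANARY;     /* below the buffer */
      char buf[64];
      volatile unsigned canary_hi = CANARY;     /* above the buffer */

      do_work(buf, sizeof buf);

      /* If either canary was steamrollered, die here, near the culprit,
         instead of much later in an innocent function like read(). */
      assert(canary_lo == CANARY && canary_hi == CANARY);
  }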
In general, this is perfect fodder for valgrind or purify; you're seeing a segfault in read because that's where memory misbehaviour was detected, but if memory is already corrupted by some other bad code elsewhere, you'll never find the corruption by worrying about what read does. -- Finlay McWalterTalk 01:19, 18 August 2010 (UTC)[reply]
Unfortunately, this is running under MPI, and is (of course) only failing in parallel; running it with such tools is rather more difficult than it would be otherwise. Thus my interest in a theoretical analysis that might point me at the right part of the code. --Tardis (talk) 01:51, 18 August 2010 (UTC)[reply]
Here's why you can't rely on one function accessing memory identically to another (particularly when passed pathological parameters). Consider the following two (very naive) implementations:
  void simple_memset(char *p, unsigned char n, int sz) {
    char *d = p;
    while (d < (p + sz)) {
      *d = n;
      d++;
    }
  }

  void simple_read(int fd, char *p, int sz) {
    int bytes_read = 0;
    while (bytes_read < sz) {
      p[bytes_read++] = get_byte_from_file(fd);
    }
  }
Those both look like reasonable implementations. Now consider the following example, assuming a 16-bit address space (I'm too lazy to type all those extra 0000s, but the point is the same in 32 or 64 bits). Say your data segment is located at 0xA000..0xBFFF, with buf 0x100 bytes long beginning at 0xB000. Now say you mess up and instead of passing 0x100 as sz you pass 0x7654. Inside simple_memset, at the beginning, d is 0xB000, p is 0xB000, and sz is 0x7654. So p+sz would sum to 0x12654, but that just truncates to 0x2654. As d >= 0x2654 from the start, simple_memset will terminate without writing any bytes. So that call to simple_memset hasn't validated that you can access the buffer. And lo, look what happens when you then run simple_read. It works for a while, even reading past the end of buf at 0xB0FF, but eventually bytes_read gets to 0x1000 (which is allowed, because that's much less than 0x7654). It does the p[bytes_read++] write, where p is 0xB000 and bytes_read is 0x1000, so it's dereferencing memory at 0xC000, which is beyond the bounds of the data segment, and that's a segfault. -- Finlay McWalterTalk 01:59, 18 August 2010 (UTC)[reply]

It was in fact stack-smashing, produced by a truly remarkably bad set of communication functions that sent the wrong data and then stored that incorrect data into (rather than through!) a pointer. I wish I could say that the code was ancient and written by some idiot long since departed, but in fact I wrote it on the 4th of this month, so… yeah. Thanks for reminding me of the obvious. --Tardis (talk) 02:32, 18 August 2010 (UTC)[reply]

Worse, that should generally generate a warning, and some idiot ignored that warning and thought "ah, I'll fix that later" :) -- Finlay McWalterTalk 02:36, 18 August 2010 (UTC)[reply]
Unfortunately, generic interfaces like MPI offer no such type safety:
void recv3(int *p,int src) {
  MPI_Status st;
  MPI_Recv(p,1,MPI_INT,src,0,MPI_COMM_WORLD,&st);  /* convert int* to void*: OK */
  MPI_Recv(&p,1,MPI_INT,src,0,MPI_COMM_WORLD,&st); /* convert int** to void*: OK?! */
  MPI_Recv(*p,1,MPI_INT,src,0,MPI_COMM_WORLD,&st); /* convert int to void*: warning */
}
I may be that idiot, but I run gcc with -pedantic -Wall -Wextra -Wfloat-equal -Wundef -Wredundant-decls -Wpointer-arith -Wwrite-strings -Wshadow -Winline -Wdisabled-optimization -Wstrict-prototypes -Wunreachable-code. --Tardis (talk) 14:24, 18 August 2010 (UTC)[reply]
Lint (software) can often catch pointer and cast conversions that the -pedantic warnings do not catch... this chapter from Linux Clusters discusses the use of splint with MPI; I have never used that tool, but it looks like it can check for common argument mismatches in MPI functions. Nimur (talk) 20:33, 18 August 2010 (UTC)[reply]

wget

Would it be possible for wget to scan, say, 5 pages and then output a list of every link on those pages into a text file? 82.44.54.4 (talk) —Preceding undated comment added 11:35, 18 August 2010 (UTC).[reply]

Uhh, as far as I know, you can't do that using wget by itself. You could use wget to download the files you want, then use sed to process those files and filter for the <a></a> HTML tags. CaptainVindaloo t c e 19:07, 18 August 2010 (UTC)[reply]
It depends on what you need the URLs for. If you want to create a text file just to feed back to wget at a later time, extracting the URLs is unnecessary: just specify the downloaded page as the input file and omit the URL(s) on the command line. See the help file for the command-line options -i, -F (and you might need -B as well). -- 78.43.71.155 (talk) 20:31, 18 August 2010 (UTC) PS: Prithee, do tell: What are you up to? Creating a local copy of 4chan?[reply]

Mutation in Genetic Algorithms (optimization)

In the Mutation (GA) article, it is not mentioned where the mutation operation is used in the GA. Let's say there are N chromosomes in the last step, the "old N chromosomes". I think there are three choices for creating the new population.

1) N new chromosomes are generated by three different operations: a) some are directly copied from the old population, b) some are generated by crossover, c) some are generated by mutation. (This is the algorithm used in MATLAB's implementation; see the sketch at the end of this thread.)

2) The N old chromosomes enter crossover; after mating and crossover, N new chromosomes are generated. The N new chromosomes enter mutation. The best N chromosomes are selected out of the 2N.

3) The N old chromosomes enter crossover; after mating and crossover, N new chromosomes are generated. Both the N old and the N new chromosomes enter mutation. The best N chromosomes are selected out of the 2N.

Which of the above is correct? Or can I use any of them? Kavas (talk) 12:19, 18 August 2010 (UTC)[reply]

I could be wrong, but I suspect you'll probably be better off asking this on the Wikipedia:Reference_desk/Science reference desk; this is computing, and the question doesn't (to me) seem related?  ZX81  talk 18:50, 18 August 2010 (UTC)[reply]
cf Wikipedia:Reference desk/Science#Mutation in genetic algorihms, yesterday. -- Finlay McWalterTalk 20:08, 18 August 2010 (UTC)[reply]
I asked that question too. But as I use a numerical computing environment (MATLAB) for implementing the GAs, I thought the Computing Desk would be more suitable. I'm not sure "mutational meltdown" refers to getting "stuck in a local minimum" there. Kavas (talk) 21:49, 18 August 2010 (UTC)[reply]
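Since no answer above spells out arrangement (1), here is a minimal C sketch of that scheme (everything in it - population size, proportions, bit-string encoding, uniform random selection - is a hypothetical placeholder, not MATLAB's actual implementation):

  #include <stdlib.h>

  #define POP   50   /* hypothetical population size */
  #define GENES 16   /* bits per chromosome */

  /* Stand-in selection: pick a random member. A real GA would bias this
     toward fitter chromosomes (tournament, roulette wheel, ...). */
  static unsigned select_parent(const unsigned pop[POP]) {
      return pop[rand() % POP];
  }

  static unsigned crossover(unsigned a, unsigned b) {
      unsigned mask = (1u << (rand() % GENES)) - 1;  /* single-point */
      return (a & mask) | (b & ~mask);
  }

  static unsigned mutate(unsigned c) {
      return c ^ (1u << (rand() % GENES));  /* flip one random bit */
  }

  /* Arrangement (1): the new generation mixes straight copies, crossover
     children, and mutants of the old one (the proportions are arbitrary). */
  static void next_generation(const unsigned oldp[POP], unsigned newp[POP]) {
      int i = 0;
      for (; i < POP / 10; i++)
          newp[i] = oldp[i];           /* a) copied directly */
      for (; i < 8 * POP / 10; i++)    /* b) crossover children */
          newp[i] = crossover(select_parent(oldp), select_parent(oldp));
      for (; i < POP; i++)             /* c) mutants */
          newp[i] = mutate(select_parent(oldp));
  }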

Reinstalled OSes, regedit, and Star Wars

I have Star Wars: Empire at War, and its expansion, both legally bought and paid for. I installed them on my computer. Then my OS (Windows XP Home) was eaten by viruses. I also had Windows XP Professional on the computer because I was aware that this might happen. I am now running XP Professional. The problem is that I still have Empire at War and Forces of Corruption installed, but they are not listed in the registry. Can you please tell me what registry keys are necessary for the game to run, and what their contents are? I would reinstall from disc, but I think my EAW disc 1 might be corrupted (it won't run the installation screen even if I go into the drive and run it manually), and FOC refuses to reinstall without EAW being reinstalled. Thanks! 97.125.84.72 (talk) 16:47, 18 August 2010 (UTC)[reply]

Sorry, I don't know the answer to your question; however, it's possible it's not just registry keys the game needs, but also specific files in the Windows system directories. I'd simply contact LucasArts, though: it might be something else that's stopping you from installing the game, but even if it actually is a faulty disc, they have a disc replacement policy where for only $5.00 USD per disc they'll swap your broken disc for a working one.  ZX81  talk 18:48, 18 August 2010 (UTC)[reply]

Trying to install an MSDOS program on Vista

I just now downloaded the Shareware version of the original Duke Nukem game from http://www.3drealms.com/duke1, and upon opening the resulting zip file after completing the download, a window with a warning message appeared. Entitled "16 bit MS-DOS Subsystem", the window gave me the following text: "This system does not support fullscreen mode. Choose 'Close' to terminate the application." Any idea how to get this program to install on Windows Vista? Nyttend (talk) 19:35, 18 August 2010 (UTC)[reply]

DOSBOX 82.44.54.4 (talk) 19:42, 18 August 2010 (UTC)[reply]
Program is downloaded, and I've gotten it to work; thanks for the pointer. However, I'm now confused: how do I tell it to run the install program, or how do I tell the install program to run in Dosbox? I've looked and failed to find a "Run with" command when I rightclick on the install program in My Computer, and I can't remember how to work DOS; the readme for Dosbox doesn't seem to have a how-to-run-DOS element. Sorry if there's an obvious answer to my problem; I just can't think of how to do this. Nyttend (talk) 21:29, 18 August 2010 (UTC)[reply]
It will probably work if you just open the DOSBox prompt and type the name of the executable with its full path (e.g. c:\downloads\duke.exe). -- Finlay McWalterTalk 21:32, 18 August 2010 (UTC)[reply]
The program is called "INSTALL.EXE" and is in a folder named "DUKE", but typing C:\DUKE\INSTALL.EXE results in the message "Illegal command: C:\DUKE\INSTALL.EXE". Do I have to type something before the full path? "run" followed by the path resulted in "Illegal command: run". By the way, the readme says that I must first use a "mount" command; I don't understand what it does, but I've followed the readme's instructions and gotten the results it said I should. Nyttend (talk) 21:43, 18 August 2010 (UTC)[reply]
You don't use RUN or anything; you just type "INSTALL.EXE" after you have mounted the right directory as a drive in DOSBox. --Mr.98 (talk) 21:46, 18 August 2010 (UTC)[reply]
If DOSBox on Vista works the same as it does on OS X, what you do is install DOSBox, then "mount" the directory containing the program as a virtual drive within DOSBox (e.g. "MOUNT C D:\yourprograms\duke" makes the C: drive in DOSBox correspond to the indicated folder on your D: drive). Then you run the program from within DOSBox (e.g. "install.exe" from the C: prompt). If you have forgotten your basic DOS commands, type HELP and it'll list them. --Mr.98 (talk) 21:45, 18 August 2010 (UTC)[reply]
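For example, assuming the extracted game files are on the host in D:\yourprograms\duke (that path is just an illustration), a DOSBox session might look like this, with Z:\ being DOSBox's built-in startup drive:

    Z:\> mount c d:\yourprograms\duke
    Z:\> c:
    C:\> install.exe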
Okay, it installed; the program isn't running properly, but I suspect that it's a compatibility issue. I'll try running it on an XP computer. Thanks, especially, for the HELP command; I had no idea that there was such a thing, but I was wishing that there were. Nyttend (talk) 23:10, 18 August 2010 (UTC)[reply]
There might be special DOSBox settings that will help. ("DOSBox -- all of the old frustrations of DOS, today!") I tried Googling "Duke Nukem DOSBox", and what do you know, someone has written a guide on getting it to work. Some of it is about the CD-ROM version and probably doesn't apply, but it may be a start. The DOSBox FAQ actually says specifically that the game does run, but you have to be careful about selecting your graphics settings, because DOSBox is emulating the entire PC at once and can't necessarily do it as well as the original hardware. This page has more specific .conf settings that might be of help. From the looks of things, Duke Nukem is a little hard to get started because, in its own day, it pressed CPU resources pretty hard, and emulating that can be tricky. It seems doable though. Good luck. --Mr.98 (talk) 01:14, 19 August 2010 (UTC)[reply]

Latest version of Netscape (and how to speed up Netscape)

Hello there. I am using Netscape 9.0 beta 3. Is it the latest version? I am also trying to speed up the browser, so I found this guide (ehow.com/how_6001169_speed-up-netscape-navigator.html). The problem is that the options mentioned in that article are not present in Netscape 9.0; for example, "Network Connections", "Preferences" and the "Connections" tab. Where can I find these options? Thanks--180.234.38.102 (talk) 20:52, 18 August 2010 (UTC)[reply]

No, the latest version of Netscape was 9.0.0.6 (from February 2008). If you are not already aware, Netscape is no longer actively developed. As explained on that history page, and in our article Netscape Navigator, the technology that drove Netscape went through some complicated business dealings and ultimately emerged as the core of the Mozilla project. Its most recent descendant is Mozilla Firefox, version 3.6.8. Nimur (talk) 21:10, 18 August 2010 (UTC)[reply]
(edit conflict) The most recent version is Netscape Navigator 9.0.0.6, released in February 2008. Beta 3 was released in August 2007, making it three years old now. If possible, I'd recommend upgrading to a more modern browser, which may well run faster on its own; if not, most of the tweaks in that article should still be possible in a newer browser, except possibly the first one, which I haven't seen in any browser I've used. Reach Out to the Truth 21:13, 18 August 2010 (UTC)[reply]

Wikipedia has a problem

Whenever I begin typing the URL to Wikipedia in Firefox, it automatically suggests "en.wikipedia.org". However, the stored title for that page is "Wikipedia has a problem". While it is definitely true that Wikipedia has its problems, this was not the title of the page when I most recently visited it. It has been like this for quite a while, and I wonder if there is a way to fix it without purging all the stored URLs. Thanks, decltype (talk) 21:25, 18 August 2010 (UTC)[reply]

According to Wikipedia:Bypass_your_cache#Mozilla family, hold Shift and press the reload button to bypass your cache. Taemyr (talk) 21:33, 18 August 2010 (UTC)[reply]
Thanks, I've purged my cache, but it didn't help. Perhaps my question was poorly worded: the title is wrong only in the list that automatically drops down when I begin typing a URL. Regards, decltype (talk) 21:39, 18 August 2010 (UTC)[reply]
It sounds like a bookmark thing. Go to Bookmarks -> Organize Bookmarks, search for the Wikipedia link, highlight it, and at the bottom of the dialog box there should be some text boxes. Under "Name", change it to whatever you want it to say. 82.44.54.4 (talk) 21:49, 18 August 2010 (UTC)[reply]
It's not a bookmark thing. Mine used to do that because of the recent server outage, but it's stopped doing it. sonia 22:59, 18 August 2010 (UTC)[reply]
I'm not sure this will work, but clearing the entry may solve the problem. Start typing as you have been doing. When the mislabeled suggestion appears, use the down arrow to highlight it and then press the delete (DEL) key. Hopefully that will clear the entry and Firefox will get a new title the next time you visit the page. -- Tom N (tcncv) talk/contrib 00:23, 19 August 2010 (UTC)[reply]
Thanks all. Tcncv's suggestion kind of worked: the entry is gone, but it is not getting re-added when I visit the URL in question. Not a big deal though :) Regards, decltype (talk) 04:36, 19 August 2010 (UTC)[reply]

August 19

X-Root

Hi.

My question is one of mathematics. Does a program exist which can solve for X in the following equation, where Y and Z are known?

Y^X = Z

Thanks. Rocketshiporion Thursday 19-August-2010, 5:43am (GMT)

Hi there. If you're just doing a quick calculation or two, you can do it with a calculator using logarithms. If Y^X=Z, then X log Y = log Z, so X = (log Z / log Y). You could put this into the language of your choice if you wanted to automate it. Brammers (talk/c) 08:07, 19 August 2010 (UTC)[reply]
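For instance, a quick sketch in Python (any logarithm base works, as long as both logs use the same one):

    import math

    def solve_exponent(y, z):
        # Solve y**x == z for x, using x = log(z) / log(y).
        return math.log(z) / math.log(y)

    print(solve_exponent(2, 1024))  # about 10.0, since 2**10 == 1024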

HELLO

How do I make a keystroke logger using C++? Can you please provide me with the code?