Wikipedia:Reference desk/Computing


Welcome to the computing section
of the Wikipedia reference desk.
Want a faster answer?

Main page: Help searching Wikipedia

   

How can I get my question answered?

  • Select the section of the desk that best fits the general topic of your question (see the navigation column to the right).
  • Post your question to only one section, providing a short header that gives the topic of your question.
  • Type '~~~~' (that is, four tilde characters) at the end – this signs and dates your contribution so we know who wrote what and when.
  • Don't post personal contact information – it will be removed. Any answers will be provided here.
  • Please be as specific as possible, and include all relevant context – the usefulness of answers may depend on the context.
  • Note:
    • We don't answer (and may remove) questions that require medical diagnosis or legal advice.
    • We don't answer requests for opinions, predictions or debate.
    • We don't do your homework for you, though we'll help you past the stuck point.
    • We don't conduct original research or provide a free source of ideas, but we'll help you find information you need.



How do I answer a question?

Main page: Wikipedia:Reference desk/Guidelines

  • The best answers address the question directly, and back up facts with wikilinks and links to sources. Do not edit others' comments and do not give any medical or legal advice.


October 9

Version control recommendations for large data sets

I am looking for recommendations for a version control system (both server and client) for a large and fairly unusual workload.

Workload description:

  • ~1 million files
  • >99% of files are ASCII text, but a small fraction are binary.
  • Mean file size of ~50 kB (so ~50 GB total under version control). Wide range of sizes with ~50% at less than 10 kB and a couple files > 1 GB.
  • Content is batch updated, so that ~10% of files are changed every two weeks in one big update. (Not necessarily the same files every update.) Off schedule changes can be assumed to be negligible.
  • On average, only a few percent of each file's contents changes during each update. So the total diff size might be ~150 MB every two weeks.
  • Server must run on Linux, be web-accessible, support random file / revision access, etc.
  • Clients must be available for both Windows and Linux.

Obviously, any established version control system will support a wide range of features / platforms. Personally, I'm familiar with Subversion. However, I have little experience using any version control system as applied to very large data volumes and was wondering if some might be better at that than others. The little experience I do have with this suggests that some Subversion clients may perform quite poorly if you try to place very large numbers of files under version control.

Any suggestions / feedback would be appreciated. Dragons flight (talk) 03:38, 9 October 2010 (UTC)[reply]

On the web you can find plenty of discussions about the commercial version control system that used to be used for the Linux source code, and the open source one that Linus Torvalds started when they had to change due to licensing problems. This was mainly about performance. Hans Adler 12:19, 9 October 2010 (UTC)[reply]
Git and Mercurial seem problematic due to the lack of partial checkout functionality. When your managed set is 50 GB, most users are only going to want to see a small portion of that. I'm not familiar enough with BitKeeper yet to have an opinion, though using a proprietary system would be an uphill sell. Dragons flight (talk) 20:12, 9 October 2010 (UTC)[reply]
The intended git solution to this problem is to have lots of small repositories, corresponding to the partial check-outs you're interested in. I find that this works better than I thought it would, but I'd still really like to have partial checkout. Paul (Stansifer) 12:35, 10 October 2010 (UTC)[reply]
I am interested in the answer, too, and my only contribution is a partial workaround to the performance issue: TortoiseSVN is indeed pretty slow when you order an Update or Commit to a directory that has many subdirectories and files in them; the speedup is to drill down lower in the folder hierarchy where you do your "Update" or "Commit", so that TortoiseSVN has fewer ".svn" folders to dig through and do compares on. Comet Tuttle (talk) 18:13, 9 October 2010 (UTC)[reply]
Yes. In addition to TortoiseSVN, I've also looked at a couple other SVN clients and they all had similar performance problems. (One client outright crashed when asked to do an add operation on 30000 files.) My suspicion is that this is a consequence of SVN being designed to manage status through lots of little files. If I'm right, the required disk IO is going to make all SVN clients rather slow for many kinds of operations on large data sets (though some clients might use caching more effectively than others). Dragons flight (talk) 20:12, 9 October 2010 (UTC)[reply]
Perforce is a commercial version control system that claims to be faster than the competition. ("Perforce can effortlessly handle millions of changes and terabytes of versioned data across multiple sites", says the web site.) I've used it, but not enough to form a useful opinion. It's worth considering at least. -- BenRG (talk) 04:22, 11 October 2010 (UTC)[reply]
I've used Perforce and I like it, but it isn't cheap. I think it was like $700 per seat license per year, about four years ago. --Trovatore (talk) 04:51, 11 October 2010 (UTC)[reply]
Maybe I got that wrong -- maybe it was a permanent license rather than a one-year license. Or maybe they've dropped their prices since then, don't know. Anyway, see here for pricing. --Trovatore (talk) 06:07, 11 October 2010 (UTC)[reply]
You can always try it out; 2 clients are free, if I remember correctly. Perforce's reputation is high. I've used it, but not for a million files under source control. Comet Tuttle (talk) 18:38, 13 October 2010 (UTC)[reply]

Dragons flight, if I understand you correctly you are looking for something like the "sparse checkout" functionality that was added in git 1.7. So the functionality appears to be there now, except that you can't check out something like a/b/c/d/e/f/g/what_I_really_want/ as just what_I_really_want/ in your current working directory. Instead, git will create all the parent directories in the repository directory. See here. Hans Adler 06:59, 11 October 2010 (UTC)[reply]

please help

We have a system where we maintain a tracker and five users enter their respective data in the same page of an Excel sheet. Unfortunately, I have a colleague who deletes the data at crucial times (it can be a single cell or two), which skews the entire average; she is doing this to defame me and to have me held responsible when an important task goes wrong. A colleague caught her red-handed doing it once but could not do much. Is there a way I can devise or implement a logging system where the login ID of whoever deletes the data is logged to a file, so that I can prove it and have her reprimanded? Without that I am helpless. Please help me. —Preceding unsigned comment added by 203.122.36.6 (talk) 10:01, 9 October 2010 (UTC)[reply]

Track Changes might be the simplest solution, depending on how "smart" your adversary is. Using Excel as a time tracker for multiple persons is generally a bad idea, though. I'll leave it to the rest of the RD/C volunteers to suggest better alternatives. :-) -- 78.43.71.155 (talk) 14:58, 9 October 2010 (UTC)[reply]
Of course, a low-tech approach would be a regular printout of your sheet, when you've verified it to be in a non-tampered condition, and have a trustworthy co-worker compare the printout with the file and sign it (with date and time), so it's not simply your word against that other person's word. -- 78.43.71.155 (talk) 15:01, 9 October 2010 (UTC)[reply]
A high tech approach would be to set up a source control server, like Subversion, and host the Excel sheet there. Every change is meticulously tracked with the user name, time, and date. You can rewind to previous versions of the sheet to see what happened with each "commit" of changes. The excuse you give for setting this up would be to reduce the number of conflicts that will occur if multiple people are trying to modify the sheet at the same time (because that's what source control systems were designed for). Comet Tuttle (talk) 17:58, 9 October 2010 (UTC)[reply]
Excellent idea! Even better, keep the spreadsheet in comma-separated ASCII format (.csv). Then you can see line-by-line diffs for each checkin. --Trovatore (talk) 06:48, 11 October 2010 (UTC)[reply]

Embedding live wikipedia page on an external website (perhaps in an i-frame?)

Hello all, thanks for reading. I'm working (veeeeery early stages) on a project to build a website something like a network of community-based blogs, articles, creative writing etc.

In any case, I am aware that some websites reproduce the content of Wikipedia articles on their site (some credit it, some don't). This sort of thing might be useful for the project that I am working on, but I am also very aware that the articles on Wikipedia are all 'living' things, insofar as they get updated, expanded etc.

My question (as per the title really): Is it possible to create an i-frame on a page of my site (say a page about Barking & Dagenham, for example) and have the Wikipedia Barking and Dagenham article appear there live? (Does that make sense? I'm still getting to grips with some of the terms and how some of these things work.)

Cheers all, Darigan (talk) 12:58, 9 October 2010 (UTC)[reply]

It's possible, but it's probably not a good idea. You want to present the content, but you'll end up presenting the whole page, including the editing interface, sidebars, etc. Much better to use the MediaWiki API to pull the article text, format it to HTML yourself (there's code around to do that, from MediaWiki and other places), and place that within the pages you're building. Better yet, a bit of smarts in using the MediaWiki API can limit the times you present vandalised info (you would, for example, not recover the latest version, but the last version that had stood for say 3 hours without being reverted - a "revert" being an edit with a summary that matches the general admin, twinkle etc. revert strings, or contains the words "rv" or "vandalism"). It's probably sensible for you to retrieve article contents only occasionally (say every day). -- Finlay McWalterTalk 13:36, 9 October 2010 (UTC)[reply]
Thanks Finlay McWalter - The MediaWiki suggestion you made sounds like a really good way to handle what I have in mind. I was worried that there might be an issue with the i-frame pulling in the entire interface rather than just the article content, and you confirmed that. Thanks as well for the tips about avoiding pulling in vandalised versions of articles. I will certainly follow-up your tips. Thanks again, Darigan (talk) 14:12, 9 October 2010 (UTC)[reply]
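To make the MediaWiki API suggestion above concrete, here is a minimal browser-side sketch. The article title, the id of the target element, and the exact parameter set are illustrative rather than canonical, and fetch() is a modern browser API, so treat this as a starting point and check the live api.php documentation before relying on it:

// Fetch the rendered HTML of one article through the MediaWiki API.
var url = 'https://en.wikipedia.org/w/api.php' +
    '?action=parse' +
    '&page=London_Borough_of_Barking_and_Dagenham' +
    '&prop=text' +
    '&format=json' +
    '&origin=*';                                  // allows anonymous cross-origin requests

fetch(url)
    .then(function (response) { return response.json(); })
    .then(function (data) {
        // data.parse.text['*'] holds the article body as HTML, without the MediaWiki skin
        document.getElementById('article').innerHTML = data.parse.text['*'];
    })
    .catch(function (err) { console.error('API request failed', err); });

Caching the result on your own server, rather than calling the API on every page view, would also fit the advice above about retrieving article contents only occasionally.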

Another option would be to use the printable version of the page in your iframe, such as http://en.wikipedia.org/w/index.php?title=London_Borough_of_Barking_and_Dagenham&printable=yes This displays the live version of the page while removing the editing interface, and is a lot simpler than delving into mediawiki api. 82.44.55.25 (talk) 18:11, 9 October 2010 (UTC)[reply]

Thanks IP guy/girl - The project I'm working on involves me learning a lot from scratch, anything to ease that process is very much appreciated. Cheers Darigan (talk) 13:11, 10 October 2010 (UTC)[reply]

Java segment help request

I have a problem with the following Java segment; please help me out.

 
interface internal1 {
    void internal1(int subject1, int subject2);
}

interface internal2 {
    void internal2(int subject1, int subject2);
}

class student implements internal1, internal2 {

    public void internal1(int subject1, int subject2) {
        System.out.println("subject1=" + subject1 + "  subject2=" + subject2);
    }

    public void internal2(int subject1, int subject2) {
        System.out.println("subject1=" + subject1 + "  subject2=" + subject2);
    }
}

In this student class I want to add another method, void sum, which gives the total of subject1 from internal1 and internal2, and similarly for subject2. What should the parameter list of void sum contain? Please also help me define it. Avril6790 (talk) 13:05, 9 October 2010 (UTC)[reply]

You can define it to be whatever you want. You could define the student class to be stateful and define instance variables to store the values of subject1 and subject2; in that case, sum() needs no parameters; it could store its result in another internal variable, or print the value of the sum of internal variables. This is entirely a design choice on your part. It is my opinion that this would be an incomprehensible design choice; while stateful programming is acceptable, in this trivial example it seems unnecessary and unintuitive. (We don't know what "internal1" or "internal2" are supposed to do, let alone what you want the "sum" of, so how can we design its interface?). I would also point out that your code snippet does not comply with the official recommended Java Code Conventions - class names should be capitalized (class Student implements Internal1, Internal2 { ... ), and your interface names should be more meaningful than "internal" (this does not help you or anybody else know what the interface is or why you need it). If you use more meaningful names in your program, it will help you and others evaluate the best design choices. For example, in Java, if you want to set the value of an internal variable, you should use a get or set method so it is clear that you are modifying the internal state of the Student (i.e., setting his score in subject1 or subject2). Then, you could have a method called "printSum()" - it will be obvious what that method does and when it should be used. I have also formatted your snippets with source tags for readability. Consider:

A well-thought-out Student class...
public class Student {
    private int score1, score2;

    public Student() {
        score1 = 0;
        score2 = 0;
    }

    public void setScore1(int input) {
        this.score1 = input;
    }

    public void setScore2(int input) {
        this.score2 = input;
    }

    // If you need fancier math to "compute" modifications to score1 or score2, make those methods explicit...
    public void incrementScore1(int input) {
        score1 += input;
    }    // ... and so on for more sophisticated calculations

    public void printScores() {
        System.out.println(" Score 1: " + score1 + "     Score 2: " + score2);
    }
}

Nimur (talk) 16:09, 9 October 2010 (UTC)[reply]

Finding a Mario game for DOS

I am looking for an old Mario game for DOS. I often played it in 2000 and 2001.

I can check the reference desk regularly to answer questions about it. I reached the fourth stage and can remember details about the first three stages.

I got the game free, as an email attachment. One special thing I remember about the game is the phrase "back from the death, to rule Frisia again". —Preceding unsigned comment added by Kampong Longkang (talkcontribs) 18:33, 9 October 2010 (UTC)[reply]

There is a list of free Mario clones here: http://compactiongames.about.com/od/freegames/tp/supermario_clones_and_remakes.htm 92.15.17.139 (talk) 18:44, 9 October 2010 (UTC)[reply]
I have this game, Mario.exe .. I believe it was a fan-made clone of the original NES Super Mario Brothers, probably as a coding demonstration (it was only 64kb but looked amazing!). It runs in DOS, and as far as I recall it only had a limited amount of levels. Inside the .exe is the text string “Done by Utter Chaos [DFF]” which was probably some demo or cracking group. This mini-game actually may have started out as a trojan horse or a virus but I've never detected anything with any scanner on any of the copies I've seen. I can email you a copy if you want, I'm pretty sure it's freeware. -- œ 21:38, 9 October 2010 (UTC)[reply]
This http://www.trendmicro.co.jp/vinfo/virusencyclo/default5.asp?VName=HLLP.YAI.A&VSect=T says it's malware. 92.24.177.4 (talk) 23:00, 9 October 2010 (UTC)[reply]
The game was actually made by a developer called Mike Wiering. The copy you have is a hacked beta version, the full version (with six levels in total) can be downloaded as freeware from this link. --CalusReyma (talk) 00:09, 10 October 2010 (UTC)[reply]
Ahh, interesting! How did you come by this information? Are the "...Frisia" and "Utter Chaos DFF" versions both hacked betas? Also, if it was once malware I'm sure it is no longer; it has probably been completely eradicated. -- œ 00:23, 10 October 2010 (UTC)[reply]
Yes, I would think both are. I first found the game (the beta version) on a shareware compilation disc. The fourth stage in the beta is unwinnable; there's no exit pipe at the end, so you get stuck. I forget exactly how I came across the finished version (this was years ago); probably just through a search engine. --CalusReyma (talk) 09:15, 10 October 2010 (UTC)[reply]

I found a game with very similar levels at http://www.dosgamesarchive.com/download/mario/ except that the fourth level is now the sixth level, and levels four and five are new to me. The remaining levels are very similar, with a few tiny differences (I don't remember seeing a star in the previous game). But thanks for the help! —Preceding unsigned comment added by Kampong Longkang (talkcontribs) 10:32, 10 October 2010 (UTC)[reply]

Does this MS game use a Messenger protocol?

I asked a similar question recently, but now I have more details. Does the game described here www.ehow.com/how_2331394_play-othello-online.html use Messenger as a protocol? When I run it in XP, the process it uses is described as "zclientm.exe".

I now often get an error message saying that the server has not responded - I wonder if this is due to me only having version 4 of Messenger, when the latest version is version fourteen. I dislike Messenger and will only update if that is likely to be the reason for the error message. Thanks 92.15.17.139 (talk) 19:07, 9 October 2010 (UTC)[reply]

If all else fails, run Wireshark and watch the traffic between the game and its server. This Microsoft article lists the ports used by messenger; for games it says it uses naked TCP on ports 80, 443, 1863 and UDP on just about any unprivileged port. Unfortunately 80 is also used for http and 443 for secure-http. -- Finlay McWalterTalk 19:15, 9 October 2010 (UTC)[reply]

Two related questions about device recognition: Linux and Windows

I have two related questions, about computers automatically recognising devices. One is about Linux, the other about Windows.

  1. My own computer runs Fedora 12. When I plug my DSLR camera (Olympus E-520) in to a USB port, the system doesn't do anything. I have to manually mount the camera into the file system, allowing me to access the memory card through a mount point. In contrast, when I plug my mobile phone (Nokia 6303i) in to a USB port, the system automatically recognises it, and offers to launch GThumb to download photographs. If it can do so for one device, why not another? How can I make it do so for the DSLR camera as well?
  2. My father's computer runs Windows Vista. When I insert a CF card into the memory card reader, the system automatically recognises it, and offers to launch Explorer or Windows image viewer. Then, after I select "safely remove device", the system stops recognising the CF card at all. If I take it out and put it back in, the memory card reader's light comes on, but Windows acts as if the card wasn't even there. Only rebooting makes it recognise it again. How can I fix this?

Can anyone help me with either of these problems? JIP | Talk 19:55, 9 October 2010 (UTC)[reply]

I experienced a similar issue with card readers on Windows XP. Apparently windows sees the card reader as a drive whether there's a card in it or not (hence 3 or 4 drive letters being constantly taken up in 'My computer' by empty card reader slots), so when you "safely remove device" it disables the card reader altogether. 82.44.55.25 (talk) 20:26, 9 October 2010 (UTC)[reply]
When you finished using the card, make sure no programs are accessing it, then remove the card. Happened on my XP one as well. Sir Stupidity (talk) 22:12, 9 October 2010 (UTC)[reply]
I use an SD card in a USB/SD card adaptor. When I remove the card it behaves as you state. But if I remove the adaptor, I can plug it in again with a new card and all is recognised. -- SGBailey (talk) 22:13, 9 October 2010 (UTC)[reply]
On modern Linux installations, udev is responsible for detecting new devices, assigning them a /dev name, and (sometimes) for mounting them (sometimes that's done instead by Nautilus). You can watch this in progress by running udevadm monitor and tail -f /var/log/messages as you add (and remove) devices. In this case it sounds like there's a problem in the udev rules (which on my Ubuntu system are in /lib/udev/rules.d) which recognise devices. This article discusses how to write these rules - crucially the "USB Camera" section discusses an idiosyncrasy about how cameras report their "partitioning"; if yours is the same, then he gives a solution. -- Finlay McWalterTalk 23:40, 9 October 2010 (UTC)[reply]
To explain a bit more (mostly for the benefit of the next person who asks, which hopefully will show up in the RD/C search function). When a new USB device is detected, the USB stack signals the kernel, which sends a message into userspace on a netlink socket. udevd listens to that, examines the details of the device, and acts according to its rules. You can configure udev to directly mount a device (by having it run the mount command); that's the case for some of the example content of udev tutorials you'll find, but it's not how Ubuntu at least works. News of the new device is then propagated around using D-Bus (in some systems by HAL, in others by udevd itself). This is received by the GVFS daemon; you'll notice that if a usb disk is inserted before you login to GNOME, the disk appears in the "Places" menu, but hasn't been mounted (as reported by mount). It's also reported to Nautilus (I honestly don't know if GVFS does that, or if Nautilus is a D-Bus client of the relevant stream itself). When Nautilus sees a rising-edge (a new insertion) for a disk, it may automount it (for the setting that controls that, run gconf-editor and navigate to /apps/nautilus/preferences/media_automount). I don't know the procedure for a KDE based system, but I imagine it's generally much the same idea. All of this is wonderfully flexible, but it's clearly complex and sometimes a little fragile. If you just can't get it to work, here's a downright hack: run a cron job (say every 30 seconds) that runs lsusb, searches that output for the ID of your camera, and runs the mount command you've been running manually (and umount if it's removed). -- Finlay McWalterTalk 00:39, 10 October 2010 (UTC)[reply]
I haven't tested this, but I believe that cameras have more capabilities than merely mass storage, and therefore many of them have special Linux device drivers. There is a program called gtkam that is specially designed for interfacing with digital cameras. Looie496 (talk) 01:17, 10 October 2010 (UTC)[reply]
That's generally Picture Transfer Protocol. Some cameras are capable of being remote controlled (to take pictures) over USB, but there seems to be no standard protocol for that (for still cameras). gphoto (the software that underpins gtkam) has a list of those cameras that it knows how to do remote capture to here. -- Finlay McWalterTalk 02:40, 10 October 2010 (UTC)[reply]

Hard drive size

I bought a computer and it was advertised as having 250 GB of hard drive space. However, Windows reports it as having about 232 GB. Why is that? —Preceding unsigned comment added by 71.224.49.81 (talk) 20:38, 9 October 2010 (UTC)[reply]

Packaging is often in gigabytes (10⁹ bytes), but computers tend to think in gibibytes (2³⁰ bytes). It is recommended that the former be written GB and the latter GiB, but in many cases people and machines use GB without specifying what they really mean. In any event, 250 GB = 232.8 GiB. Dragons flight (talk) 21:44, 9 October 2010 (UTC)[reply]
Some people recommend the abomination "GiB", you mean. Comet Tuttle (talk) 07:03, 10 October 2010 (UTC)[reply]
Computers don't "tend to think in gibibytes". Microsoft chose to make Windows Explorer report disk sizes in units of 2³⁰ bytes. They could have chosen units of 10⁹ bytes instead. Everybody would be better off if they had. -- BenRG (talk) 07:58, 10 October 2010 (UTC)[reply]
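To put numbers on the 250 GB drive from the question, here is a quick check that can be pasted into any JavaScript console (a simple illustration of the unit conversion, nothing more):

// 250 GB as the drive manufacturer counts it: decimal gigabytes of 10^9 bytes each
var advertisedBytes = 250 * 1000 * 1000 * 1000;

// What Windows Explorer divides by: binary "gigabytes" (gibibytes) of 2^30 bytes each
var reportedGiB = advertisedBytes / Math.pow(2, 30);

console.log(reportedGiB.toFixed(1));   // "232.8" -- the missing space is only a change of units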
It's a sneaky way that disk sellers rip off consumers. 92.24.177.4 (talk) 23:02, 9 October 2010 (UTC)[reply]
Here's some more info [1] 82.44.55.25 (talk) 23:46, 9 October 2010 (UTC)[reply]
Hard disk drive#Capacity measurements. ---— Gadget850 (Ed) talk 02:45, 10 October 2010 (UTC)[reply]
(Your hard drive will actually store more than 250,000,000,000 bytes, but some of these are used during formatting to define sectors for quick reading. If you had a single file of exactly 250 GB of data (232.77 GiB), you could easily store it on your drive using a specially-written operating system, with some space to spare, but this would seldom be useful in practice, so real operating systems use a "wasteful" format of the drive that enables quick and easy access to each small file.) The main reason for the apparent discrepancy is as explained by Dragons flight and others above. It looks like a rip-off, but it is really just confusion over units. Dbfirs 07:08, 10 October 2010 (UTC)[reply]
Yes, it's just confusion over units. Not a rip-off. -- BenRG (talk) 07:58, 10 October 2010 (UTC)[reply]
No, it's both, but it's a large-scale organised rip-off entered into by most (all?) hard drive manufacturers over the last 10 years or so. ;-) --Stephan Schulz (talk) 08:14, 10 October 2010 (UTC)[reply]
... so how many bytes would you expect a 250 GB drive to hold? They are already being generous in giving you more than 250,000,000,000 bytes. Would you expect a 250 GHz oscillator to run faster than 250,000,000,000 cycles per second? I have no shares or interest in hard drive manufacturers, just an interest in SI units. Dbfirs 16:30, 10 October 2010 (UTC)[reply]
It's not just normal SI/traditional confusion. In this particular case, it's an SI unit given the name of an existing traditional unit. (which, of course was given prefixes from other SI units.) Imagine if the SI length unit was called a "yard". People would forever be complaining about 'extra inches' and 'missing centiyards'. APL (talk) 19:18, 12 October 2010 (UTC)[reply]
So how many extra bits are there in your byte? The mis-naming is the other way round. Early computers had multiples of 1024 bytes that they called Kilobytes to save inventing a new prefix. Dbfirs 01:21, 13 October 2010 (UTC)[reply]
If I had a penny for every time I've seen this question, I would be retired in the Bahamas by now... Sandman30s (talk) 11:31, 11 October 2010 (UTC)[reply]

Harmful computer monitor radiation?

Surprisingly, I could not find any article on computer monitor radiation. This, however, was helpful in answering my question of whether it is a myth that it is harmful, but I'm still wondering about older computer monitors, CRTs specifically, from before standards such as MPR II and III (I could not find any article on these either) or any other standards: could there in fact have been enough radiation emitted from these monitors to be not just harmful, but lethal given enough exposure? Is it even plausible, under extreme circumstances, for computer monitor radiation to be deadly? I mean, just the fact that these 'Low Radiation Emission' standards exist means that there was indeed enough harmful radiation emitted from these older monitors that it warranted putting these standards in place, so I'm wondering just how much the radiation was reduced. By what percent? How much safer are we now? (I mean for those that still use CRTs of course, which is rare these days ;) -- œ 22:05, 9 October 2010 (UTC)[reply]

The suspected danger was radio frequency EM fields emitted by the CRT, with a particular worry about its effects on the unborn. This and this (both from the Health Physics Society) cover this. It seems they set a standard that they were confident was safe. That doesn't mean it was a myth, or that it wasn't, but rather that it was easy enough to set a generally low level - it's often difficult, and ethically very problematic, to empirically demonstrate that such standards are unnecessarily low. -- Finlay McWalterTalk 23:28, 9 October 2010 (UTC)[reply]
See Cathode ray tube#Ionizing radiation. -- Wavelength (talk) 23:41, 9 October 2010 (UTC)[reply]

example.com

How much traffic does http://example.com receive daily? 82.44.55.25 (talk) 23:53, 9 October 2010 (UTC)[reply]

It would probably be hard to come across such information without owning the website. What can be found is its Alexa rank. example.com's rank (which is calculated using both the number of visitors and the number of pages those visitors view, which for one-page example.com doesn't matter that much) over the past three months is 9937, and for today it was 11651. For comparison, the website of Kingston Technology was ranked 11653 for today, PC Pro's website was 11670, that of Radio France was one spot below PC Pro, and MTV's German website (mtv.de) was 11695. (Google, Facebook, and YouTube were the top 3 for today.) So while hard statistics are difficult to come by, at least it can be determined that example.com, on this particular day, gained more traffic than some websites of fairly large companies. Xenon54 (talk) 03:03, 10 October 2010 (UTC)[reply]
I disagree with the last conclusion. The Alexa article which you linked to makes it clear the Alexa rankings are far from perfect, and given the way they are derived, this is hardly surprising. The proper conclusion, IMO, is "according to Alexa, example.com, on this particular day, gained more traffic than some websites of fairly large companies". To give an example I like to use: the main www.wikipedia.org portal page lists the top languages according to the number of visitors to each language. For whatever strange reason, when this was first implemented, Alexa rankings were used. However, someone pointed out that these were different from the WMF statistics; not extremely so, but enough to change the order of at least one or two languages, IIRC. Another good example (as mentioned in the article) is the fact that Alexa themselves have changed their ranking system in the past, in an attempt to improve the accuracy, and these changes have had a clear effect on the rankings. Nil Einne (talk) 03:50, 10 October 2010 (UTC)[reply]


October 10

What's the difference between a compiler and an interpreter?

Hello all... I'm having a problem understanding what exactly the difference between a compiler and an interpreter is. What's the main difference? 117.204.2.193 (talk) 12:14, 10 October 2010 (UTC)[reply]

For a language L, an interpreter is a program that executes a program in L. A compiler translates a program in L into some other language. Often, compilers translate programs into machine code, so that the machine can directly execute them, but some compilers just translate programs into a (usually) lower-level language for which a good interpreter or compiler already exists.

Could you please elaborate? 117.204.1.96 (talk) 13:56, 10 October 2010 (UTC)[reply]

At this point it looks as if you might be interested in reading WP:HOMEWORK. Of course, if you have any specific further questions you are welcome to ask them. Hans Adler 14:05, 10 October 2010 (UTC)[reply]
The best explanation of this I've been able to spot is in the source code article, near the top. Our other relevant articles are unfortunately pretty technical. Looie496 (talk) 16:09, 10 October 2010 (UTC)[reply]
In general, compiled code uses the instruction set architecture of the CPU that it runs on. Interpreted code does not; it is translated by another program, the interpreter, that does use the machine's ISA. The distinction has gotten blurrier, of course, as machine instruction sets have become so sophisticated that they are often interpreted at the hardware level - see microcode. (As such, it is not always clear whether an implementation of a CPU's instruction set architecture is actually a machine-instruction or a software-emulation provided by a language's system-library, the platform's operating system, the CPU firmware, or some other abstraction-layer). But loosely, a compiled language runs "on the metal" - the binary codes in the compiled file directly correspond to "settings for switches" in the circuitry of the electronic computer that will run the program. An interpreted program does not directly contain "settings for switches." The contents of a program-file in an interpreted language contain a high-level instruction set, which is processed and decoded in real-time by another program. It used to be the case that compiled programs were "invariably" faster than interpreted programs. So in 1990, you would write perl (an interpreted language) only if the benefits outweighed the performance hit compared to C. Because computers, software, and instruction-sets are so complicated now (in 2010), this general rule is not always true; interpreters can be smarter than compilers in some cases and may actually run the same algorithm faster than a compiled program. (Most notably, intelligent software branch prediction, dynamic code profiling, and cache-aware prefetching make this sort of improvement possible - and though these improvements could be made in compiled code, they are often too platform-specific to do well). Languages like Java specify a virtual machine, which again blurs the line between compiler and interpreter; on some machine architectures a Java VM is a true compiler; and on others, it is a true interpreter; and on still others, the Java implementation decides at compile-time whether to act as a compiler or an interpreter (selecting the option with the best performance for some particular code). Nimur (talk) 16:45, 10 October 2010 (UTC)[reply]
Traditionally, interpreters parsed the source code as it was running, executing routines in a 1:1 correspondence with the statements in the source code. A traditional compiler, on the other hand, had a separate "compilation" stage, where the source code was parsed and turned into machine code (the instructions that directly runs on the processor) some time (perhaps months) before the program was run. In recent years, however, this distinction has become more fuzzy, as a large number of languages now use bytecode compiling, where a program is compiled to "bytecode", an instruction set for a theoretical processor which does not have a 1:1 correspondence with the source code, but doesn't match a physical processor either. This has been extended to just-in-time compilation where the translation to bytecode (and then from bytecode to native machine instructions) happens dynamically as the program is executing. To confuse things further, most common modern processors use microcode, where the processor chip doesn't directly execute the machine code instructions it receives, but instead translates them internally into a different instruction set on which it operates. -- 174.24.199.14 (talk) 17:53, 10 October 2010 (UTC)[reply]
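As a toy illustration of the distinction (deliberately simplified, and not how real language implementations work), consider a one-operation language that can only add two numbers, handled both ways in JavaScript:

// An interpreter reads the source and does the work itself, every time the program runs.
function interpret(program) {
    var parts = program.split('+');                  // "parse" the source
    return Number(parts[0]) + Number(parts[1]);      // execute it directly
}

// A compiler translates the source once into another executable form;
// here the "target language" is an ordinary JavaScript function.
function compile(program) {
    var parts = program.split('+');
    return new Function('return ' + Number(parts[0]) + ' + ' + Number(parts[1]) + ';');
}

console.log(interpret('2+3'));   // 5 -- computed by the interpreter at run time
var compiled = compile('2+3');
console.log(compiled());         // 5 -- computed by the code the compiler produced earlier

The same blurring described above applies even here: a just-in-time system effectively calls something like compile() behind the scenes while the program is running.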

http post disabled

I need a proxy with HTTP POST disabled. The reason for this is that I only want to read pages on a particular site, but every proxy I have tried has previously been used by other people to post things and has thus been blocked from the site, so when I try to read a page I just get a banned message. Do any such proxies exist? I've tried searching desperately but I can't find any. They must have HTTP POST disabled. 124.84.247.143 (talk) —Preceding undated comment added 15:39, 10 October 2010 (UTC).[reply]

It's irrelevant whether POST is disabled. The problem is that the proxies you are visiting have been banned from the websites you are seeking. A server administrator can ban a proxy (or any other client) for any reason they want - whether POST is enabled or not. (Besides, "HTTP POST" seems like the least disruptive thing people do with anonymous connections - despite what you may have heard, this is probably not the reason the proxies have been blocked). Open proxy servers have a bad reputation, so many reputable server-operators block all of them without concern or consideration. Have you considered purchasing your own proxy? Have you considered using Tor (anonymity network)? How about accessing your websites from a public place like a web cafe or a library? Your workplace, university, or friends may be able to provide you with a proxy-server. In general, though, proxy servers are not public; and if you're looking for an open proxy for the purposes of preserving anonymity, you are better off using Tor; if you're looking for a proxy for some technical purposes (circumventing firewalls, performance/caching, ...), you are better off purchasing one of your own and administering it yourself. Nimur (talk) 16:32, 10 October 2010 (UTC)[reply]
No, all the proxies I have tried have been banned for things like "spamming" or "posting off topic threads" etc, they are not blanket proxy bans. So finding a fast proxy with HTTP POST disabled would let me read the site. I tried Tor but it's painfully slow and also banned. I don't care about anonymity I just want to read pages 124.84.247.143 (talk) 18:34, 10 October 2010 (UTC)[reply]
Why do you then need a proxy at all? Roger (talk) 19:01, 10 October 2010 (UTC)[reply]
Because this public computer is also blocked from reading the site 124.84.247.143 (talk) 19:29, 10 October 2010 (UTC)[reply]
What site is it? 87.113.56.9 (talk) 19:43, 10 October 2010 (UTC)[reply]
I hope the OP does understand that a user can "post" to a forum using only HTTP GET method. The proxies may have been banned because they were used to post spammy messages - but this is completely unrelated to whether they performed those message transactions using HTTP POST. Despite the name, HTTP GET can be used to upload data to a server; so if the server admin blocked something, it's because of bad behavior, not on account of the protocol in use. Nimur (talk) 07:44, 11 October 2010 (UTC)[reply]
My impression from their responses is that the OP (who asked the same question before Wikipedia:Reference desk/Archives/Computing/2010 September 1#Need Proxy), was aware it was poor behaviour that got people blocked, not the protocol used. Unfortunately they were under the mistaken impression disabling HTTP POST would be enough to stop this poor behaviour (spamming and other things) and therefore stop people getting banned. Nil Einne (talk) 19:37, 11 October 2010 (UTC)[reply]

Suffix code

What is a suffix code? --84.61.131.141 (talk) 16:21, 10 October 2010 (UTC)[reply]

We need some more context - suffix codes are used for everything from internet domain names to Chevy engine specifications. Maybe you want the list of Internet top-level domains, which specifies suffixes like .com, .org, and .us? Nimur (talk) 16:50, 10 October 2010 (UTC)[reply]
Other possible interpretations: suffix tree, suffix array. Paul (Stansifer) 17:44, 10 October 2010 (UTC)[reply]
It could be the reversal of a prefix code. I'd never heard the term before, but here's a paper that does use it that way. -- BenRG (talk) 02:15, 11 October 2010 (UTC)[reply]

Is the Klingon alphabet an example of a suffix code? --84.61.131.141 (talk) 10:18, 11 October 2010 (UTC)[reply]

You need to give us a better idea of what you mean by "suffix code" before we can answer that question. The Klingon language certainly uses suffixes to encode grammatical information - see Klingon language#grammar - but it also uses prefixes too. Gandalf61 (talk) 10:41, 11 October 2010 (UTC)[reply]

A suffix code is a code which is a reversal of a prefix code. --84.61.131.141 (talk) 14:27, 11 October 2010 (UTC)[reply]

Then you seem to have answered your original question. Any alphabet is trivially both a suffix code and a prefix code - no item in the alphabet is a prefix or suffix of any other item because every item is, by definition, a single character. Gandalf61 (talk) 15:53, 11 October 2010 (UTC)[reply]
This is true for pIqaD, but not for the Klingon Latin alphabet, which uses digraphs and even a trigraph. As far as I can see, the Klingon Latin alphabet is indeed a suffix code, though it is not a prefix code.—Emil J. 16:08, 11 October 2010 (UTC)[reply]
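For anyone who wants to test a concrete set of codewords against these definitions, here is a small sketch; the sample list is only a handful of the romanised Klingon letters and is purely illustrative:

// A prefix (suffix) code is a set of codewords in which no codeword is a
// proper prefix (suffix) of another codeword.
function isPrefixCode(words) {
    return !words.some(function (a) {
        return words.some(function (b) {
            return a !== b && b.indexOf(a) === 0;    // a is a proper prefix of b
        });
    });
}

function isSuffixCode(words) {
    // Reverse every codeword and reuse the prefix test.
    var reversed = words.map(function (w) { return w.split('').reverse().join(''); });
    return isPrefixCode(reversed);
}

var sample = ['n', 'ng', 'gh', 'tlh', 'q', 'Q'];     // illustrative subset only
console.log(isPrefixCode(sample));   // false: 'n' is a prefix of 'ng'
console.log(isSuffixCode(sample));   // true: no codeword in this sample ends another codeword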

xulrunner

How do you make mozilla xpi files work on xulrunner? 82.44.55.25 (talk) 19:33, 10 October 2010 (UTC)[reply]

You could check ActiveState Komodo's source code; it handles .xpi files. --Dereckson (talk) 21:04, 10 October 2010 (UTC)[reply]
I don't understand 82.44.55.25 (talk) 21:11, 10 October 2010 (UTC)[reply]

Superscalar CPU Pipelines

Hi. I was reading about the IBM POWER6 CPU here. On page 6, it says that each core is "7-way superscalar." Do they mean that it can complete seven identical microinstructions per clock cycle or that its pipeline consists of seven microinstructions? Or possibly they mean that it can execute seven non-identical instructions per clock cycle? I'm also curious how long its pipeline is.

Also, I read that the Pentium processor can execute two identical microinstructions in one clock tick. Since then, every Intel processor can execute three: [2] -- right?--Best Dog Ever (talk) 20:58, 10 October 2010 (UTC)[reply]

See Superscalar. It does mean that the CPU can (in the best possible case) complete 7 different instructions per clock cycle. However, that requires that there is sufficient parallelism in the code that 7 instructions can be executed in parallel without dependencies, and it requires that the instruction mix exactly matches the availability of functional units. But yes, modern superpipelined and superscalar processors can have hundreds of instructions in flight at any one time. --Stephan Schulz (talk) 21:49, 10 October 2010 (UTC)[reply]
What "n-way superscalar" means is that the processor can issue up to n instructions in a cycle. If the processor breaks instructions into microinstructions, then it can issue up to n microinstructions in a cycle. Some definitions of superscalar differ: some say instructions are issued, some say they are executed, but they should be identical in function since issued instructions will be executed, and executing or executed instructions have been issued. The instructions issued do not have to be identical in type (integer, floating point), nor do they have to be the same operation (addition, subtraction). Most of the time they are a mix of instructions since in some cases it is not worth the expense of having a processor be able to issue multiple instructions of a certain type (I will not provide examples since this is dependent on the application of the processor). Regarding pipeline length, the length of the pipeline is independent of the width of issue. You also mentioned instruction completion. Since the pipelines themselves do not have to be identical in length, the instructions issued in a cycle do not have to finish at the same time. It is common to find the integer pipelines shorter than the floating point pipelines, and for some execution units such as division to be unpipelined. Rilak (talk) 01:48, 11 October 2010 (UTC)[reply]
ok. Thank you very much. Good info.--Best Dog Ever (talk) 02:04, 11 October 2010 (UTC)[reply]
If I'm remembering correctly, all post-Pentium, pre-Core-2 Intel x86 processors can decode up to three x86 instructions per clock cycle. This obviously limits you to executing three instructions per cycle in the long term, no matter how many execution units you have. Core 2 can decode four per cycle. NetBurst (Pentium 4) had a micro-op cache to avoid the decoding overhead, so it might have been able to exceed the limit in some circumstances, but Pentium M and Core went back to the traditional undecoded cache (probably because undecoded instructions are much smaller, so you can cache more of them). -- BenRG (talk) 02:44, 11 October 2010 (UTC)[reply]
I don't think that the P6 was a three-way superscalar processor. The P6 had two simple decoders and one complex decoder. The simple decoders generated one microinstruction every cycle and the complex decoder could generate up to four. The maximum number of microinstructions generated on a cycle is thus six, unless there was a restriction on the parallel operation of the decoders (I can't remember if there was one or not). Since the P6 is capable of out-of-order execution, the microinstructions were held in a 48-entry buffer. The issue logic determined which microinstructions could be issued, and from these, up to five were issued in a cycle. Therefore the P6 is a five-way superscalar processor, although I don't recall if it was referred to as such. Rilak (talk) 05:14, 11 October 2010 (UTC)[reply]
After I wrote that the P6 is a five-way superscalar processor, I thought to myself, "Isn't it a bit odd that you can't remember the superscalar-ness of the P6?" So I searched Google Scholar and found no mention of the P6 being five-way. All results referred to it being three-way. But in a conference paper written by two Intel systems engineers ("High performance software on Intel Pentium Pro processors or Micro-Ops to TeraFLOPS"), I found this interesting snippet: "The Pentium Pro processor is often referred to as 3-way superscalar because of its ability to decode, dispatch, and retire three instructions per a clock." Rilak (talk) 05:38, 11 October 2010 (UTC)[reply]

Hotmail

What the hell is going on?! almost-instinct 21:51, 10 October 2010 (UTC)[reply]

Mine's fine. Random Googling produced a website called Downrightnow, which has a Hotmail page... Vimescarrot (talk) 22:27, 10 October 2010 (UTC)[reply]
It seems accounts have been losing access - sometimes for long periods - on a one-by-one basis. There is nothing about this on the Hotmail home page or on MSN, but on the Hotmail page on Facebook there are a couple of mumblings from MSN, something about accounts being hacked. Searching twitter for "Hotmail" + "down" / "not working" / "fucked" showed a definite but far from overwhelming pattern (the last of those search terms giving the most results) My recommendation, to avoid this happening to you, should anyone care, is to change your password. But that might be total nonsense. almost-instinct 08:47, 11 October 2010 (UTC)[reply]

Help: Skype changes my folder icon

Since installing Skype on my Windows 7 x64 computer, the icon of the containing folder [like "Program Files," just under a different name] has been changed to a folder icon which contains the Skype logo(!). Well, I know the rumors that Skype isn't exactly shy messing with users' computers... but this is going a bit too far! So I changed the icon back to what it used to be (i.e. just a plain Windows folder icon) via Properties... just to find that Skype didn't work anymore. I tried back and forth a bit... nothing. So I de- and re-installed the program... and now not only Skype, but also the modified icon is back. I still don't want it!!! To my utter surprise though, I haven't found anything about this problem on the Kraken.

Any ideas what's going on... and how to make sure my icons remain Skype-free while allowing me to use Skype? Thanks, thanks, Thanks for answering (talk) 23:28, 10 October 2010 (UTC)[reply]


October 11

MacBook trackpad problem

I have a 2007-era MacBook. If I press down with my palm to right of the trackpad (not touching the trackpad at all, but completely to the right of it), it will consistently register a click. To the left, no. Why is this happening, and can it be fixed? --140.232.178.118 (talk) 00:27, 11 October 2010 (UTC)[reply]

This is, I suspect, a mild design flaw. I have a 2006-7 era MacBook Pro that will occasionally move the mouse with pressure on the left palm rest. You might try turning on the 'ignore accidental trackpad input' option in the preferences. --Ludwigs2 01:12, 11 October 2010 (UTC)[reply]
I found the source of my problem: a swollen battery. When I remove the battery, the trackpad button works nice and crisp with no hypersensitivity. My 3-year Applecare has expired, so I'll have to see whether they're willing to replace it. --140.232.179.169 (talk) 01:57, 11 October 2010 (UTC)[reply]
Ah, interesting. FYI, you can get decent replacement batteries on-line for about 2/3 the cost of a direct-from-apple replacement, assuming applecare has expired. --Ludwigs2 17:15, 11 October 2010 (UTC)[reply]

Quick question - Nvidia cards

 Done I'd say

Hello, everyone. Quick question - Which card would be better for general work and watching movies (and having nice desktop effects): Nvidia 9400GT or 8600GT, upgrading from (no laughing, please!) an 8400GS? The bus is PCI-E. All three have 512 megs. The system is a P4 3.06 GHz with 2 Gigs of RAM running Linux. Because of space constraints in the box I don't have much to choose from. Prices for both of the new cards are similar. Thanks :) --Ouro (blah blah) 08:36, 11 October 2010 (UTC)[reply]

The 9400GT is the fastest. Are you sure about the 512 MB? Compare [3] and [4] and you'll see there's more graphics memory, faster graphics memory, and a faster GPU.--Best Dog Ever (talk) 09:28, 11 October 2010 (UTC)[reply]
Yes, I'm sure about the 512 MB. So You'd go with the 9400 despite it being in the entry-level range and the 8600 in the performance range as shown here? I do not do any graphics-intensive games, but I want the user interface, working on large documents in OOo and any possible graphics manipulation (simple photo trimming and editing mostly) plus of course films to be swift. --Ouro (blah blah) 09:57, 11 October 2010 (UTC)[reply]
The 9400GT is perfectly fine for your requirements (you would even be able to play most pre-2010 commercial games on that in lower resolutions). The later series is usually better when it comes to Nvidia due to tech changes in a rapidly changing sector of computing...Sandman30s (talk) 11:28, 11 October 2010 (UTC)[reply]
Thanks loads! I don't play games, I prefer to spend my free time away from the machine whenever I can. Cheers! --Ouro (blah blah) 12:09, 11 October 2010 (UTC)[reply]
I strongly disagree with the claim "The later series is usually better when it comes to Nvidia due to tech changes in a rapidly changing sector of computing". There are very many cases when a 'newer' card is a lot worse than an older card, whether Nvidia or ATI. This is even more acute if you're comparing the GeForce 9 series vs the later GeForce 8 series (8800 GT, 8600 GT etc., i.e. not the original 8800 GTS/G80), since AFAIK there weren't any major feature changes between the two. You'd need to look at the benchmarks to be sure, but from an educated guess and a quick search the 8600GT is in fact better than the 9400GT in nearly every way (the only areas I can think of where the 9400GT is likely to come out on top are power consumption and therefore heat; the 9400GT also has PCI-Express 2.0 support, but that seems a moot point since neither card is ever likely to get close to saturating a PCI-Express 1.0 link). Nil Einne (talk) 18:17, 11 October 2010 (UTC)[reply]
You can't compare the bottom of the range of 9000 series to top of range 8000 series. The top of the range of a new series is always better than the top of range of an old (9800GT vs 8800GT). However, comparing the 9600GT to top of range 8800GT, I think the 9600GT has the edge (I don't want to quote websites here, I've had both cards and the 9600GT just felt better on my computer). This comparison applies to every other Nvidia series change since about the 5000 series. Of course it is a generalization and there are exceptions, but clock speed is not everything... new series of cards always include new features and pipeline optimizations and because of hardware support for new features, takes away the need for that extra clock speed used for DirectX software calculations. Sandman30s (talk) 06:45, 12 October 2010 (UTC)[reply]
Who's comparing the bottom of the range to the top of the range? Also, to be frank, claims like 'just felt better on my computer' are silly, particularly without specifying under what conditions and with what applications this 'feel good' occurred (double blind? games?). This may not be the science desk, but it is still a reference desk. For some actual references, see [5]. (I'm not saying this is definite; a single review never is.)
I agree clock speed is not everything, but what 'new features and pipeline optimisations' are you referring to? As I've already said, comparing the later 8 series to the 9 series, there aren't really any significant new features. I hope you're aware that the 8800GT uses the same G92-based chip that is used in certain models of the 9800X line (well, basically all use the G92, but there were two versions of it, a 65nm and a 55nm).
Presuming it's clear from all benchmarks that the 8800GT is better or the same as the 9600GT, are you still going to insist that the 9600GT 'feels better'? (Of course you haven't specified the clock speeds and memory sizes of both cards but I'm presuming spec.) If so perhaps you could explain what causes this 'feel better' behaviour.
P.S. I hope you understand why I refer specifically to the later 8 series. It's something that I hope doesn't need to be explained to someone offering help on comparisons of various Nvidia cards. I also hope you understand the big difference between the move from say the 7 line to the 8 line and the 8 line to the 9 line.
P.P.S. I would point out this is getting a bit off topic anyway but I dislike claims likely to be misleading. The OP's question was about the 9400GT vs the 8600GT and from what I've seen so far there's no question in terms of performance, a 512MB 8600GT is better if it's running at the original 8600GT spec speeds. (I emphasise spec because it wasn't really established whether the 8600GT 512MB was at normal spec speeds, as I pointed out the 512MB version often uses slower memory). For example, something potentially relevant to the OP [6]. Nil Einne (talk) 11:34, 12 October 2010 (UTC)[reply]
I don't have the time nor the energy to hunt for references and data sheets and comparison charts and debate this any further. I agree that it wasn't very encyclopaedic of me to say 'it felt better' but sometimes first-hand experience with things beats a page full of numbers. You win. Thank you for your feedback. Sandman30s (talk) 17:59, 12 October 2010 (UTC)[reply]
The NVIDIA page I linked to says that the 8600 GT actually has 256 MB of memory. Like I said, it also has a slower GPU and the memory operates at a slower speed. Every part that impacts performance is slower in the 8600 than the 9400.
Nevertheless, if you won't be playing games, the 8600 GT will work fine.--Best Dog Ever (talk) 18:10, 11 October 2010 (UTC)[reply]
512MB 8600GTs definitely exist (relying on official specs to know what exists for graphics cards is usually a rather bad idea). Also, what on earth are you talking about? The page you linked to lists the memory speed of the 8600 GT as 700MHz and the 9400GT as 400MHz. (Both have a 128-bit bus.) How is that slower? The core clock is marginally slower at 540MHz vs 550MHz, and there's a bigger difference in the shader clock at 1140MHz vs 1400MHz, but the 8600GT has 32 stream processors while the 9400GT has 16, so the 8600GT is still likely to come out on top. All these specs are somewhat moot though, since the OP should be looking at the cards they're intending to buy, not generic Nvidia specs. That is particularly true for the 512MB 8600GT, since higher-memory cards often have slower clocks. Having said that, it's not clear to me why the OP wants to upgrade at all. I admit I don't know much about the Linux desktop, but is it really that demanding that the 8400GS isn't more than enough? Nil Einne (talk) 18:25, 11 October 2010 (UTC)[reply]
My mistake. Yes, it looks like the 8600 GT is faster. I don't know about Linux desktop effects, but I tested Vista Aero a while ago on a desktop with a GeForce 7300 GT, and it ran great.--Best Dog Ever (talk) 18:37, 11 October 2010 (UTC)[reply]
Wow, I didn't think... I must have looked at everything too superficially earlier on. Nil, especially Your input is appreciated. Truth be told, I'm just having concerns whether my graphics sometimes tend to be choppy because the card is inadequate or because the nvidia drivers for Linux are not perfect yet (might be both). It might be that this improves in the future (Fedora 14 is coming out in less than a month officially, the test release is already there). I guess what the 8400 GS has to offer should be more than enough to smoothly operate compiz and these certain effects I have on, but I'm just having doubts as to whether I need more power in there or not. Thanks for Your input, (and I can't think of another word to put here other than) friends. Good night. --Ouro (blah blah) 21:21, 11 October 2010 (UTC)[reply]
I would personally investigate more about what's causing any issues you encounter before bothering to upgrade. In particular, choppy graphics could be caused by other things, like the processor not keeping up, for example. (I'm not very good at remote diagnostics and don't know Linux, so I can't really offer much more help.) Nil Einne (talk) 11:34, 12 October 2010 (UTC)[reply]

HTML5 canvas typography

I want to print x² (x with a superscript 2) on an HTML5 canvas graph.

context.fillText('x<sup>2</sup>', 100, 100);

It doesn't work.

Certainly I can manually arrange the positions of each character so that it shows up as x² on the canvas.

I can also put rasterized images on canvas.

Is there a better solution?

Is there a way to create more complex things such as ²₁H (a superscript 2 and a subscript 1 before the H)? -- Toytoy (talk) 09:06, 11 October 2010 (UTC)[reply]

Perhaps overlay a normal markup layer over the canvas with z-index? --Sean 17:55, 12 October 2010 (UTC)[reply]
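A minimal sketch of that overlay idea (the ids, sizes and offsets here are made up): put the canvas inside a relatively positioned container, then absolutely position an ordinary HTML element on top of it and let the browser render the <sup> tag.
<div style="position: relative;">
  <canvas id="graph" width="400" height="300"></canvas>
  <!-- ordinary markup drawn on top of the canvas at the label position -->
  <div style="position: absolute; left: 100px; top: 100px; z-index: 1;">
    x<sup>2</sup>
  </div>
</div>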
Can you use Unicode, e.g. context.fillText('x²', 100, 100)? Graeme Bartlett (talk) 09:41, 13 October 2010 (UTC)[reply]
²₁H uses a subscript 1 in Unicode, but I don't know how to make the two characters combine; ²H is looking closer, with the superscript 2 as a combining diacritical mark. Graeme Bartlett (talk) 09:55, 13 October 2010 (UTC)[reply]

QList from Qt and C++ operator

Resolved
 – Aha moment right after I post this... antilivedT | C | G 12:03, 11 October 2010 (UTC)[reply]

I'm trying to use the indexOf() method from the QList class in Qt4, containing a custom class called "Resident". I've implemented the == operator in the Resident class as below:

bool Resident::operator == (Resident& resident) {
	return this->name == resident.getName();
}

But it gives the error no match for ‘operator==’ in ‘n->QList<T>::Node::t [with T = Resident]() == t’ at compile time if I try to use the indexOf() method:

QList<Resident> users;
int id = users.indexOf(someUser);

It works if I do it manually:

int id = 0;
QList<Resident>::iterator i;
for (i = users.begin(); (i != users.end()) && !(*i == someUser); i++) {
	id++;
}

But I would prefer getting indexOf() to work. How do I implement the == operator properly so that indexOf() would be happy? Thanks for your help. --antilivedT | C | G 11:58, 11 October 2010 (UTC)[reply]

And a few seconds later I've figured it out - the parameter of the == operator cannot be passed by reference (no & allowed). I've removed the & and it's all fine now. --antilivedT | C | G 12:03, 11 October 2010 (UTC)[reply]
The equality operator should accept constant objects both on the left-hand side (the hidden this parameter) and on the right-hand side (the explicit parameter). Thus your operator should be
        bool Resident::operator == (Resident const& resident) const { ... }
or possibly
        bool Resident::operator == (Resident resident) const { ... }
but probably the former, since the latter will cause the RHS argument to be copied.
The fact that your modification worked is something of an accident. Probably QList<T>::indexOf is implemented more or less the same way as your explicit search, with the test being (*i == argValue) and argValue having the type T const&. If the test had been written (argValue == *i) instead, your modified version would have failed as well. -- BenRG (talk) 21:50, 11 October 2010 (UTC)[reply]
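A minimal sketch of the const-correct version, assuming Resident keeps its name in a QString member; note that getName() must itself be declared const, or it cannot be called through the const reference:
#include <QString>

class Resident {
public:
    QString getName() const { return name; }   // const so it works on a const Resident

    bool operator == (Resident const& resident) const {
        return name == resident.getName();
    }

private:
    QString name;
};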

Windows product activation of XP VirtualBox virtual machine

I've installed Windows XP home edition as a VirtualBox virtual machine (PUEL license) under ubuntu 10.04, using an OEM CD, legit, not previously used. Everything seems to be running smoothly, and I will need to do a Windows Product Activation within a couple of weeks.

  • First question: Is there any reason to expect problems with the windows product activation?
  • Second question: If product activation is successful, will I be able to move the virtual machine to a different physical machine, maintaining the validity of the product activation? Note: I'm not asking whether moving the machine is acceptable according to Microsoft's EULA, but whether it will technically work, i.e. whether the virtual machine will detect that it has been moved.

Thanks, 95.34.148.81 (talk) 12:46, 11 October 2010 (UTC)[reply]

At my workplace, I seem to either create a new Windows 7 (VirtualBox) VM or copy an existing one to another computer at least once a week. We have a volume license, so activation is not a problem. Assuming XP works about the same, then 1) you will not have any problems with activation, but your license will be considered used, and 2) if you move it to another computer, you will need to re-activate it with a valid license; unless you have a volume license, then do not expect to be able to use the same license key again. 124.214.131.55 (talk) 14:18, 11 October 2010 (UTC)[reply]
But to what extent does the Windows software detect the hardware that the virtual machine is running in? For example, the MAC address of the network card that the virtual machine sees is not the MAC address of the physical card, and will not change when the machine is moved. When the hardware is emulated, how can Windows detect that the physical machine has changed? --95.34.148.81 (talk) 14:37, 11 October 2010 (UTC)[reply]
I'd recommend seeing our article Windows Product Activation as it lists the 8 categories of hardware that it checks/compares. OEM licences are sold for the original machine only (unlike retail which can be transferred). If you activate it on the Virtual machine then that would be using up your licence and it should be expected that trying to reactivate on a different machine would fail (since the hardware would be completely different). It doesn't matter that the original machine is virtual/isn't physical, it's still a machine so that's what would count. I'd recommend simply not activating it until it's installed on the proper physical machine to avoid breaking the licence agreement as well as activation woes you're sure to face.  ZX81  talk 15:00, 11 October 2010 (UTC)[reply]
Just adding to my previous post as I think I may have misunderstood your question. When you say moving to a "different physical machine" do you mean that on this new machine the Windows XP installation will still be virtualised? If so then you won't have any problems at all. You can literally install VirtualBox on the new machine, copy the virtual hard disk files across/setup the configuration as before and it'll continue to work with no idea that the outside hardware has changed and as such the activation won't be challenged either (assuming you set it up with the same memory/optical drive configuration etc).  ZX81  talk 15:04, 11 October 2010 (UTC)[reply]
Yes, exactly. I mean copying the .vdi file which implements the virtual machine, along with the stuff in ~/.VirtualBox, to a machine on which Ubuntu with virtualbox is installed beforehand. When I do this with the non-activated machine, it behaves exactly similarly, the activation countdown gives the same number of days left until activation etc. 95.34.148.81 (talk) 15:30, 11 October 2010 (UTC)[reply]
It will probably see the actual CPU's model and stepping and (if present and enabled) the processor serial number. The other hardware is emulated and shouldn't change. If you're moving between systems with identical CPUs and there's no PSN, then even that shouldn't change. Note that it is possible to detect that you're running within a particular VM product and take additional steps, but I've never heard of Microsoft doing that and I'm pretty sure that I would have heard about it if they did. -- BenRG (talk) 19:14, 12 October 2010 (UTC)[reply]
Thanks, everyone! --95.34.148.81 (talk) 11:57, 13 October 2010 (UTC)[reply]
I checked it out with this utility program (note: compiled program, use it only if you trust the website, or on "discardable" virtual machines). It confirms that BenRG is right, the real processor's details are reported. 95.34.148.81 (talk) 10:56, 14 October 2010 (UTC)[reply]

SVG indefinite repeat / radial gradient

I have this bit of code I've been working on; it's basically got several ellipse shapes overlaying each other, creating a circular rainbow. These, however, switch colours as time goes on. Basically I want them to change colour and back again, but then continue doing this indefinitely. The problem I have is: putting repeatCount="indefinite" into my code just repeats the colour changing back, and not the colour switching first. How do I put this action in correctly?

Below is an example of one of the circles; it switches from red to violet and back again, and I would like this to be continuous.

<ellipse id="ellipse7" fill="#EE82EE" cx="400" cy="600" rx="150" ry="150">
 <animateColor id="ellipse7" attributeName="fill"
 from="#FF0000" to="#EE82EE" 
  begin="0s"
  dur="10s"/>
  <animateColor id="ellipse7" attributeName="fill"
  from="#EE82EE" to="#FF0000" 
  begin="12s"
  fill="freeze"
  dur="10s"/>
</ellipse>

Also, one more question: can this effect be applied to ellipses with a radial gradient? Thanks in advance. 195.49.180.89 (talk) 15:03, 11 October 2010 (UTC)[reply]

As for the first question, you can merge the two animateColor elements to one, and put repeatCount="indefinite" to that:
<ellipse id="ellipse7" fill="#EE82EE" cx="400" cy="600" rx="150" ry="150">
 <animateColor attributeName="fill"
    values="#FF0000;#EE82EE;#EE82EE;#FF0000;#FF0000"
    keyTimes="0;.42;.5;.92;1"
    dur="24s"
    repeatCount="indefinite"/>
</ellipse>
Incidentally, since rx=ry, you could as well use circle instead of ellipse. As for the second question, you can animate gradients just like any other elements, for example
<svg xmlns="http://www.w3.org/2000/svg">
<defs>
  <radialGradient id="grad" cx="50%" cy="50%" r="50%" fx="50%" fy="50%">
     <stop offset="0" stop-color="white"/>
     <stop offset="1" stop-color="#FF0000">
        <animateColor attributeName="stop-color"
           values="#FF0000;#EE82EE;#EE82EE;#FF0000;#FF0000"
           keyTimes="0;.42;.5;.92;1"
           dur="24s"
           repeatCount="indefinite"/>
     </stop>
  </radialGradient>
</defs>

<circle id="ellipse7" fill="url(#grad)" cx="400" cy="600" r="150"/>
</svg>
Emil J. 18:11, 13 October 2010 (UTC)[reply]
more gradient fun
<svg xmlns="http://www.w3.org/2000/svg">
<defs>
  <radialGradient id="grad">
     <stop offset="0" stop-color="#F00">
        <animateColor attributeName="stop-color"
           values="#F00;#EE82EE;#F00"
           keyTimes="0;.5;1"
           dur="12s"
           repeatCount="indefinite"/>
     </stop>
     <stop offset="0" stop-color="#F00">
        <animate attributeName="offset"
           values="0;.5;0;.5"
           keyTimes="0;.5;.5;1"
           dur="12s"
           repeatCount="indefinite"/>
        <animateColor attributeName="stop-color"
           values="#F00;#EE82EE"
           keyTimes="0;.5"
           dur="12s"
           calcMode="discrete"
           repeatCount="indefinite"/>
     </stop>
     <stop offset=".5" stop-color="#EE82EE">
        <animate attributeName="offset"
           values=".5;1;.5;1"
           keyTimes="0;.5;.5;1"
           dur="12s"
           repeatCount="indefinite"/>
        <animateColor attributeName="stop-color"
           values="#EE82EE;#F00"
           keyTimes="0;.5"
           dur="12s"
           calcMode="discrete"
           repeatCount="indefinite"/>
     </stop>
     <stop offset="1" stop-color="#F00">
        <animateColor attributeName="stop-color"
           values="#F00;#EE82EE;#F00"
           keyTimes="0;.5;1"
           dur="12s"
           repeatCount="indefinite"/>
     </stop>
  </radialGradient>
</defs>

<circle fill="url(#grad)" cx="400" cy="600" r="150"/>
</svg>
Emil J. 16:49, 14 October 2010 (UTC)[reply]
Thank you, that is more help than I could actually have hoped for and the effect you have achieved in the last section of code is amazing. I shall be having an experiment with that to see what I can come up with. Good spot on the ellipse/circle too. I hadn't actually seen that simple error. Cheers :D 195.49.180.89 (talk) 13:46, 15 October 2010 (UTC)[reply]

Why is Word awful at saving documents as HTML?

For the first time in years, this weekend I tried taking a Word document and saving it as HTML in order to post it to a web site. I used Word 2007. The results were atrocious. There was nothing particularly complicated about the Word document — some inline photos with text set to wrap around them was the most complicated bit. The resulting HTML file was awful! The images were placed all wrong, the spacing between lines was wrong, the place where word wrap occurred on each line was wrong. It substituted Verdana for my font of choice, which was expected.

When thinking about why the premier document creation software from the world's premier software company (yes, that point is arguable) was so poor at this task, I could only think of a few logical possibilities: (a) The feature is crippled in order to try to get consumers to purchase a different Microsoft product that excels at this. However, when I tried importing the Word document into Publisher, the output was also awful. It just wasn't *as* awful. (b) Microsoft puts "Save as HTML" very low in its priority list and is satisfied with its current performance relative to the quality level of the entire product. (c) "Save as HTML" is a Hard Problem. I don't believe either (a) or (c) — I know that HTML must be more difficult than RTF because of the need for browser independence, variable page width ... but you should have seen how bad it was! Could (c) really be correct? Comet Tuttle (talk) 15:37, 11 October 2010 (UTC)[reply]

I think "converting to HTML" is not a "hard problem" as much as it is a paradigm shift. A Microsoft Office document is a (proprietary, controlled) file format that specifies a lot of things - text layout, image positioning, exact details of the way a page should be rendered. That is its purpose: it is designed to be a desktop publishing tool and not a purely-digital format. So when you try to convert from a fixed-form document into a "free-form" format like HTML, you have a lot of difficult decisions to make. To what extent should you (attempt to) exactly preserve formatting? Knowing that individual browsers may override your layout, font, and positioning instructions, can you even try to preserve the document's flow without mangling it horribly? On the other hand, if you punt on the export and strictly copy/paste the text and images and let the browser render them however it so desires, you have violated the user's expectations of formatting. So, it is not a problem of difficulty in translating the document format, it's trying to figure out exactly what the user wanted when switching from a controlled-format rendering engine to a free-format engine with text and graphical reflow. The two types of documents serve different purposes - there is not an exact one-to-one mapping between formatted- and free-format content; so the export engine does the best it can to heuristically decide which formatting decisions are relevant, preserve those, and free-flow the remainder. Nimur (talk) 16:04, 11 October 2010 (UTC)[reply]
You may have an option to save as type Web Page, Filtered. This will cut down on the CSS formatting. Also, Word 97 is much simpler in its HTML output, if you still have that around. Graeme Bartlett (talk) 10:06, 13 October 2010 (UTC)[reply]

Microsoft SQL Server

What is "Microsoft SQL Server" for? I have this in task manager taking up 70mb of ram and I've never ever used it. Is it doing something important in the background that's necessary to run Windows, or can it be turned off? If it can be turned off, how do you turn it off? I never installed it so I don't know where it's settings are or how to turn it off. The OS is Windows 7. 82.44.55.25 (talk) 18:49, 11 October 2010 (UTC)[reply]

Microsoft SQL Server is a database; it is a general-purpose utility that might be supporting some other program. It is often used by web-servers, developers (computer programmers), and "industrial users" to store data. But it can also be a support utility for a lot of other programs that you might have installed. You can try to stop or uninstall the server and see if any program stops working. Unfortunately, the number of programs that might use SQL Server is too numerous to list - it really could be anything - games, word-processors, email clients, music library programs, .... do you recall what you recently installed before you noticed SQL Server was running? Nimur (talk) 20:36, 11 October 2010 (UTC)[reply]
Thanks for the info. I only noticed the SQL server today but then again I only looked through the processes in process explorer today; so it could have been there since I installed Windows 7. I guess I'll just leave it alone 82.44.55.25 (talk) 22:31, 11 October 2010 (UTC)[reply]
I assume it's the compact version of SQL Server? I always remove it from my customers' computers. Nothing bad ever seems to happen. It doesn't come with clean installations of Windows, unless your computer's manufacturer installed it with all the other bloatware they put on your computer. It's included alongside certain extra programs, like Microsoft Visual Studio. I would remove it. You can always download it and install it again if you need it. The compact and express editions are free. I bet it degrades the start-up time of your computer, and it could also create security problems, given the fact that SQL Server is a networked program.--Best Dog Ever (talk) 00:16, 12 October 2010 (UTC)[reply]
Security is greatly improved in the newer versions. In MS SQL Server 2008 (and 2005 too, if I recall correctly), a great number of features are disabled by default, including remote connections. This is to reduce the surface area for attacks, requiring one to specifically enable the features that they want. In any case, if you wish to just disable it, open up Services and set the Startup Type for each SQL Server service to Disabled. The next time that you reboot, they will not be started. If you later find that you need it for something, you can always go back and set it to Automatic. 180.11.188.56 (talk) 09:24, 12 October 2010 (UTC)[reply]
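If you prefer the command line to the Services snap-in, something like the following (run from an elevated command prompt) does the same thing. The instance name MSSQL$SQLEXPRESS is only a guess -- check the exact service names shown in Services first:
sc config "MSSQL$SQLEXPRESS" start= disabled
net stop "MSSQL$SQLEXPRESS"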

Coq help

I have a fairly specific question about Coq, probably too specific for this desk. (Unless there are any experts here?) Is there a good (active) forum online where I can get good help? Thanks- Staecker (talk) 20:21, 11 October 2010 (UTC)[reply]

The Coq web page lists something called the "coq club" -- that would seem like the natural place. I somewhat misdoubt me that experts are in plentiful supply. Looie496 (talk) 20:36, 11 October 2010 (UTC)[reply]
You might try the Math Desk - there are a lot of people there with surprising levels of familiarity with obscure mathematical tools and techniques. Nimur (talk) 20:37, 11 October 2010 (UTC)[reply]



Google Earth / Streetview Crashing

Is it just me? If I click on a camera icon in Google Earth to fly into a Streetview picture, and then try to follow the road by successively clicking on subsequent icons, after repeating three or four times Google Earth crashes on me. I'm using Windows XP. Rojomoke (talk) 21:45, 11 October 2010 (UTC)[reply]

Is it laggy, or does the computer show symptoms of CPU and memory stress? If so, that is probably why. Sir Stupidity (talk) 07:10, 12 October 2010 (UTC)[reply]

attributes and simpleType in XML/XML Schemas

Simply put, how do I make the following code from an XML schema I am working on valid:

<xs:element name="ISSN"> <xs:simpleType> <xs:restriction base='xs:string'> <xs:pattern value='[a-zA-Z0-9]{4}-[a-zA-Z0-9]{4}'/> </xs:restriction> <xs:attribute name="IssnType" type="xs:string"/> </xs:simpleType> </xs:element>

The problem lies with the attribute - simpleTypes cannot have attributes. This element needs to be data type checked for that restriction/pattern, as well as have an attribute attached. xs:all, xs:simplecontent, xs:complexcontent and xs:sequence cannot have restrictions. I have agonized over this issue for hours trying to figure out a solution. Please advise.

--Baalhammon (talk) 22:05, 11 October 2010 (UTC)[reply]
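One common XSD pattern for this, as an untested sketch against the snippet above: name the restricted simple type, then wrap it in a complexType with simpleContent and add the attribute in the extension.
<xs:simpleType name="ISSNValue">
  <xs:restriction base="xs:string">
    <xs:pattern value="[a-zA-Z0-9]{4}-[a-zA-Z0-9]{4}"/>
  </xs:restriction>
</xs:simpleType>

<xs:element name="ISSN">
  <xs:complexType>
    <xs:simpleContent>
      <xs:extension base="ISSNValue">
        <xs:attribute name="IssnType" type="xs:string"/>
      </xs:extension>
    </xs:simpleContent>
  </xs:complexType>
</xs:element>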

October 12

Wanted: Offline Windows Live Installer Wave 2, English

Does anyone have it?

It was located at http://g.live.com/1rewlive/en/WLSetup.exe, which redirects to http://download.microsoft.com/download/3/6/e/36e9a77e-6eee-4b8c-b223-5d8b5b4e2a28/EN/WLSetup.exe. Unfortunately, the file has been taken down.

P.S. This is for my personal collection, so no newer versions and online installers, please.

118.96.161.28 (talk) 00:01, 12 October 2010 (UTC)[reply]

User Account

Hi There,

My user name is HappyGod. Recently I changed my password, and I am no longer able to login.

Please note that I am certain of my password, and have not forgotten it. I think you may have bugs in your change password functionality.

Anyhow, could I suggest that you include a "Forgot your password" option on the login page? In the meantime, could you please advise as to how I can regain access to my account?

Regards, Matt Vermeulen —Preceding unsigned comment added by 203.161.88.182 (talk) 08:11, 12 October 2010 (UTC)[reply]

For Wikipedia, you must go to the Help Desk and ask there. However, I think there is no way you can get it back, unless a bureaucrat does it... And yes, you can also suggest a "Forgot your password" option at the Help desk as well. The help desk can be accessed by clicking on the link at the top right-hand side of this page. Sir Stupidity (talk) 08:24, 12 October 2010 (UTC)[reply]
There is already a facility for forgotten passwords on the login page:
"If you entered your e-mail address when you signed up, you can have a new password generated. Click on the "Log in" link in the upper-right corner. Enter your user name, and click the button near the bottom of the page called "Mail me a new password". You should receive an e-mail message with a new random password; you can use it to log in, go to your preferences, and change your password to something you'll remember."[7].
Unfortunately, if you have changed your e-mail address and forgotten to update your profile, then this will not work, so you might just have to create a new account, but if this is the case, then ask at the help desk, as advised above. Dbfirs 09:28, 12 October 2010 (UTC)[reply]

Thumb drive security

I am using a U3 thumb drive and wish to minimise the impact if I lose it. So:

1. Is there a way to password protect individual files or folders on a thumb drive?

2. How can I easily backup/synchronise all files between two thumb drives (assuming all files are password protected)?

Also, I am looking for portable security software that I can install on thumb drives (antivirus software, virtual sandboxing, etc.). —Preceding unsigned comment added by 59.189.218.201 (talk) 08:17, 12 October 2010 (UTC)[reply]

TrueCrypt 82.44.55.25 (talk) 09:38, 13 October 2010 (UTC)[reply]
For hardware based encryption see Comparison_of_encrypted_external_drives and Ironkey#Competing_products.Smallman12q (talk) 13:04, 13 October 2010 (UTC)[reply]
I use TrueCrypt, too. I create an "encrypted file container" on the thumb drive of, say, 1GB, install TrueCrypt on all my machines, and then when I need the files from the thumb drive, I use TrueCrypt to mount the encrypted file container as my G: drive (or whatever). There is one significant disadvantage to this: If you take the thumb drive to a friend's house or a print shop or a client, then they also need TrueCrypt installed in order to get to any of the files. (I keep the TrueCrypt installer, unencrypted, on that thumb drive, of course, to take care of those situations.) Comet Tuttle (talk) 22:02, 13 October 2010 (UTC)[reply]
TrueCrypt can be run in portable mode directly from the usb drive [8] 82.44.55.25 (talk) 22:16, 13 October 2010 (UTC)[reply]

Is there a way to burn a CD from Ubuntu Live

Resolved

I am in the process of upgrading my Ubuntu OS to 10.10. I started installing and the installation failed with "unable to copy from CD". After some investigation I discovered that:

  • My computer is now unbootable - I have already overwritten the system partition
  • The CD image I downloaded is incomplete (I know I should have done an md5sum before installing).

I have now downloaded a new copy of the image and verified it (booting from the Live CD). If I can burn this image then I can continue. The only problem is that I am running from the LiveCD, and every attempt to remove the CD and insert a blank one results in failure (the CD-writing program crashes). Is there a way round this, or will I have to reinstall the previous version, write the CD, then install the new version again? -- Q Chris (talk) 11:12, 12 October 2010 (UTC)[reply]

Rather than burn a CD/DVD, do you have a usb drive (a flash drive will do) from which your computer will boot? You should be able to make a bootable USB system with Ubuntu Live USB creator. -- Finlay McWalterTalk 11:16, 12 October 2010 (UTC)[reply]
A good idea, but I'm afraid it's an old laptop and the BIOS doesn't support booting from a flash drive. I have started reinstalling the previous version, but if anyone does know a way it might help other people save time. -- Q Chris (talk) 12:44, 12 October 2010 (UTC)[reply]
I have now gone through the process of reinstalling the old system, burning the CD, then installing the new one. I will mark this as resolved, but if there is a "shortcut", other people might be interested. -- Q Chris (talk) 06:19, 13 October 2010 (UTC)[reply]

Game programming

How is a game like Doodle Jump likely programmed? Would the various platforms be given rules about where they can be placed, along with enemies, power-ups, etc. and then the game creates a new board for each new game played? This could provide an infinite number of maps. Or would the programmer put together maps, say maybe 100 or so, and the game picks one randomly at start up? Dismas|(talk) 12:59, 12 October 2010 (UTC)[reply]

If it's really infinite, like the article says, you'd use procedural generation to build content, based on pre-defined rules. If it's for something vital like the playfield itself (rather than textures, terrain, sound etc.), where a defect could cause an unwinnable scenario, you'd either use a set of generator rules that you could show were guaranteed to produce a winnable game, or you'd write a little analyser that checked a generated map to verify that it was winnable. A decent generator should be tunable to allow for a difficulty gradient: as the game progresses, magic swords get fewer, lava streams wider, and giant aliens more frequent. It can be a challenge to write generators that produce game content that, at the higher reaches, still feels credible. -- Finlay McWalterTalk 13:31, 12 October 2010 (UTC)[reply]
Or "procedural generation with hacky fudging". A friend of mine wrote a Oblivion-like adventure game, with fractally-generated terrain. But sometimes the randomly generated locations for things was such that you just couldn't walk from A to B. So he wrote a thing that detected this, and blurred the map over a wobbly pattern between A and B (like a giant finger that had descended from the sky, squishing the offending obstacle). It mostly worked okay, but if you knew what to look for, you could stand on a mountain top and see where A or B might be, as the fingerprint of the giant finger was a bit too visible. -- Finlay McWalterTalk 13:54, 12 October 2010 (UTC)[reply]
Interesting. Thanks! Dismas|(talk) 23:11, 12 October 2010 (UTC)[reply]

price/performance sweet spots for a Tower system.

What are some price/performance sweet spots for a Desktop system today? (I mean, perhaps you can get a really adequate desktop for office work for $50, and the next higher level gets you to a dual-core i5 with a graphics card that can run any modern game and has 4 GB of RAM, and this setup costs $275, and the next higher level you can get a quad-core i5 with 16 GB of RAM, an HD graphics card and a 1 TB hard drive, and this setup costs $470, and the next higher level you can get a quad-core i7 with 64 GB of RAM and dual HD graphics cards that work together, and 2 TB storage plus 256 GB solid-state storage, and this setup costs you $1300...) I am looking for the very best sweet spots YOU could assemble (with new-egg components). What are these systems, and these price-points? Thank you. 84.153.253.103 (talk) 16:41, 12 October 2010 (UTC)[reply]

This is probably not really a "reference" request. You might find a website like Tom's Hardware helpful for evaluating and comparing hardware performance; you already know about newegg... Nimur (talk) 16:54, 12 October 2010 (UTC)[reply]
This article I read last month was useful, but I don't necessarily agree with all of their choices. Check Tom's Hardware for more in-depth analysis. Coreycubed (talk) 20:20, 12 October 2010 (UTC)[reply]

When you buy a domain name separate from hosting

Do you tie the domain to your web server's IP address via a dashboard with the domain name vendor site or some other way? Thanks. 20.137.18.50 (talk) 19:46, 12 October 2010 (UTC)[reply]

Yes, assuming you were also using their nameservers, you'd use whatever DNS manager they have to set the A records (or any other DNS records) to your existing IP address. If you are just buying the domain from them, you can even point to another nameserver in many cases if you desire. Coreycubed (talk) 20:08, 12 October 2010 (UTC)[reply]
Let's say, for example, you registered "snorkspork.com". The registrar has to fill out the NS record, which goes to the root domain name system servers. That way, if anyone wants to know the actual IP address of chat.snorkspork.com, they ask their DNS server (which in turn asks a root server) for the NS record for snorkspork.com. That gives two addresses (a main and a backup) for name servers you control. They then ask one of those for the specific address for chat.snorkspork.com; that's probably a trivial question for a personal or small business domain, but it's a big deal for a large outfit like Google, where there are hundreds of xyz.google.com machines, and Google's public-facing name server has to tell visitors which one to send their traffic to. When you buy a web hosting package (say from Rackspace or Dreamhost or whomever) you get that name server as part of the deal. All that remains is for the public NS record for snorkspork.com to point to the name server at (say) Rackspace that knows about your account. How that comes to be, and how it gets changed, depends a bit on your setup. If you register a new domain, the registrar will ask you for those two name server IPs; most registrars have a dashboard that lets you change these. Alternatively your hosting company can change them on your behalf. Obviously if anyone could change the name servers for anything, it would be trivial to maliciously redirect the traffic from (say) google.com to hotnakedfatblokes.com, which would be bad. So changes to the NS record by third parties are often marked as "CLIENT TRANSFER PROHIBITED" (you'll see this in whois records a lot). Extensible Provisioning Protocol describes the framework under which different entities (hosting companies and registrars, mainly) interact, and this might also help. -- Finlay McWalterTalk 20:08, 12 October 2010 (UTC)[reply]
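As a purely made-up illustration of the two halves of that arrangement, in BIND-style zone syntax: the registrar publishes the NS (delegation) records, and the hosting company's name server answers with the actual addresses.
; delegation records the registrar publishes for snorkspork.com
snorkspork.com.        IN  NS  ns1.examplehost.net.
snorkspork.com.        IN  NS  ns2.examplehost.net.

; records served by the hosting company's name server
snorkspork.com.        IN  A   203.0.113.10
chat.snorkspork.com.   IN  A   203.0.113.11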

A beginner's language for databases or business?

I like the idea of databases as Attribute-value systems or the Entity-attribute-value model. Are there any beginner's languages that I could write and manipulate such a database with?

On a slightly different topic, are there any beginner's languages that are suited to writing business rules and procedures with? An important feature would be date and financial functions, and firing off responses when stated conditions are met. Or is COBOL still the one to use? I do not want to use a spreadsheet. Thanks 92.15.31.184 (talk) 20:02, 12 October 2010 (UTC)[reply]

If you do not want a full relational database, you might find Java Properties useful. These are simple attribute=value mappings (key-value pairs). The Java programming language has the advantage that it is widely used, is easy to learn, and will allow you to migrate to more elaborate database schemes (including relational databases) if you ever need to. According to many surveys, Java is the most widely used business programming language (or at least in the top 2 or 3). Nimur (talk) 20:20, 12 October 2010 (UTC)[reply]
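A minimal sketch of what that looks like with java.util.Properties (the file name and keys are invented):
import java.io.FileReader;
import java.io.FileWriter;
import java.util.Properties;

public class PropertiesDemo {
    public static void main(String[] args) throws Exception {
        Properties p = new Properties();
        p.setProperty("customer.name", "Acme Ltd");   // attribute = value
        p.setProperty("invoice.due", "2010-11-01");

        FileWriter out = new FileWriter("records.properties");
        p.store(out, "example data");                 // writes a plain text file
        out.close();

        Properties q = new Properties();
        FileReader in = new FileReader("records.properties");
        q.load(in);                                   // reads it back
        in.close();
        System.out.println(q.getProperty("invoice.due"));
    }
}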

That is way too complicated for me, and requires a lot of prior knowledge which I do not have. I'm looking for something suitable for a beginner, for occasional use. 92.15.31.184 (talk) 20:36, 12 October 2010 (UTC)[reply]

For a complete beginner, there is no reason that you cannot use Access. It is a front-end and database all in one. It is limited in functionality, but can do most things that people want to do. It is even used for full business applications. I've seen it used for inventory systems, cash register systems, and laboratory studies. -- kainaw 21:43, 12 October 2010 (UTC)[reply]

Thanks, but I cannot use Access as I deleted Microsoft Office from my HD long ago, and I intend to migrate to Ubuntu in the future.

Perhaps I can try another tack: would it be easy to write an AV or EAV database in BASIC or some other simple language? Thanks again. 92.15.31.184 (talk) 22:03, 12 October 2010 (UTC)[reply]

How about OpenOffice.org Base ? It is similar to Access in features and interface, but is entirely free software. Nimur (talk) 22:08, 12 October 2010 (UTC)[reply]
Any modern scripting language should be possible. I happen to like Python. It has data structures that support attribute/value pairs (dicts), and indeed the whole language is implemented around that design, although that is not visible at the first glance. --Stephan Schulz (talk) 22:12, 12 October 2010 (UTC)[reply]
(edit conflict)If you're intending to do some simple programming, you'll find most modern programming languages essentially implement an attribute-value system straight out of the box, either as an array or as an associative array. For an array, you refer to each record by number (e.g. "retrieve record # 4"); for an associative array you refer to each record by something like a name (e.g. "retrieve record 'jenny'"). A modern BASIC implementation like FreeBASIC can (apparently) do this; it's particularly easy to do (and pleasantly legible) in python. -- Finlay McWalterTalk 22:21, 12 October 2010 (UTC)[reply]
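A tiny sketch of that in Python, with invented field names -- each record is just a dict of attribute/value pairs, keyed by a name in an outer dict:
records = {
    "jenny": {"phone": "867-5309", "balance": 12.50},
    "acme":  {"phone": "555-0100", "balance": 0.00},
}

# look a record up by name, then read or change an attribute
records["jenny"]["balance"] -= 2.50
print(records["jenny"]["phone"])

# a simple "query": every entity with a non-zero balance
owing = [name for name, attrs in records.items() if attrs["balance"] > 0]
print(owing)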

October 13

Where are they now and How can I get them ?

Two questions. I just had my college bag stolen out of my car - it was an old laptop bag, so they might have thought it had a computer in it - along with irreplaceable lecture notes, it contained what is known as a PE ACA scientific calculator, made in China. I purchased this and another in 2004 in a supermarket in Christchurch for about eight dollars each, and have not seen them sold anywhere since. The beauty of it, is one can programme a number of one's own formulae and equations in, in addition to ones already put in, and I would like to know where I could buy any more, since I enjoyed using it, and even assuming I get my bag and notes back, I am sure I can kiss that one and my Casio 9750 good bye. The other one I bought is broken - I made the mistake of putting it in my back pocket to carry around. Silly. Either that, or are there other calculators that allow one to programme equations like that ? I believe I still have my FX 82, which was not in my bag at the time, but is lost somewhere in my car and I could look for it - I see it has letters, but I know not if that allows one to programme equations.

My second question concerns a calculator, may be Casio, that came out in New Zealand in the early eighties. We called it the Space Invaders Calculator, as it had a game where you matched a number you could control with numbers as they built up across the screen, and if your controlled number matched one of the approaching ones, you could shoot it down before you got invaded. I had one, but one of my sisters took it to school in 1983, and it got stolen. I have not seen one since 1989 or early 1990 - does anyone know if they still make them, and where can they be gotten from ? Thank You. The Russian Christopher Lilly 03:52, 13 October 2010 (UTC)[reply]

When I was in high school, some 15+ years ago, we used to program formulas and games into our TI-81 calculators. I've never seen a PE ACA (article?) but it sounds a bit the same as a TI. Dismas|(talk) 04:23, 13 October 2010 (UTC)[reply]


Thank You for that. If I cannot get another PE ACA, I shall try for a similar one of a different make.The Russian Christopher Lilly 06:47, 14 October 2010 (UTC)[reply]

External Hard Drives and USB ports

Hello, I own two USB external hard drives. I want to get another one, but my laptop only has two USB ports. Is there a device that is the same concept as a power strip, but for USB connections? If such a device exists, and I do buy it and connect my third USB hard drive to it, will I experience any problems (like all three hard drives competing for "resources"/"bandwidth" at once, slowing them down, or any other problem)? -- 24.251.101.130 (talk) 12:18, 13 October 2010 (UTC)[reply]

Yes, you need a USB hub. As all the devices on a hub share the same connection to your PC, they share the bandwidth. But in practice, unless you're doing things like copying data wholesale from one disk on the hub to another disk on the hub, they're unlikely to all be moving data simultaneously, so the bandwidth issue won't be too bad. -- Finlay McWalterTalk 12:45, 13 October 2010 (UTC)[reply]
Thanks. If you don't mind, I have another question. I have a 2002-era internal hard drive. Is it possible to convert the internal hard drive to an external hard drive? Would it be a better idea to convert the internal and use it directly or make a copy of the disk image and move it to a new drive? What would be the preferred method of doing the latter; a product or a service? -- 24.251.101.130 (talk) 13:12, 13 October 2010 (UTC)[reply]
Yes, it's possible, but it's not worth doing. You can buy a USB disk enclosure (these are slightly different depending on whether the internal drive is a 2 1/2 inch laptop drive or a 3 1/2 inch regular-size drive), transplant the internal disk into that, and then use it. But modern disks are so much bigger (more capacious) than a 2002 disk that it's not worth keeping the old one around. Ideally you'd clone the existing internal drive to one of your current external disks (with, say, Clonezilla), assuming there's space. Then you'd replace the internal drive with a larger modern one, and clone it back from the image on the USB. It looks like Clonezilla can resize the old, small partition to the size of the new disk. If your existing USB disks aren't big enough to accommodate the image of the internal disk, you can use a temporary adapter like http://www.amazon.com/dp/B0018MCGVU (it's just what I found with Google, not a specific recommendation of a brand) and clone directly from the old disk to the new one. -- Finlay McWalterTalk 13:30, 13 October 2010 (UTC)[reply]
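If you'd rather stay on the command line than use Clonezilla, a raw copy can also be done with dd, though unlike Clonezilla it won't resize the partition for you. The device names below are examples only -- check them with fdisk -l first, because swapping if= and of= overwrites the source:
dd if=/dev/sdb of=/dev/sdc bs=4M conv=noerror,sync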
e/c Sure it's possible, but you've got to work out if it's worth it (see here). A 2002 internal hard drive from a laptop - so unless it was a very expensive laptop (and even then...) it would be pretty small, it would be relatively slow compared to current drives, and surely it must be nearing the end of its life (a general rule of thumb with any hard disk is that it's not a question of if it's going to break down, but when). If you wanted to make it an external drive, I would personally recommend pulling the data off it, reformatting it, then putting the data back on, while putting a new disk in the machine with a clean installation of the OS. After eight years who knows what corruptions, etc, may have occurred, and data would quite possibly be badly fragmented and slower to access - copying the disk image would simply copy across all these problems. Reformatting and doing clean installations would help overcome these issues. IMO... --jjron (talk) 13:37, 13 October 2010 (UTC)[reply]

UND - Ubuntu's Not Debian: difference in "/etc/alternatives" system

I recently wanted to write a wrapper script around /usr/bin/mail and noticed that on Debian 5 (Lenny), it's a symlink to /etc/alternatives/mail, which is another symlink pointing to /usr/bin/bsd-mailx, while on Ubuntu 8.04 (Hardy Heron), /usr/bin/mail is the actual binary. Ubuntu does have an /etc/alternatives directory, but it doesn't contain a link to the mail program.

I like the idea of using /etc/alternatives to link to my wrapper script. Is there anything I can do to use it on Ubuntu, without breaking package update mechanisms (i.e. when a new mail_somethingorother.deb gets rolled out, it will not get confused about the symlink in place of the expected executable)? -- 78.43.71.155 (talk) 13:38, 13 October 2010 (UTC)[reply]

Start your script with something like:
if [ -f /etc/alternatives/mail ]; then
    mail=/etc/alternatives/mail
else
    mail=/usr/bin/mail
fi
and then just use $mail to refer to the binary. --Sean 14:50, 13 October 2010 (UTC)[reply]
(edit conflict) Um, no, that's not going to help me achieve what I want to do.
The script snippet you posted is unnecessary, as /usr/bin/mail points to /etc/alternatives/mail, which in turn points back to the proper mail binary (/usr/bin/bsd-mailx).
What I want to - and can - do on Debian is changing the symlink from /etc/alternatives/mail to my own /usr/local/bin/mail-wrapper (which in turn calls /usr/bin/bsd-mailx), so that every program trying to use /usr/bin/mail will run my wrapper script instead.
old:
/usr/bin/mail -> /etc/alternatives/mail -> /usr/bin/bsd-mailx
new:
/usr/bin/mail -> /etc/alternatives/mail -> /usr/local/bin/mail-wrapper (which calls /usr/bin/mailx when done)
What I'm looking for is a way to activate the /etc/alternatives system (which is active for other binaries on Ubuntu, just not /usr/bin/mail) so that replacing /usr/bin/mail with my wrapper won't break the next package update for mail_whatever_foo.deb. -- 78.43.71.155 (talk) 15:54, 13 October 2010 (UTC)[reply]
You can configure any program to use the Alternatives system by following these instructions for update-alternatives (or using a graphical configuration tool like GAlternatives that wraps the command-line process). In Ubuntu, you would only have an /etc/alternatives/mail setup if (a) you installed multiple mail programs, and (b) the configuration-script in apt properly configured the alternatives for you. So if this is not already the case, you will need to configure it with the update-alternatives tool. Nimur (talk) 15:49, 13 October 2010 (UTC)[reply]
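The generic shape of that is sketched below (the priority number is arbitrary). One caveat: if /usr/bin/mail is currently a real binary rather than a symlink, as on the Ubuntu system described above, update-alternatives may warn and leave it alone, so the existing file may have to be dealt with first.
sudo update-alternatives --install /usr/bin/mail mail /usr/local/bin/mail-wrapper 60
sudo update-alternatives --set mail /usr/local/bin/mail-wrapper
sudo update-alternatives --display mail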
Interesting. So that means my Debian system contains another mail program, otherwise it wouldn't be using /etc/alternatives/mail. Now you got me curious. I'll have to check which one that is. :-) -- 78.43.71.155 (talk) 15:54, 13 October 2010 (UTC)[reply]
Okay, a quick check gives the following results:
  • Debian uses /etc/alternatives/mail even though there is only one mail program installed
  • Ubuntu has two packages containing /usr/bin/mail that cannot be installed simultaneously: Installing one will uninstall the other, and attempting to install both at the same time gives: mailutils: Conflicts: mailx followed by an abort.
Nimur, since you said any program can be set up to use the Alternatives system - is there a way to do so without breaking updates? If so, I couldn't find it in the link you provided. I know I could just do
echo '#!/bin/bash' >/usr/local/bin/mail-wrapper
echo '#do something here' >>/usr/local/bin/mail-wrapper
echo '/usr/bin/mail-real "$@"' >>/usr/local/bin/mail-wrapper
chmod 755 /usr/local/bin/mail-wrapper
mv /usr/bin/mail /usr/bin/mail-real
ln -s /usr/local/bin/mail-wrapper /etc/alternatives/mail
ln -s /etc/alternatives/mail /usr/bin/mail
#(or use update-alternatives for the two steps above)
...but I'm afraid that the next update will break this, either aborting because /usr/bin/mail doesn't look as expected, or overwriting either the /usr/bin/mail symlink or my /usr/local/bin/mail-wrapper script with the patched binary. -- 78.43.71.155 (talk) 16:15, 13 October 2010 (UTC)[reply]
Well, I think your work-around is functional, but it will probably get overwritten by the next update, unless you manually disable update for mail packages and hand-tune this each time you update mail tools. Alternatives is pretty robust; your actual program won't be uninstalled; but your "currently-selected" may get mucked around with. (I'm not sure exactly what conditions that will occur under, though). Maybe a shell alias (e.g. a setting in your .bashrc) is a more suitable workaround than moving/changing the contents of /usr/bin/ or /etc/alternatives/ ... ? A shell alias will never be modified by any update. The biggest issue I've found with using them is "inconsistent" behavior between scripts that execute with- or without- parsing your alias definition file (usually your .bashrc, but you could use some other login-script or dot-file). Nimur (talk) 17:37, 13 October 2010 (UTC)[reply]
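For reference, the alias version is just one line, e.g. in ~/.bashrc; note that it only affects interactive shells that read that file, so daemons which call /usr/bin/mail directly will not see it:
alias mail='/usr/local/bin/mail-wrapper'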
Is there a place where I can put a system-wide alias?
The reason why I'm trying to wrap /usr/bin/mail in a shell script is that I want/need to log (to syslog, a text file, or a database - not sure yet what I'm going to use) all outgoing mail sent by the various daemons on my system, as some of it might be useful for debugging purposes, but I don't want every single bit of information ending up in my inbox.
That is why I want the wrapper to scan for "known harmless" subject lines (like "CRON-APT completed on..." telling me that it failed to download updates - at 2am) and have it skip over the line that calls /usr/bin/mail in that case. I know I could set an empty mail address in crontab, but I don't want the messages to get lost completely, I just don't want them in my inbox. -- 78.43.71.155 (talk) 18:00, 13 October 2010 (UTC)[reply]

Pricing question

A new computer with 4GB of ram is £300 from Pc World. I heard you can get 16gb of ram with fast cpu and etc for the same price if you build it yourself. is this truth? —Preceding unsigned comment added by 71.197.38.32 (talk) 13:51, 13 October 2010 (UTC)[reply]

Well, my go-to for RAM, Crucial.com, offers a pair of 8GB sticks for £977.59 (inc. VAT). If you have a 4-slot motherboard, you could plug in 4 x 4GB sticks for £112.79 each. Those are DDR2 SDRAM. A pair of 8GB sticks of DDR3 would be between £751.99 and £902.39 depending on the data rate. Crucial prices are sometimes a bit more expensive than others, but I couldn't find anywhere which would sell 16GB of memory for £300 (never mind budgeting for a CPU, motherboard etc.) Also, if building/upgrading a computer yourself, you need to remember to budget for other items such as cooling fans, power supplies and possibly additional cables and connectors. --Kateshortforbob talk 14:25, 13 October 2010 (UTC)[reply]
There was a time when you could save quite a proportion of the cost of a computer by building it yourself, and it is still a good option if you need a non-standard configuration, but if the £300 model from PC World meets your requirements then there is little financial advantage to be gained in building it yourself because the builders of the PC World units can purchase parts in bulk at preferential rates. They also provide a guarantee that your computer will work first-time and that the components are compatible. Dbfirs 15:52, 13 October 2010 (UTC)[reply]
No, it's not truth, for the reasons Dbfirs writes above. I'll add that when you build the computer yourself you will probably choose improved-quality components — for example, HP sells many PCs with puny 300W power supplies — but the cost is not going to be lower than a commodity PC from HP or Dell. Comet Tuttle (talk) 18:29, 13 October 2010 (UTC)[reply]

Javascript

Resolved

In javascript for greasemonkey, how would you select the following checkbox:

<input type="radio" name="example" id="example2" value="1234"> <label for="example2">Checkbox</label>

82.44.55.25 (talk) 17:46, 13 October 2010 (UTC)[reply]

document.getElementById('example2').checked=true; -- kainaw 17:48, 13 October 2010 (UTC)[reply]

Thanks 82.44.55.25 (talk) 17:56, 13 October 2010 (UTC)[reply]

How do I get rid of this unwanted, unhelpful search popup?

I use Internet Explorer 8 with Windows XP. This very unhelpful popup has recently started putting in an appearance whenever I use a search function on a website - any site. I really want to exterminate this irritating thing but I can't even identify it or find it anywhere on my PC. Screen capture image here [9] Roger (talk) 17:54, 13 October 2010 (UTC)[reply]

Sounds like malware. Install and run Spybot Search and Destroy and it ought to be able to take care of this. --Mr.98 (talk) 18:01, 13 October 2010 (UTC)[reply]
Resolved
Thanks! Spybot found a whole bunch of stuff including the offending popup thing. Roger (talk) 19:15, 13 October 2010 (UTC)[reply]
I urge you, today, go to Wikipedia:Reference desk/Computing/Viruses and do what it says in step 4 about user accounts. If you regularly use an account without administrator rights then it greatly reduces how badly you can be affected by malware. Comet Tuttle (talk) 19:35, 13 October 2010 (UTC)[reply]

October 14

How to fix Windows 7 font

I am having a bitter experience reading fonts in my W7 OS. After installing Microsoft .NET Framework 4.0 (64-bit version) and some other software, the font shape suddenly changed and now it looks blurry. Even fonts on different websites are behaving in the same manner. I checked the Fonts folder on the "C" drive and also the browser options (Mozilla and IE8). All of them are in a normal state. I couldn't fix them yet. How can I rectify this? I am thinking of reinstalling the OS. Thanks--180.234.32.4 (talk) 05:19, 14 October 2010 (UTC)[reply]

Try disabling ClearType. 180.11.188.56 (talk) 10:17, 14 October 2010 (UTC)[reply]

Choice of data archive format

I have an application that involves a large number of data files, which are grouped into sets. Each set of data files consists of about 1000 files, each of which is about 2 MB. At any time, about 20000 such file sets need to be easily accessible. When a file set is generated, the individual files are generated one by one until the full set is collected. What would be a good archive format for combining the individual files in a set into a single file object? Are some archive formats better than others? Since each file set is built up incrementally, adding a file to an archive should not be an expensive operation. The archive format should also allow easy random access to the files in a set. Any suggestions? My thanks in advance. --173.49.77.140 (talk) 11:33, 14 October 2010 (UTC)[reply]

By my calculations, that is 40 terabytes... which is very high end indeed. I don't think standard file archiving is going to help you very much. What is your budget for implementing a solution? An RDBMS like Oracle can handle that kind of volume by storing those files in LOBs (Large OBjects, or the 'single file object' you are referring to)... but it's going to take pretty long to direct-load (incrementally and with simple compression), and your licensing is going to be very, VERY expensive. Oracle is excellent at reading high-volume data with appropriate SQL commands. Then again, maybe you meant 40 gigabytes, which opens up a whole lot of other cheaper and free options. Sandman30s (talk) 13:19, 14 October 2010 (UTC)[reply]
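Purely as an illustration of the cheaper end (not something recommended above for the full 40 TB case): an ordinary container format such as zip already offers the two operations asked for, incremental append and random access by member name. A sketch in Python, with invented file names:
import zipfile

# append one newly generated file to the set's archive; ZIP_STORED skips
# compression, so adding a member is mostly a copy plus an index entry
with zipfile.ZipFile("set_00042.zip", "a", zipfile.ZIP_STORED) as archive:
    archive.write("run_0997.dat", arcname="run_0997.dat")

# later: random access to a single member without unpacking the rest
with zipfile.ZipFile("set_00042.zip", "r") as archive:
    data = archive.read("run_0997.dat")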

'Site Build It' / www.SiteSell.com

What is "Site Build It" ?( company name = www.SiteSell.com ) Found Wikipedia page for Ken Evoy (founder of company / product) but nothing to explain "Site Build It". It is NOT 'software'- NOT a 'download' ( is IS a 'Subscription' e-Learning "Process" I think ?!?? ) It is NOT in your CMS 'computer management system' List - it is NOT an "entry" in ANY of your many 'Lists' or 'Comparables' - It IS a mystery ! Maybe, it IS a "Business Building System" or maybe a "Website Builder" or maybe a "Blog Maker". WYSIWYG - Yes - But also needs SOME HTML knowledge ? Cannot find it in any of your Lists of 'Editors'. Is there nothing COMPARABLE with it ? Is it a Web 2.0 application ? Or something else ? Would be very grateful indeed for your kind assistance, if you can help. —Preceding unsigned comment added by 83.67.57.245 (talk) 11:35, 14 October 2010 (UTC)[reply]

web cache

I read the web cache article but I don't fully understand. Would sending all internet requests through something like Polipo on my own computer reduce bandwidth and increase speed, or are the benefits from web caches only noticeable on large scale implementations? 82.44.55.25 (talk) 14:02, 14 October 2010 (UTC)[reply]

The portion of content that is fetched more than once is going to be the portion of bandwidth that is saved. This is a good application for Amdahl's law. You have P as the portion that is fetched more than once and S as the speedup you get from using cache. Then 1 / ((1 - P) + P/S) is a measure of your overall speedup. By estimating values for S and P, you can see how much of a benefit you would get. -- kainaw 14:18, 14 October 2010 (UTC)[reply]
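As a made-up worked example: if 30% of requests could be served from the local cache (P = 0.3) and a cached fetch is ten times faster than going to the network (S = 10), the formula gives roughly a 1.4x overall speedup -- noticeable, but nowhere near 10x.
P = 0.3    # fraction of requests the cache can serve
S = 10.0   # how much faster a cached fetch is

overall = 1.0 / ((1.0 - P) + P / S)
print(round(overall, 2))   # prints 1.37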
Thank you for the answer, but I don't understand maths or equations. Could you simplify the answer for me? 82.44.55.25 (talk) 14:32, 14 October 2010 (UTC)[reply]

outlook envelope on the clockbar of winxp pro

I right-clicked on it and selected 'hide'. Is there a way to make it appear again? t.i.a. --217.194.34.103 (talk) 15:33, 14 October 2010 (UTC)[reply]