Wikipedia:Reference desk/Archives/Computing/2012 November 21

Computing desk
Welcome to the Wikipedia Computing Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.


November 21

C++ constructors

Suppose that I don't want a subclass's constructor to call the superclass's constructor. I also don't want to make a fake superclass constructor. I believe that both are against my religion. Is this possible? --140.180.241.187 (talk) 06:52, 21 November 2012 (UTC)[reply]

I am not sure I understand your question, but if I understand you right, then the answer is no. That is, the language is explicitly constructed to prohibit what you want to do.
I only have the 2003 language standard at hand. Paragraph 4 of chapter 12.6.2 ("Initializing bases and members", [class.base.init]) says that if a given base class is not named in the constructor init list of a constructor of a subclass, and the base class is a non-POD class, then the subobject inherited from the base class is default-initialized, i.e. the default constructor of that base class is implicitly called. Furthermore, by paragraph 5 of chapter 12.1 ("Constructors", [class.ctor]), if there is no user-defined default constructor and the default constructor cannot be implicitly defined, the program is ill-formed.
Anyway, why would you want to leave the member variables that were inherited from the superclass uninitialized? — Tobias Bergemann (talk) 13:15, 21 November 2012 (UTC)[reply]
I've just had a quick look at N3337, the working draft most similar to the published C++11 standard. Here the situation is somewhat more complicated because now constructors can delegate their work to other constructors, but I think what I wrote is still true, see 12.6.2/8. — Tobias Bergemann (talk) 13:27, 21 November 2012 (UTC)[reply]
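A minimal sketch of what the two paragraphs above describe (hypothetical class names, C++03 and later):

    #include <iostream>

    // A base class whose only constructor requires an argument, so it
    // has no default constructor and cannot be default-initialized.
    struct Base {
        explicit Base(int v) : value(v) {}
        int value;
    };

    struct Derived : Base {
        // The base must be initialized here. Writing "Derived() {}"
        // instead would ask the compiler to default-initialize the Base
        // subobject; since Base has no default constructor, the program
        // would be ill-formed and fail to compile.
        Derived() : Base(42) {}
    };

    int main() {
        Derived d;
        std::cout << d.value << '\n';  // prints 42
    }

In other words, the base subobject is always constructed by some base-class constructor; the language leaves no way to skip that step.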
What is being talked about here is a way of breaking the basic contract for use of a class. If a class is designed to guarantee something, and its subobject could simply be left zeroed because someone omitted a constructor call, then the whole basis of classes is broken as far as C++ is concerned. If you include the definition of the class in the same module, or there is cross-module optimization, then there may be opportunities to inline any required work. Dmcq (talk) 17:52, 21 November 2012 (UTC)[reply]

32-bit or 64?

I just bought a Dell Optiplex 320 with Windows 7. I'm pretty sure that's the 32-bit version, but how can I tell for sure ? And does that model even support the 64-bit version ? StuRat (talk) 07:06, 21 November 2012 (UTC)[reply]

Control Panel > System and Security > System, I'm guessing from the screen print here. Ssscienccce (talk) 07:44, 21 November 2012 (UTC)[reply]
The tech spec [1] doesn't list Windows 7 as an OS, so it was probably installed by whoever he bought the system from. The CPUs listed all support 64-bit operating systems, but you'll have to check the system properties to know what version is installed. Windows key + Pause/Break is a quick key combination to bring it up. "System type" under the "System" section will tell you if it is 32 or 64. 209.131.76.183 (talk) 13:25, 21 November 2012 (UTC)[reply]
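For a programmatic check, there is also the Win32 IsWow64Process API; a minimal C++ sketch - only meaningful when compiled as a 32-bit executable, since a native 64-bit process also reports FALSE here:

    #include <windows.h>
    #include <iostream>

    int main() {
        BOOL isWow64 = FALSE;
        // For a 32-bit process, IsWow64Process reports whether it is
        // running under WOW64, i.e. on a 64-bit edition of Windows.
        if (IsWow64Process(GetCurrentProcess(), &isWow64)) {
            std::cout << (isWow64 ? "64-bit Windows\n" : "32-bit Windows\n");
        } else {
            std::cerr << "IsWow64Process failed\n";
        }
        return 0;
    }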
OK, thanks. It's 32-bit, as I suspected. StuRat (talk) 18:00, 21 November 2012 (UTC)[reply]
Update: whoever YOU bought the system from... I swear last time I read it I saw "My brother just bought", so apparently I am going insane. 209.131.76.183 (talk) 15:11, 21 November 2012 (UTC)[reply]
Must be, I never typed that. StuRat (talk) 18:00, 21 November 2012 (UTC) [reply]
Wikipedia has dangerous side effects; we should have a warning label: "Warning: This website is highly addictive and may cause insanity". Trio The Punch (talk) 19:20, 21 November 2012 (UTC)[reply]

Converting a CTRL key into an FN key

I have a lovely new laptop. It's quite small, so some keys have duplicate functions; e.g. the four arrow keys become, when FN is pressed, the HOME, UP, DOWN and END keys. Unfortunately the FN key is on the left-hand side and the arrow keys are on the right-hand side, so two hands are needed just to jump to the end of a page. The keyboard has two CTRL keys, one on either side. Would it be possible for me to convert the right-hand CTRL key into a second FN key? I am running Windows 7 at the moment. At some stage in the future I'll be changing over to a Linux OS, hopefully almost-instinct 10:05, 21 November 2012 (UTC)[reply]

It may depend on the make and model, but this can often be done by changing a BIOS setting - e.g. see here. Googling "swap fn ctrl keys" gives lots of promising-looking links. AndrewWTaylor (talk)
Sorry, I misread the question - the link above is about swapping the left-hand Fn and Ctrl keys. AndrewWTaylor (talk) 12:52, 21 November 2012 (UTC)[reply]
Keyboard remapping is usually simple (KeyTweak), but the FN key on laptops is more complicated. Some models that have a [fn] key do send detectable messages when that key is pressed/released. However, these messages are not standardized between different devices, and some devices with a [fn] key do not produce any message at all when the key is pressed (the key is handled at the hardware level on those devices). Which laptop do you have? Trio The Punch (talk) 13:32, 21 November 2012 (UTC)[reply]
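If you want to see for yourself whether a key produces anything Windows can observe, a low-level keyboard hook will show it; a minimal C++ sketch using standard Win32 calls (if pressing [fn] alone prints nothing, the key is being handled in the keyboard firmware):

    #include <windows.h>
    #include <iostream>

    // Low-level keyboard hook: prints the virtual-key and scan code of
    // every key press. If a key (e.g. [fn]) prints nothing, the keyboard
    // firmware is handling it below the operating system.
    LRESULT CALLBACK hookProc(int code, WPARAM wParam, LPARAM lParam) {
        if (code == HC_ACTION &&
            (wParam == WM_KEYDOWN || wParam == WM_SYSKEYDOWN)) {
            const KBDLLHOOKSTRUCT* k =
                reinterpret_cast<const KBDLLHOOKSTRUCT*>(lParam);
            std::cout << "vk=" << k->vkCode << " scan=" << k->scanCode << '\n';
        }
        return CallNextHookEx(nullptr, code, wParam, lParam);
    }

    int main() {
        SetWindowsHookEx(WH_KEYBOARD_LL, hookProc, GetModuleHandle(nullptr), 0);
        MSG msg;  // a message loop is required for the hook to be called
        while (GetMessage(&msg, nullptr, 0, 0)) {
            TranslateMessage(&msg);
            DispatchMessage(&msg);
        }
        return 0;
    }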
It's a Samsung NP530. Since [fn]+various other keys operate things like brightness and volume, I can see why that key might be treated differently :-( almost-instinct 15:18, 21 November 2012 (UTC)[reply]
Yeah, just been looking in the manual for Keytweak: "Keytweak cannot affect the Fn key of most laptops. This because the Fn key itself does not generate a scancode, but rather modifies the scancode of other keys on the keyboard. The scancode modifications take place upstream of KeyTweak’s functionality". I think that draws this conversation to a close. Boo. Thanks for your time. Actually while I've got your attention.... Why when one's machine has a 128GB drive does the computer say "Total size: 92.3GB Space free: 64.7GB" I figure the difference between the two numbers is thanks to the OS and all the other preinstalled junk, but where's the other 35.7GB? Is the computer reserving that for its own purposes? almost-instinct 15:27, 21 November 2012 (UTC)[reply]
You may well be able to do this with AutoHotkey. I've personally used it to do something very similar before. ¦ Reisio (talk) 18:08, 21 November 2012 (UTC)[reply]
Is the drive an SSD? Often, a significant percentage of an SSD's space is reserved for wear leveling. As cells in the memory are written and re-used, they slowly wear out. By swapping around where things get stored, the drive wears evenly, so it lasts much, much longer. This can only be done if there is enough unused space, so part of the capacity is reserved. The system may also have a hidden partition that is used for recovery. You can use Disk Management (in Computer Management) to see if this is the case. 209.131.76.183 (talk) 15:59, 21 November 2012 (UTC)[reply]
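A toy sketch of the wear-leveling idea (invented data structures, nothing like real SSD firmware): every write is redirected to the least-worn free block, which is only possible because there are more physical blocks than logical ones - hence the reserved capacity:

    #include <iostream>
    #include <vector>

    // Toy wear-leveling remapper: each logical block write goes to the
    // least-worn free physical block. The spare blocks (physical >
    // logical) are what make the shuffling possible.
    struct Flash {
        enum { LOGICAL = 4, PHYSICAL = 6 };  // 2 spare physical blocks
        std::vector<int> map = std::vector<int>(LOGICAL, -1);   // logical -> physical
        std::vector<int> wear = std::vector<int>(PHYSICAL, 0);  // erase counts
        std::vector<bool> used = std::vector<bool>(PHYSICAL, false);

        void write(int logical) {
            // pick the free physical block with the lowest wear
            int best = -1;
            for (int p = 0; p < PHYSICAL; ++p)
                if (!used[p] && (best == -1 || wear[p] < wear[best])) best = p;
            if (map[logical] != -1) used[map[logical]] = false;  // retire old copy
            used[best] = true;
            ++wear[best];
            map[logical] = best;
        }
    };

    int main() {
        Flash f;
        for (int i = 0; i < 1000; ++i) f.write(i % Flash::LOGICAL);
        for (int w : f.wear) std::cout << w << ' ';  // wear spreads over all 6 blocks
        std::cout << '\n';
    }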

It sounds like you have a recovery partition. It may be somewhat hidden. It contains a backup of the preinstalled OS and drivers. Trio The Punch (talk) 16:08, 21 November 2012 (UTC)[reply]

Try this trick; maybe you can get the keycode. If you press a button, the site displays the keycode. Trio The Punch (talk) 15:51, 21 November 2012 (UTC)[reply]

The keycodes for the arrows are 37, 38, 39, 40. With the [Fn] key down they become 33, 34, 35, 36. With the right-hand shift key down they stay 37, 38, 39, 40. Perhaps I can reassign the shift+arrow keys to 33, 34, 35, 36, and simply forget about the [Fn] key, which (unlike SHIFT and CTRL) doesn't produce a keycode. Thanks for the links to that useful-looking software. I've a friend who will help me play around with it. As for the drive - yes, it's an SSD. I chose it because of the speed benefits - all my big files will be living somewhere else - so I'm not fussed that it's doing this in the interests of speed. Thank you for your thoughts on that too. You're all very lovely :-) almost-instinct 19:33, 21 November 2012 (UTC)[reply]
I will mark this as resolved since you seem to be on the right track, but feel free to remove the {{resolved}} tag if you have more questions. Trio The Punch (talk) 20:19, 21 November 2012 (UTC)[reply]
Resolved

XKCD

What does U+202E refer to here? Rojomoke (talk) 12:26, 21 November 2012 (UTC)[reply]

Unicode character U+202E, RIGHT-TO-LEFT OVERRIDE, which reverses the direction of character presentation. Graeme Bartlett (talk) 12:03, 21 November 2012 (UTC)[reply]
Unfortunately explainxkcd.com is not up to date. Trio The Punch (talk) 12:06, 21 November 2012 (UTC)[reply]
Thanks Rojomoke (talk) 12:26, 21 November 2012 (UTC)[reply]
To explain the joke... standing guy starts some boring rant, black hat switches direction so standing guy now speaks in reverse. Read it backwards for what he is saying (and see if you can spot the small error). Astronaut (talk) 12:42, 21 November 2012 (UTC)[reply]
explainxkcd.com is up to date if you know where to look (they should really update their home page..). AndrewWTaylor (talk) 12:47, 21 November 2012 (UTC)[reply]
No, it is not up to date, you just agreed with me by disagreeing with me. Let's get married! Trio The Punch (talk) 20:23, 21 November 2012 (UTC)[reply]
Off-topic: the reversed 'THE' is misspelled... --CiaPan (talk) 14:10, 21 November 2012 (UTC)[reply]
How, exactly, does one use this character? I've seen YouTube videos get it from a character map; can it be done on a keyboard? I feel the answer must be "yes" because I've had it happen to me accidentally once or twice. 146.87.49.176 (talk) 14:16, 22 November 2012 (UTC)[reply]
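Programmatically it is just an ordinary code point; a minimal C++ sketch (assuming output goes somewhere that is UTF-8 aware and applies the Unicode bidirectional algorithm, e.g. a browser - many terminals don't):

    #include <iostream>

    int main() {
        // "\xE2\x80\xAE" is the UTF-8 encoding of U+202E (RIGHT-TO-LEFT
        // OVERRIDE). Text after it is presented right-to-left, so the
        // reversed "!dlrow" below should display as "world!" wherever
        // the bidirectional algorithm is applied.
        std::cout << "Hello, \xE2\x80\xAE!dlrow" << '\n';
    }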

Efficiency of bit-serial CPUs for massively parallel High Performance Computing and graphics processing

How does a Bit-serial architecture such as a Serial computer perform in terms of operations per joule, and operations per second per square millimeter of chip area? I am speculating about a system with bit-serial CPUs integrated on memory chips, preferably DRAM memory, with about 10 000 CPUs per GB of memory. Given that they would only need a few thousand transistors per CPU, this would be a small part of the chip area. Since each individual CPU is relatively slow, it would not need cache memory or other optimizations such as out-of-order execution to hide the memory latency. These optimizations totally dominate the area and power consumption of a normal CPU. Could this be a reasonable approach? Gr8xoz (talk) 16:08, 21 November 2012 (UTC)[reply]

BTW, I wanted to help you cache your mistake. :-) StuRat (talk) 17:55, 21 November 2012 (UTC) [reply]
The problem is: what would it be good for? There were a number of bit-serial massively parallel computers around thirty years ago; for instance, the ICL DAP had 4096 elements in a 64x64 array, which is a workable size. Much larger than that and there just aren't the problems that map properly without very much better interconnection. The later DAPs added 8-bit processors and some local 8-bit memory rather than expanding the array. There's not really much point in using anything except a proper processor with floating point nowadays for a large array, and for small arrays the effort of making them wouldn't be worthwhile for a manufacturer that I can see. So you'll need a killer application first. Dmcq (talk)
How about finite element analysis ? This only requires that each node interact with the adjacent nodes. StuRat (talk) 17:55, 21 November 2012 (UTC)[reply]
I think the wide market for powerful graphics cards and FPGAs shows that there is a market for simple computing elements working in parallel. GPUs are used more and more for general computing (GPGPU) applications. If a massively parallel bit-serial design could be used as a GPU but use less power and be more flexible, that could be a killer application. Other than that, there are a lot of embedded applications that currently use FPGAs to implement parallel computations. Some applications could be video/audio encoding/decoding, computer vision, data compression, scientific simulations and so on.
The ICL DAP had all elements working in lock step as a SIMD system, which considerably limits the range of problems the system could effectively solve. I actually think you could equally well define it as one special 4096-bit CPU rather than 4096 1-bit CPUs. I was thinking about full CPUs with floating-point operations in a serial implementation. Each CPU can access all memory but has fastest access to the 1 Mbit memory block that is closest. As I understand it, the most costly part of a system today is the data paths, not the logic, and this design minimizes data transfer. Since each CPU can operate independently with full memory access, it is much more flexible than a GPU. The question is: could it beat a GPU on performance for a given power and chip area? Gr8xoz (talk) 20:41, 21 November 2012 (UTC)[reply]
If you have a full processor you might as well access a larger chunk at once. Memories nowadays return a number of bits at once, and allow nearby data to be accessed faster, as the line it is in has already been selected and just a small chunk returned externally. By the way, another problem to cope with is error detection; a major reason the DAP only allowed access to the same address in each processing element was so error detection could run across each row and column of the array rather than having anything within the elements. Dmcq (talk) 00:17, 22 November 2012 (UTC)[reply]
Yes, a typical DRAM is organized as a number of matrices and accesses one row at a time, often a few hundred bits per row. The problem is that it takes about 10 ns to access one row. A parallel processor needs to wait, or use a cache large enough to hide the latency. In that time a serial processor operating at, let's say, 2 GHz reads about 20 bits per channel - one channel per operand, one for instructions, and one to write out the result - in total 80 bits. The transfers would obviously go through a few shift registers holding a row of data each, acting as a micro-cache of a few bytes. Normal computers do not have bit-error detection hardware; some memories have it, but that could easily be implemented at the DRAM row level. The DRAM row length should be optimized with regard to the number of CPUs likely to access the same memory matrix, their relative timing, the number of data channels and so on. Then of course the next question is how the different semiconductor processes would mix if it should be done on a single chip. Gr8xoz (talk) 01:17, 22 November 2012 (UTC)[reply]
Well, actually, error detection is less important nowadays, as one can replicate the difficult bits and detect errors that way. So you are actually talking about working internally in a bit-serial fashion in the processors. The big problem with that is multiplication, which takes time proportional to the square of the number of bits in a word. That, I believe, is why later DAPs had 8-bit local processors added. Dmcq (talk) 10:25, 22 November 2012 (UTC)[reply]
By the way, Ericsson recently acquired a firm that designed something along the lines you are talking about - not integrated with the memory, but no-one has bothered with that - and the chips are used mainly for video compression. The search term for that sort of stuff is 'associative string processors'. Dmcq (talk) 10:45, 22 November 2012 (UTC)[reply]
Yes, serial operation internally and, maybe more importantly, in the routing system/buses that connect the CPUs to the memory matrices. The actual read-out of a memory matrix is of course parallel. As I understand it, there is a mainly serial implementation of multiplication that scales linearly with the word length in both time and number of gates, called a serial-parallel multiplier or something similar. A pure parallel implementation scales the number of gates as the square of the word length. I will look up 'associative string processors' later; I need to return to work from my lunch break now. What do you mean by "replicate the difficult bits and detect errors that way" - do you mean repeating important calculations and checking that the results agree? Gr8xoz (talk) 12:18, 22 November 2012 (UTC)[reply]
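A rough C++ simulation of that shift-and-add scheme (a sketch of the algorithm a serial-parallel multiplier implements, not of any particular hardware): one operand is held in parallel while the bits of the other arrive one per cycle, so an n-bit multiply costs about n cycles with hardware that grows only linearly in n:

    #include <cstdint>
    #include <iostream>

    // 'a' is held in parallel; the bits of 'b' arrive serially, LSB
    // first. Each cycle does at most one parallel add and one shift,
    // so a 32-bit multiply takes 32 cycles.
    uint64_t serial_parallel_multiply(uint32_t a, uint32_t b) {
        uint64_t accumulator = 0;
        for (int cycle = 0; cycle < 32; ++cycle) {
            if ((b >> cycle) & 1) {                   // serial bit of b this cycle
                accumulator += uint64_t(a) << cycle;  // conditional parallel add
            }
        }
        return accumulator;
    }

    int main() {
        std::cout << serial_parallel_multiply(1234, 5678) << '\n';  // 7006652
    }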
By replicating I mean just duplicating the processor on the chip and comparing the outputs; that way you can detect errors in the processors as well as in the data. It can be cheaper than replicating all the hardware or running problems twice. Of course, if you don't want any downtime, duplicating or triplicating all the hardware is better anyway, and then you can get away with less hardware per processor for detecting errors. Dmcq (talk) 15:49, 22 November 2012 (UTC)[reply]
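A trivial sketch of that duplicate-and-compare idea (dual modular redundancy) - here the "processors" are just two calls to the same function, with a simulated bit flip to show how a fault is caught:

    #include <cstdint>
    #include <iostream>

    // Dual modular redundancy, simulated: the same work runs on two
    // "processors" and the outputs are compared. With only two copies
    // a mismatch tells you an error occurred, but not which copy failed.
    uint32_t compute(uint32_t x) { return x * 2654435761u + 12345u; }

    int main() {
        uint32_t p1 = compute(42);               // processor 1
        uint32_t p2 = compute(42) ^ (1u << 7);   // processor 2, one bit flipped
        if (p1 != p2) {
            std::cerr << "error detected: outputs disagree\n";
            return 1;
        }
        std::cout << p1 << '\n';
        return 0;
    }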

Partitioning hard disk in Windows 7

Is there a built-in utility to do this ? The 1 TB USB external hard drive in question is already partially occupied by movies, so I'd like to leave the occupied part in place. StuRat (talk) 18:20, 21 November 2012 (UTC)[reply]

http://technet.microsoft.com/en-us/magazine/gg309170.aspx Trio The Punch (talk) 19:14, 21 November 2012 (UTC)[reply]
Thanks. I tried that, but the "Shrink Volume..." option is grayed out for that partition, which is FAT32. The other hard disk can be shrunk, and is NTFS formatted. Is that the problem ? If so, am I just SOL ? StuRat (talk) 02:11, 22 November 2012 (UTC)[reply]
DiskPart is something of an underachiever (presumably to leave market space for the paid-for disk partitioning software). You're probably better off booting your computer from a Linux live CD (Ubuntu, Fedora, Debian, or gparted-livecd) and using gparted or KDE Partition Manager. 87.113.165.189 (talk) 02:31, 22 November 2012 (UTC)[reply]
I have Puppy Linux. Are either of those available on it ? StuRat (talk) 04:40, 22 November 2012 (UTC)[reply]
Do you have another drive that you could move the contents of that HD to, then repartition it, and then move them back? That would probably be safer. Bubba73 You talkin' to me? 06:24, 22 November 2012 (UTC)[reply]
I've used the partition editor in Puppy. It mostly works OK (I only had one instance of it screwing up the partition in some weird way). Like Bubba advises, it is safer to back up to another disk first. Astronaut (talk) 18:33, 22 November 2012 (UTC)[reply]
How do I run the partition editor ? StuRat (talk) 15:49, 23 November 2012 (UTC)[reply]
Googling "puppy linux partition editor" indicates that Puppy Linux (like a large number of Linux variants) uses the Gparted partition editor. Either look for "Gparted" or "Partition Editor" in the system menu, or run the command "gparted" at the command prompt as root or with sudo. (It's a graphical editor, even if you launch it from the command line.) -- 205.175.124.30 (talk) 21:41, 23 November 2012 (UTC)[reply]
Thanks. Tried it, but the resize option is again dimmed out. Does resize not work on FAT32 partitions ? StuRat (talk) 23:45, 23 November 2012 (UTC)[reply]
The partition may be dirty; scan it with Microsoft ScanDisk and make sure to cleanly unmount it in Windows. -- Finlay McWalterTalk 23:51, 23 November 2012 (UTC)[reply]
I couldn't find ScanDisk, but ran a defrag with no problems. That means it's "clean", right ? StuRat (talk) 15:28, 24 November 2012 (UTC)[reply]
And obviously make sure, if you're doing this in Linux, that all partitions on the target disk are unmounted. -- Finlay McWalterTalk 11:58, 24 November 2012 (UTC)[reply]
Hmmm, if the disk is unmounted, gparted's "Move/Resize" option is grayed out. If mounted, the option is lit, but only allows me to increase size, not decrease, for this partition. StuRat (talk) 15:44, 24 November 2012 (UTC)[reply]

How did the team making Eclipse (which itself is mostly written in Java) for Windows make it an exe?

I know there are various third-party applications out there for turning Java programs into Windows-runnable .exe programs, but how specifically did the folks who produce the Eclipse binary for Windows make that software an .exe and not a JAR? I read somewhere that Eclipse is built with PDEBuild, but it seems that that is something that uses Ant, which itself orchestrates the building of things Java - which doesn't answer the question (without plumbing the depths of the PDEBuild source) of how, at the root of it all, Eclipse - built with PDEBuild, which used Ant, which ran javac commands - ended up as an .exe file instead of a JAR. 20.137.2.50 (talk) 20:00, 21 November 2012 (UTC)[reply]

Eclipse starts from a native-code launcher, Eclipse Launcher, the job of which is to find an appropriate JDK (or, I guess, if you're not building for Java targets, a JRE) and launch javaw.exe. Doing this, rather than having the shortcut be an invocation of javaw.exe, gives a bit of resilience (e.g. in cases where someone uninstalls the JRE) - it should give a more useful error report than "executable not found". That page shows how the IDE can be started manually with an invocation of the java executable, rather than using the native launcher. -- Finlay McWalterTalk 20:29, 21 November 2012 (UTC)[reply]
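The general shape of such a launcher - not Eclipse's actual code; the JAR name and the reliance on JAVA_HOME here are hypothetical simplifications - is just: locate a Java runtime, build a command line, and hand off:

    #include <cstdlib>
    #include <iostream>
    #include <string>

    // A toy native launcher in the spirit of the Eclipse launcher:
    // find a Java runtime, then delegate to it. A real launcher would
    // also search the registry, well-known install paths, or a bundled
    // JRE, and would report failures in a friendly dialog.
    int main() {
        const char* javaHome = std::getenv("JAVA_HOME");
        if (!javaHome) {
            std::cerr << "No Java runtime found - please install a JRE or JDK\n";
            return 1;
        }
        // "app.jar" is a placeholder for the application's startup JAR.
        // (Quoting of paths containing spaces is omitted for brevity.)
        std::string cmd = std::string(javaHome) + "\\bin\\javaw.exe -jar app.jar";
        // std::system is the simplest possible hand-off; a production
        // launcher would use CreateProcess and wait on the child.
        return std::system(cmd.c_str());
    }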
If I were an engineer there I would continually surprise my bosses with miraculous speed improvements to Eclipse's startup routine, i.e. what is available while launching, how soon it's available, how fast it is to interact with in the first few moments. I'd do this by slyly, slyly, ever so slowly, translating Eclipse into C++ and writing it into the launcher. The other teams wouldn't even notice until it was too late! Since it's C++, they wouldn't even notice the modest size increase versus the rest of the bloated codebase! As they saw the C++-based launcher's footprint catching up with the rest of the project, their only thought would be "yeah, that was bound to happen eventually; wonder what took so long?" Little would they suspect that their precious Eclipse was about to be eclipsed. I would next perform a whole coup: having reimplemented all of Eclipse with a faster C++ version right in the launcher program, I would simply stop launching Eclipse at all! People would catch on, and start downloading just the 'streamlined' launcher. Muhahahahaha. Step one: take over responsibility for maintaining the launcher. Step two: the world. --178.48.114.143 (talk) 21:32, 21 November 2012 (UTC)[reply]
Guess what! You can get started right now! Here's the Eclipse Platform common code repository. Chances are very high that you won't have permission to make a commit to the official repositories, but you're encouraged to clone the git repository. If your changes are an improvement, it's very likely that you can make a strong case for inclusion. If, however, your suggestion is to re-architect the core platform code without telling anyone, that's not very good software engineering. Good communication about the intent and implementation of program code is a prerequisite for effective professional software development. Making changes that you think are wonderful, without consulting the many other experts who have worked on the project for a long time, is not necessarily the best way to make improvements. Nimur (talk) 00:48, 22 November 2012 (UTC)[reply]