
Wikipedia:Reference desk/Archives/Computing/2010 January 30

From Wikipedia, the free encyclopedia
Computing desk
Welcome to the Wikipedia Computing Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.


January 30


Cannot delete user accounts


I'm trying to delete a user account on Windows XP, but each time I click on the "Delete Account" button, the program freezes, I have to end the process, and as a result the account is never deleted. What do I do about this? 24.189.90.68 (talk) 00:42, 30 January 2010 (UTC)[reply]

I hate it when I am told to try this, but without knowledge of any known bugs in this area, I have to suggest re-booting the PC. In other words, shut it down and turn it OFF (maybe even at the power point). Wait a minute, turn it on and try again. --220.101.28.25 (talk) 04:12, 30 January 2010 (UTC)[reply]
You can try removing it from the command line. For example, simply typing net user bob /delete will delete the account named bob. -Avicennasis @@09:28, 30 January 2010 (UTC)[reply]

All I get is:

NET USER
[username [password | *] [options]] [/DOMAIN]
         username {password | *} /ADD [options] [/DOMAIN]
         username [/DELETE] [/DOMAIN]

And the account isn't password protected. 24.189.90.68 (talk) 00:40, 31 January 2010 (UTC)[reply]

OK, never mind the above, I found a way to delete the account through Computer Management with no problem, but thanks for the suggestions anyway. 24.189.90.68 (talk) 00:48, 31 January 2010 (UTC)[reply]

Java bytecode vs. machine bytecode


I've always heard that compiled-then-interpreted languages like Java run slower than machine code, but I've never heard by how much. I know it probably depends on many factors, but I'm just interested in an estimate of how many times faster machine language would perform a task, say, repeated manipulation of a large text file, over Java. Does using the Java Native Interface to do work-intensive operations make a big difference, or to truly get the speed benefits of machine code, do you have to avoid a Java implementation altogether? Thank you!--el Aprel (facta-facienda) 03:34, 30 January 2010 (UTC)[reply]

The traditional rule of thumb was that a bytecode interpreter cost you a factor-of-ten slowdown. But if the bottleneck is the network or the disk rather than the CPU then using bytecode makes no difference, and if the bottleneck is loading or caching code (which it actually can be in some large applications) then bytecode may be faster than native code because it tends to be more compact.
In any case, Java bytecode is usually compiled (JITted) to native code before being run these days, though possibly with less aggressive optimization than modern C++ compilers employ. There are also compilers that compile Java directly to native code, like gcj, which uses the same optimizing backend as gcc. I think performance problems with Java have less to do with the bytecode than with features of the language itself, like those I mentioned here. -- BenRG (talk) 06:18, 30 January 2010 (UTC)[reply]
As always, you have to benchmark in order to be sure of a particular speedup or slowdown - and it is highly situation-dependent. I can't tell you how many "super-optimized" FORTRAN codes I've seen that used unbuffered disk I/O. This particular detail, which entails less than half a line of Java code, could break the execution bottleneck of a huge class of computer programs. But if you are comparing direct implementations and execution time of native code vs. Java bytecode that are otherwise implementing identical operations, the native code will mostly execute faster. Nowadays, even this is not necessarily accurate - more instructions do not equate to longer execution time, because the CPU can do intelligent prefetching and branch prediction if the instructions are well-ordered; various instructions and operations of the x86 pipeline have different execution times; and as always, memory locality and cache performance will dramatically affect execution - probably more so than instruction count. So, I'd go for a compiler that is cache-aware, and a language which permits the user to ignore those details and let the compiler optimize them. Lately I've been mixing FORTRAN90, C, and Java with JNI. All I can say is, I wish the F90 would disappear. (Disappear__(), rather). I'm unconvinced by my benchmarks that it provides a comparative speedup at all. Nimur (talk) 18:03, 30 January 2010 (UTC)[reply]
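To make the buffering point above concrete, here is a minimal Java sketch (the file name big.txt and the empty loop bodies are placeholders, not anything from the thread): the only difference between the two loops is wrapping the stream in a BufferedInputStream - the "half a line of Java" that turns per-byte requests to the operating system into occasional large block reads.

import java.io.*;

public class BufferDemo {
    public static void main(String[] args) throws IOException {
        // Unbuffered: every read() call can turn into its own request to the OS.
        InputStream slow = new FileInputStream("big.txt"); // placeholder file name
        int c;
        while ((c = slow.read()) != -1) { /* process c */ }
        slow.close();

        // Buffered: same loop, but reads are now served from an in-memory
        // buffer that is refilled in large blocks.
        InputStream fast = new BufferedInputStream(new FileInputStream("big.txt"));
        while ((c = fast.read()) != -1) { /* process c */ }
        fast.close();
    }
}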
As to "possibly with less aggressive optimization than modern C++ compilers employ", Sun's Java compiler, called HotSpot, is a very aggressive modern optimizing compiler. Java code routinely runs at speeds comparable to statically compiled languages - sometimes a few percent slower, sometimes a few percent faster (artificial benchmarks will inflate to NN% slower to NN% faster). JIT compilation really works; writing out machine code to a disk file before executing it does not make it faster!
Static compilers like gcj typically produce less performant code as they can't do things like inline most methods -- rather an important optimization in object oriented code with many small methods. Dynamic compilers can do that, giving HotSpot an edge over static Java compilers as well as statically compiled other languages. 88.112.56.9 (talk) 04:53, 31 January 2010 (UTC)[reply]
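As a rough illustration of the kind of code this matters for (the Point and SumPoints classes below are invented for the example, not taken from the thread): object-oriented Java is full of tiny accessors called in hot loops, and a JIT such as HotSpot can devirtualize and inline those calls once it has seen which classes are actually loaded at run time, while a purely ahead-of-time compiler has to be more conservative about them.

class Point {
    private final int x;
    Point(int x) { this.x = x; }
    int getX() { return x; }      // tiny virtual method, typical of OO code
}

class SumPoints {
    static long sum(Point[] points) {
        long total = 0;
        for (Point p : points) {
            total += p.getX();    // a JIT can devirtualize and inline this call away
        }
        return total;
    }

    public static void main(String[] args) {
        Point[] pts = { new Point(1), new Point(2), new Point(3) };
        System.out.println(sum(pts));   // prints 6
    }
}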
I didn't mean to imply that bytecode makes Java slower. Any given optimizing backend will produce pretty much the same machine code from bytecode as from the source code "directly". But when you're JITting, the compilation time counts as part of the run time, which makes some expensive optimizations less desirable. That's why I thought that JITters tended to dial down the optimization a bit. I know there's academic research on deciding (quickly!) which methods are likely to benefit from extra optimization, but I don't know what the commonly used VMs actually do.
JITters benefit from having access to the whole program and to runtime call traces, but so do modern C++ compilers. Since VS.NET (2002), Microsoft C++ has supported link-time code generation, in which the .obj files contain not machine code but a serialized intermediate representation—bytecode!—which is then optimized at the whole-program level. Microsoft C++ also supports profile-guided optimization. Intel C++ has supported whole-program optimization for even longer. I don't know the status of whole-program optimization in gcc. I think it has been delayed by licensing/political problems: defining an on-disk format for the intermediate code makes it possible for third parties to add passes to the compiler that don't fall under the GPL. I would expect that to hurt Java more than C++, because in C++ the functions most suitable for inlining are often placed in header files where they're available for cross-module inlining even when separately compiling. -- BenRG (talk) 09:12, 31 January 2010 (UTC)[reply]
(addendum) One factor that isn't stated often enough is that Java (and many other languages) do more error checking at runtime than C - (of course you really should know how big your arrays are anyway..) - this puts most languages at a disadvantage in the computer language benchmarks game with respect to C (OCaml allegedly also does well). 87.102.67.84 (talk) 09:43, 31 January 2010 (UTC)[reply]
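A minimal sketch of the runtime checking being described (the off-by-one is deliberate and the class name is invented): Java validates every array index and throws an exception, where the equivalent C loop would quietly read or write past the end of the array.

public class BoundsDemo {
    public static void main(String[] args) {
        int[] a = new int[10];
        for (int i = 0; i <= 10; i++) {   // deliberate off-by-one
            a[i] = i;   // Java throws ArrayIndexOutOfBoundsException at i == 10;
                        // the same loop in C would silently write past the array
        }
    }
}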
Thank you, all, for the interesting and thorough information!--el Aprel (facta-facienda) 17:22, 1 February 2010 (UTC)[reply]

Getting Office 2007 Student on my eeepc


Hey all. I recently bought (well, won actually) an Asus eeepc, which, as you will know, does not have a CD/DVD drive. I'm looking to buy a copy of Office 2007 Student edition to put onto it, but I'm worried that I'll just get a disk through the post. On the other hand, it came pre-installed with a trial edition - can I just buy a licence key somewhere online? What's the easiest way of doing this? - Jarry1250 [Humorous? Discuss.] 11:33, 30 January 2010 (UTC)[reply]

Upon further research (putting my problem into words helped), I think it may come down to cost. I live in the UK, and can get a legit copy presumably on disc for £36 (approx $57) thanks to an RM/Windows partnership thing. Download sources seem vastly more expensive e.g. $150. Maybe I should just go ask the £36 source about getting a download version... - Jarry1250 [Humorous? Discuss.] 11:42, 30 January 2010 (UTC)[reply]
Actually, I must be going mad. Surely I can just buy the CD/DVD version, look at the packaging, grab the licence key and chuck it into my trial? Yes, that sounds sensible. - Jarry1250 [Humorous? Discuss.] 11:44, 30 January 2010 (UTC)[reply]
You can only do that with the exact same version of Office as the trial (I believe it should be Home and Student? I'm typing on an eeePC right now.) If you get a Basic or Ultimate key, it will not plug in. Mxvxnyxvxn (talk) 23:48, 30 January 2010 (UTC)[reply]

If it comes on a CD, just make an image file on a computer with a CD drive, copy the image file to a USB drive, then on your Asus Eee PC mount it with MagicDisc or similar software. —Preceding unsigned comment added by Kv7sW9bIr8 (talkcontribs) 12:10, 30 January 2010 (UTC)[reply]

In Australia, I recall seeing a version that actually came on a thumbdrive! (My memory may need defragging! ;-) ) Otherwise an external USB DVD drive? The Student version DVD was available for A$99. --220.101.28.25 (talk) 12:23, 30 January 2010 (UTC)[reply]

MHTML vs. HTML


When I save a web page, I have the option to save as normal HTML with images etc. in a separate folder, or save as a single MHTML file. If I'm saving thousands of web pages, which is the better option for long-term storage, viewing, searching, etc.? —Preceding unsigned comment added by 82.43.89.14 (talk) 12:44, 30 January 2010 (UTC)[reply]

I think that an MHTML file is much more convenient. Then you get one file for the entire web page. I would definitely use this format if I were to save and archive a large number of web pages. But I suppose that the compatibility is slightly better for HTML + folder of images; I do not know if all major browsers can read .mhtml files. --Andreas Rejbrand (talk) 14:19, 30 January 2010 (UTC)[reply]
Not all can, but MHTML indicates that the big ones (Firefox, IE) can. I don't think it really matters either way. The upside of MHTML is that it results in fewer files to keep track of. But that's it. --Mr.98 (talk) 17:57, 31 January 2010 (UTC)[reply]

CSS question


I'm trying to whip myself up a simplified speed-dial type page. Using CSS, is there a simple way I can divide the window into four quadrants, with text centered inside each, both horizontally and vertically? I could try positioning text links, but I want the entire upper-right area to serve as a link, the lower-right area to serve as a link, etc -- the entire area, not just the text. I know I described it poorly, but yeah -- is there a way to do that using plain CSS? 202.10.95.36 (talk) 15:16, 30 January 2010 (UTC)[reply]

Here is kind of a crude way. The difficulty (and crudeness) comes from the browser trying to figure out whether you've hit the limit for needing a scrollbar or not. (You could put this off if you had this code launched by JavaScript in a window with no scrollbars.) This also kind of breaks official HTML DOM order—you aren't supposed to put DIVs inside of A HREFs, I don't think, even though it works (and is the only solution for certain types of things, unless you want to use JavaScript onclick handlers, which I think are worse).
<html>
<head>
<style type="text/css">
<!--
body {
	margin: 0;
	padding: 0;
}

.box {
	text-align: center;
	height: 49.9%;
	width: 49.9%;
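	/* 49.9% rather than 50% so that rounding never tips the browser into showing scrollbars */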
	vertical-align: middle;
}

#topleft {
	border: 1px solid blue;
	position: absolute;
}

#topright {
	border: 1px solid red;
	position: absolute;
	left: 50%;
}

#bottomleft {
	border: 1px solid green;
	position: absolute;
	top: 50%;
}

#bottomright {
	border: 1px solid yellow;
	position: absolute;
	left: 50%;
	top: 50%;
}

.innertext {
	position: absolute; 
	top: 50%;
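	/* crude vertical centering: puts the top of the text at the middle of the quadrant */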
	width: 100%;
	text-align: center;
}

a:hover .box {
	background-color: silver;
	color: white;
}

a:link {
	text-decoration: none;
}

-->
</style>
</head>
<body>

<div id="holder">
	<a href="#"><div class="box" id="topleft"><div class="innertext">topleft</div></div></a>
	<a href="#"><div class="box" id="topright"><div class="innertext">topright</div></div></a>
	<a href="#"><div class="box" id="bottomleft"><div class="innertext">bottomleft</div></div></a>
	<a href="#"><div class="box" id="bottomright"><div class="innertext">bottomright</div></div></a>
</div>

</body>
</html>

Hope that helps. --Mr.98 (talk) 16:09, 30 January 2010 (UTC)[reply]

The big problem right now is that CSS doesn't properly support vertical centering. If you omit the requirement for vertical centering, just make four divs. Make each one 50% height and 50% width in CSS. Set position to absolute. Set left and top for them using 0% and 50% as necessary. Set text-align to center to center the text horizontally inside each div. For vertical alignment, you have to place a div inside each div. Then, set the top and bottom margin to auto. Then, hope that the user isn't using an old version of IE. Then, hope that new versions of IE don't break all the code you just spent days working on. Then, give up on CSS and just use a table with height/width at 100% and 4 data cells. -- kainaw 21:08, 30 January 2010 (UTC)[reply]

http://reisio.com/temp/speed-dial.html
This will work in all relevant browsers, uses standardized code, and is semantically correct. ¦ Reisio (talk) 22:55, 30 January 2010 (UTC)[reply]

How does an Intel Atom compare to an old Pentium III?


I can't for the life of me seem to get even the performance out of this netbook that I had on my old Pentium III, so my question is:

  • how does an Intel Atom N280 @ 1.66 GHz compare with a Pentium III @ 999 MHz in terms of performance?

Thanks. 84.153.221.224 (talk) 18:40, 30 January 2010 (UTC)[reply]

What performance do you mean? For your reference, here are the direct specs for the N280 and the Pentium 3. On most measures, the Atom is at or above the specification of a Pentium 3. Perhaps the performance bottleneck is somewhere else (e.g. a slower hard disk, or comparing a wireless network to a wired network?). It seems unlikely that the netbook has less RAM than the original P3 computer, but this could also contribute to performance issues. Perhaps you had a better graphics card in your previous computer - netbooks tend to have flaky, bottom-of-the-barrel graphics chips because they are intended only for 2D graphics and "web surfing". Finally, check if SpeedStep is enabled - the Atom may be scaling back its performance intentionally in order to preserve battery life. You can disable that option in the power management control interface. In closing, I would just ask that you perform a side-by-side comparison to verify your claim - a lot of performance issues are subject to user bias. If you actually measure load or execution times with a stopwatch, do you actually see that the P3 is outperforming your Atom N280? Nimur (talk) 18:49, 30 January 2010 (UTC)[reply]
Thanks, I do mean ACTUAL processing performance. I heard fourth-hand that the Atom architecture performs about as well as a Celeron at half the clock speed. Now, is a Pentium III @ 999 MHz markedly better than a Celeron at 833 MHz (half of 1.66 GHz)? Was what I heard even correct? Thanks for anything like that you might know; I'd hate to actually have to go through the trouble of doing a benchmark myself... 84.153.221.224 (talk) 18:58, 30 January 2010 (UTC)[reply]
They probably meant an Atom at 1.66 GHz is comparable to a Celeron (the original?) at 3.32 GHz, which seems plausible to me. -- BenRG (talk) 08:01, 31 January 2010 (UTC)[reply]
There are some benchmarks here: http://www.cpubenchmark.net/low_end_cpus.html
  • Atom N280 @ 1.6 GHz scores 316
There are so many Pentium IIIs I don't know which to choose - there's a Pentium III mobile @ ~1 GHz scoring 193, up to about 228. Higher figures are better - it's not clear how they make the numbers, so I'm not sure if double = twice as fast. 87.102.67.84 (talk) 20:19, 30 January 2010 (UTC)[reply]
Oh - your Atom is dual-threaded - I assume the old Pentium was single-core - for some applications the threading doesn't work very well - i.e. CPU activity doesn't get above 50-70% - if this is the case then this may cause the Pentium III to appear better. Try a comparison in Task Manager and see if this is the case. 87.102.67.84 (talk) 20:26, 30 January 2010 (UTC)[reply]
A dual-core CPU may peg at 50% instead of 100%, but a single-core hyperthreaded CPU like the Atom N280 can only peg at 100%. If the CPU usage is less than that then the CPU is not the bottleneck. -- BenRG (talk) 08:01, 31 January 2010 (UTC)[reply]
Is the N270 the same as the N280? On an N270 I've seen various applications (e.g. ffmpeg) top out at 50% - opening a second instance increases this to 100%. As I understand it, the Atoms use hyperthreading to present 2 virtual cores to the machine? Also, the other likely factors preventing CPU usage from going over 50% include memory bandwidth, for anything with more than 1 (virtual) core? 87.102.67.84 (talk) 09:04, 31 January 2010 (UTC)[reply]
Okay, admittedly my claim was not based on experience. Now that I think about it, a hyperthreaded CPU might well peg at 50% depending on how the OS measures things. But it's not actually half-idle in that situation like a dual-core CPU running a single execution thread. -- BenRG (talk) 09:21, 31 January 2010 (UTC)[reply]
Yes, Task Manager isn't the best measuring stick. For a simple interpreted 'round robin' program, a single instance takes 60 s, two instances take ~85 s each, and beyond that (3 or 4 instances) the average times increase roughly linearly (slightly better than linearly) with number (~10% base load), i.e. as expected.
So the hyperthreading works OK - but it might be a reason why single-threaded programs (run one at a time) make the Atoms seem slow compared to an old Pentium III. 87.102.67.84 (talk) 10:55, 31 January 2010 (UTC)[reply]
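Reading those timings as throughput makes the point clearer (an illustrative back-of-the-envelope calculation using only the figures quoted above):

  1 instance:  1 job / 60 s  ≈ 0.017 jobs/s
  2 instances: 2 jobs / 85 s ≈ 0.024 jobs/s  (≈ 1.4× the single-instance rate)

So the second hardware thread buys roughly 40% more total throughput, but any single job still finishes more slowly than it would running alone - consistent with single-threaded programs feeling slow on an Atom.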
I remember similar complaints when the first Core 2 Duos came out (@ 1.8 to 2.6 GHz) - that they weren't as fast as a proper desktop Pentium @ 3+ GHz (despite being intrinsically faster in general) - nowadays standard Core 2 Duo parts run between 2.2 and 3.3 GHz, so even applications that don't take advantage of the extra core work just as well or better. 87.102.67.84 (talk) 11:03, 31 January 2010 (UTC)[reply]
This site is good in that it provides individual benchmark data (and a lot of it) - if you look closely you can see the effects of cache, processor speed, pipeline, and general architecture by comparing the individually listed benchmarks. This http://www.roylongbottom.org.uk/cpuspeed.htm is the best way in. You can get classic benchmarks to run as well from the same site if you wish. 87.102.67.84 (talk) 20:41, 30 January 2010 (UTC)[reply]


Some of you said you didn't know which Pentium to compare with: it was a non-mobile, normal Pentium III @ 933 MHz (I had misremembered the 999 MHz). Now, that PIII scores 228, compared with my netbook's Atom at 316. That should be about a 38% speed improvement. Far from it. This thing is way, way, way slower.

There are two HUGE differences I can think of to explain this: 1) this is running Windows 7 whereas the PIII was running Windows 2000; 2) this thing has Mobile Intel 945 Express graphics, versus a dedicated Nvidia card in the old PIII desktop. Now this 945 Express is such a piece of junk, it's depressing. Can anyone find benchmarks comparing this mobile graphics chip to old real graphics cards?

But this netbook is really awful. It can't even play an MP3 while I browse without sometimes getting huge stuttering (I mean about 1000 milliseconds of it - that's 1 second of awful screeching) and other incredibly ugly artifacts. It's a total piece of shit! 84.153.221.224 (talk) 13:59, 31 January 2010 (UTC)[reply]

The 945 is about as low as you can go nowadays - but people have still compared its performance with other things - http://www.google.co.uk/search?hl=en&rlz=1C1CHNG_en-GBGB363GB363&q=intel+945+graphics+benchmark&btnG=Search&meta=&aq=f&oq= - it'll run Neverwinter Nights (medium). However, you shouldn't be getting stuttering on MP3s no matter what. Might be something wrong (or maybe you have something like Norton Antivirus installed or some other performance hog). What does Windows Task Manager tell you? I'm not sure about Win7, but it might be a problem if you try to run the full Aero thing. Or maybe Flash on browser sites is killing it? A usual Atom + 945 can just about do 720p video and doesn't have a problem with MP3 at all. Here are two of many places where you can find 945 vs. other video card numbers [1] [2] - ~85 fps on Quake 3 (high). 87.102.67.84 (talk) 17:31, 31 January 2010 (UTC)[reply]
Increasing the priority of the MP3 program might fix it - all (single-core) CPUs will peak briefly at 100% utilisation when opening a web page - just right-click on the process in Task Manager and select a higher priority; I'm not sure how or if this makes it permanently higher priority. If that helps, there are ways to make the priority increase permanent. 87.102.67.84 (talk) 17:45, 31 January 2010 (UTC)[reply]
The same site as before has a big list of video cards http://www.videocardbenchmark.net/gpu_list.php 87.102.67.84 (talk) 18:28, 31 January 2010 (UTC)[reply]
Another possible explanation (for audio problems) is Win7 drivers - this page http://www.tomshardware.com/reviews/windows-7-xp,2339.html from a while ago suggests issues - I know that XP is fine on notebooks. New audio drivers? 87.102.67.84 (talk) 19:07, 31 January 2010 (UTC)[reply]

AnandTech compared the Atom N270 with the Pentium M (see [3]) and concluded that, depending on the task, the Atom is equivalent to a Pentium M at 800 to 1200 MHz. The Atom N280 is a more advanced variant of the N270, and the Pentium M is a further-developed version of the Pentium III. Thus, a Pentium III at 999 MHz should at its very best be equivalent to an Atom N280, and more often slower. The conclusion is that something is probably wrong with your Atom system. Is there any chance that it has a solid state disk? Bad performance of budget SSDs was an industry-wide problem until very recently, and stuttering is a possible symptom. 85.156.64.233 (talk) 21:42, 1 February 2010 (UTC)[reply]

Ubuntu Dual Boot


I have an HP Mini 1000 running Windows XP Home Edition. I have 3.63 GB of free disk space, and I'm not sure how much RAM I have. I want to dual-boot it with Ubuntu 9.10. This will be the first time I have ever tried something like this, and I need help. Thanks. MMS2013 19:51, 30 January 2010 (UTC)

No, you certainly don't have enough disk space. The system requirements list 4 GB for "bare minimum" operation and 8 GB for comfortable operation. The installer should allow you to resize your Windows partition if it has a lot of space that you aren't using. Just make sure you shut down Windows properly before fiddling with its partition, otherwise Ubuntu will refuse to touch it. Consider using Xubuntu (1.5 GB minimum, 6 GB recommended) or, if applicable, Ubuntu Netbook Remix. Xenon54 / talk / 20:05, 30 January 2010 (UTC)[reply]
And if you're not set on Ubuntu, there are a number of low-disk-space Linuxes, such as Damn Small Linux and Puppy Linux. See Mini Linux for more (or List of Linux distributions). However, depending on which one you choose, it may have less user-friendliness/community support than Ubuntu. -- 174.21.224.109 (talk) 20:24, 30 January 2010 (UTC)[reply]
It might be a bad idea to fill your free disk space with another OS - where would XP put new programs, documents, photos, etc.? Astronaut (talk) 13:57, 31 January 2010 (UTC)[reply]
It might be prudent to ask yourself whether you really really really need two OSes. I actually have Zenwalk and XP, but don't really use XP, because, y'know... ;) --Ouro (blah blah) 08:14, 1 February 2010 (UTC)[reply]
There is something you can do: get a 4 GB USB drive and install Ubuntu on that. The installation might replace the MBR/boot loader on your main HDD, so it might be prudent to pull the drive out first. I have used this exact setup on my carputer. I have also used my flash drive to help me fix and recover virus-infected Windows computers for friends and coworkers. – Elliott(Talk|Cont)  14:33, 2 February 2010 (UTC)[reply]

Is an ATX PSU suitable as a 24 V DC supply?

Resolved

I need a 24 V DC power supply for a stepper motor I'm testing. I do not have anything here that can push 24 V. If I used an old ATX PSU and used the +12 as positive and -12 as ground, would I be able to get 24 V out of this setup?

A quick googling yielded this site showing a 17 V supply created from +5 and -12. Before I blow anything up, I just wanted a second set of eyes to tell me go/no-go before I start splicing. Thanks aszymanik speak! 20:26, 30 January 2010 (UTC)[reply]

Usually you'd be OK - one obvious mistake I can think of would be to use +12 and -12 V lines rated at different currents (or watts) - if you do this, the lower-powered one is likely to overheat (or something) under load. There probably are other things that can go wrong as well, but I can't think of them. 87.102.67.84 (talk) 20:56, 30 January 2010 (UTC)[reply]
An ATX power supply is designed to be controlled by a computer motherboard. So, you will have to rig up some controls to make it work. I think it would be much easier and safer to simply purchase a 24vdc power supply. Depending on amp requirement, you can get them for under $10. -- kainaw 21:03, 30 January 2010 (UTC)[reply]
I agree with Kainaw. I'd definitely go for a cheap AC/DC power supply, or string up a couple of 6- or 12-volt lantern batteries. This will be generally safer and less likely to cause hardware damage. Your ATX power supply is designed to supply power to a motherboard, and your unusual, out-of-spec usage may result in undefined behavior, potentially dangerous. Really, what you want is one of these, a variable DC power supply (unfortunately, the cheapest I could find was $99). However, if you're seriously experimenting with electronics, you really can't live without one or two of those (or even better models). Nimur (talk) 01:44, 31 January 2010 (UTC)[reply]
(edit conflict) [4] page 21, table 9 - the -12 V rail is usually rated at about 0.1 A (whereas the +12 V rails can take/give over 1 A) - so you can't use more than a fraction of an amp without overloading the -12 V rail - i.e. don't take much more than 1-2 watts from a ~300 W power supply. [5] suggests a limit of about 0.5 A for the -12 V rail - your power supply probably has similar figures on the side - or you can look it up on the web. Summary: try to take 1 amp of current (24 W) and there will probably be smoke! 87.102.67.84 (talk) 21:06, 30 January 2010 (UTC)[reply]
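As a rough worked check of those figures (assuming the -12 V rail really is rated at about 0.1 A, as in the spec table cited above): the full load current passes through the -12 V rail, so

  maximum safe load power ≈ 24 V × 0.1 A = 2.4 W
  a 1 A load would draw     24 V × 1 A   = 24 W, i.e. ten times the rail's rated current

which is where the smoke would come from.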
Ok, thank you. That is some wise advice. I hadn't considered the likelihood of overloading the bus and definitely don't want any smoke ;). I've been debating getting a proper benchtop power supply. Maybe now is the time. aszymanik speak! 06:53, 31 January 2010 (UTC)[reply]
I would point out it's actually quite easy to start an ATX PSU without connecting it to a motherboard. People do it all the time when testing them, for example. You just have to short the power-on wire to any of the ground wires. The article has more details. However, ATX PSUs are generally designed with the expectation that they will have reasonable loading on the 3.3 V, 5 V and 12 V rails, and the regulation may be poor if you are only drawing a few watts. So even if you are only using the 3.3 V, 5 V or 12 V outputs, I would use caution when using them as a generic PSU. Nil Einne (talk) 15:52, 31 January 2010 (UTC)[reply]
While there certainly is no substitute for a good variable-output power supply, I can say from experience that a well-made, switching-regulated ATX power supply can be a *very* cost-competitive way to do some basic electronics work. The nameplate limitations are good to follow, but on major-brand units these values are rated based on full output and high heat. I can suggest Antec as a brand that routinely outperforms its nameplate (I have used them in this exact setting). Also, a well-made unit will voltage-limit itself before melting down, so it's relatively safe to experiment with as long as you are mindful of the voltage and cut off the load quickly when the supply is overloaded. --Jmeden2000 (talk) 15:44, 1 February 2010 (UTC)[reply]

Accidental forks?


Are there any known cases where a branch of a software project became a fork only because attempts to reintegrate it with the trunk failed? (If the trunk was then abandoned, that doesn't count, because then the branch has become the trunk.) NeonMerlin 22:13, 30 January 2010 (UTC)[reply]

Mmmh (interesting) - Microsoft's version of Java might count - though it wasn't exactly reintegrated (or supposed to be a branch) - more 'kicked out' - did that become C#, I wonder? Via Microsoft Java Virtual Machine, Visual J++ and J# - dunno - I give up, what's the answer? :) 87.102.67.84 (talk) 22:30, 30 January 2010 (UTC)[reply]
A rough approximation of the Unix lineage. Nimur (talk) 01:42, 31 January 2010 (UTC)[reply]
How about Unix? This has got to be the most well-known example of forks that became entirely new projects. You can count the Linuxes too, which are technically not a "fork" as the original kernel never shared a codebase with any Unix. At this point, it's best to think of a modern *nix operating system as a "mesh" which includes pieces of code from a lot of historical lineages. Aside from key utilities like the core kernel, the X server, and certain well-demarcated system utilities, it's very hard to say exactly where any particular component actually came from on any particular distribution. Often, the man page or the source code will document the lineage, and you can perform some technology archaeology of your own to trace back the versions. Nimur (talk) 01:42, 31 January 2010 (UTC)[reply]
I'm not sure that any of those forks were "accidental" in the sense that there was an intent to keep a unified codebase and then that attempt failed. APL (talk) 05:13, 2 February 2010 (UTC)[reply]