Wikipedia:Reference desk/Computing

Welcome to the computing section
of the Wikipedia reference desk.

How can I get my question answered?

  • Select the section of the desk that best fits the general topic of your question (see the navigation column to the right).
  • Post your question to only one section, providing a short header that gives the topic of your question.
  • Type '~~~~' (that is, four tilde characters) at the end – this signs and dates your contribution so we know who wrote what and when.
  • Don't post personal contact information – it will be removed. Any answers will be provided here.
  • Please be as specific as possible, and include all relevant context – the usefulness of answers may depend on the context.
  • Note:
    • We don't answer (and may remove) questions that require medical diagnosis or legal advice.
    • We don't answer requests for opinions, predictions or debate.
    • We don't do your homework for you, though we'll help you past the stuck point.
    • We don't conduct original research or provide a free source of ideas, but we'll help you find information you need.



How do I answer a question?

Main page: Wikipedia:Reference desk/Guidelines

  • The best answers address the question directly, and back up facts with wikilinks and links to sources. Do not edit others' comments and do not give any medical or legal advice.

June 6

date of software installation

How can I check on my computer the date when MS Office 2007 was installed? I bought my laptop on February 20, 2010 and then installed MS Office 2007 a few days later. The requirement to get free MS Office 2010 is to have installed MS Office 2007 on or after March 5, 2010. I am hoping that I installed the program on or after March 5, so I can avail myself of the free installation or upgrade of MS Office 2010 in October.

Right now, I use the beta version and I enjoy it a lot. I worry that I won't be able to use the 2010 version in November. I hope the product will reach the Philippines on time.

Would it be a good idea to buy MS Office 2007 again and reinstall it just to qualify for the free MS Office 2010? Thank you. —Preceding unsigned comment added by 112.202.194.203 (talk) 02:12, 6 June 2010 (UTC)[reply]

I'm pretty sure you are referring to this promotion. However, it says that you're eligible if you have "purchased, installed, and activated" the Office 2007 product, which seems to indicate you are out of luck. This page has more details about eligibility requirements. It sounds like they will eventually require your sales receipt, which will show an earlier purchase date, so it sounds to me like you may not be able to get the free version, in the end. Comet Tuttle (talk) 03:00, 6 June 2010 (UTC)[reply]
The most relevant part:
If you purchased your PC from an authorized reseller between March 5, 2010, and September 30, 2010, activated Office 2007 by September 30, 2010, and meet the other eligibility requirements you are eligible for the Tech Guarantee.
This suggests, as CT said, that unfortunately you're ineligible.
Nil Einne (talk) 06:30, 6 June 2010 (UTC)[reply]

How does the Windows XP Installation CD decide if repair is a viable option?

I have acquired another used laptop, this time one with a thoroughly wrecked Windows XP Professional installation, so bad that I get an error on bootup saying "c:\windows\system32\config\system is missing or corrupt" and suggesting I could try a repair with the installation CD. The file it mentions actually does exist on the disk, so I assume it is corrupted. However, when I boot off the installation CD, it detects the broken Windows but doesn't offer the repair option. Instead I get options to install in a different partition, format the hard disk, or overwrite the broken installation and in the process destroy the settings of all the other installed programs. I would rather not destroy the settings for the other programs until I have had the chance to evaluate what is actually installed and whether it is useful or not, especially since one of them is Microsoft Office Professional, for which I don't have installation disks.

I am thinking that perhaps I could copy some files from another PC with Windows XP, but so far have had no success copying the system executables and libraries (the .exe and .dll files from c:\windows and c:\windows\system32). What other files should I copy over to the broken Windows installation, so that the installation CD will offer the repair option? Astronaut (talk) 04:00, 6 June 2010 (UTC)[reply]

I would like to suggest a different approach. You seem to have borked your registry. Go to http://support.microsoft.com/?scid=kb%3Ben-us%3B307545&x=9&y=15 and start where it says "Part one". -- 109.193.27.65 (talk) 20:08, 7 June 2010 (UTC)[reply]
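For reference, "Part one" of that article amounts to replacing the corrupt registry hives with the pristine copies Windows saved at install time, from the Recovery Console. A hedged sketch of the commands involved, reconstructed from memory of KB 307545 (check them against the article before running anything; the same pattern repeats for the software, sam, security and default hives):

md c:\windows\tmp
copy c:\windows\system32\config\system c:\windows\tmp\system.bak
delete c:\windows\system32\config\system
copy c:\windows\repair\system c:\windows\system32\config\system

Parts two to four of the article then restore a more recent copy of the hives from a System Restore snapshot.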

Why always modulo by a prime?

In programming competitions, when the correct result is very large, often you will be asked to return the correct result modulo a prime number. Why do they always choose prime numbers?--220.253.100.43 (talk) 07:34, 6 June 2010 (UTC)[reply]

Because modular arithmetic is particularly clean and easy when done relative to a prime. See for instance Primitive root modulo n. You don't have to cope with all the factors of the number specially if you know it is a prime. Dmcq (talk) 10:47, 6 June 2010 (UTC)[reply]
You can imagine a problem that is difficult to solve, but which has some easier-to-find factors. (trivial example: it's easy to see that 15! is divisible by 1000, since 5*5*5*2*2*2=1000) For a prime p, everything but multiples of p is relatively prime to p. If you're asked for a number mod p, and p isn't one of the factors of the answer, the trick doesn't work. "Highly composite" (so to speak) numbers often come up in combinatorics problems. Paul (Stansifer) 02:17, 7 June 2010 (UTC)[reply]
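To make the "clean arithmetic" point concrete: because p is prime, every value from 1 to p-1 has a multiplicative inverse mod p (by Fermat's little theorem it is a^(p-2) mod p), so even answers that involve division, such as binomial coefficients, stay well-defined. A minimal Python 3 sketch using the contest-favourite prime 10^9+7:

# Division works modulo a prime: the inverse of a is pow(a, p-2, p)
# (Fermat's little theorem), so C(n,k) = n!/(k!(n-k)!) mod p can be
# computed directly.
P = 10**9 + 7  # a prime frequently used in programming contests

def binom_mod_p(n, k, p=P):
    num = den = 1
    for i in range(k):
        num = num * (n - i) % p
        den = den * (i + 1) % p
    return num * pow(den, p - 2, p) % p  # multiply by the modular inverse

print(binom_mod_p(100, 50))  # C(100,50) reduced mod P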

Waiting for Godot to hard boot

A long time ago, I was told you should wait 10 seconds after turning off your PC before switching it back on. Is there anything to this? Clarityfiend (talk) 08:22, 6 June 2010 (UTC)[reply]

Similar statements are made in the instruction booklets for various things... my Sky box and my router, for example. I've always assumed that it was to ensure people don't switch it off and then immediately switch back on, since some things won't actually switch off immediately and may behave like they haven't been reset at all, but I'd be interested to know if I was right. Vimescarrot (talk) 09:33, 6 June 2010 (UTC)[reply]
I was told that it had to do with capacitors. Capacitors in electronics store charge, and it can take them several seconds to self-discharge once the power is removed. The suggestion to leave electronics in the "off" state prior to restarting is to allow the capacitors to discharge fully, so that when you turn the power back on, all the electronics are in a consistent state. For computers, there is the additional issue of DRAM (standard memory), which stores the 1s & 0s in capacitors. You keep the computer off for a while so that there isn't any unknown garbage left in the memory when the computer restarts. -- 174.24.203.234 (talk) 17:21, 6 June 2010 (UTC)[reply]
My laptop power-supply continues to provide power for about six seconds after turning off so I allow more than ten seconds to make sure that the restart is "clean". Capacitors in older TV sets can retain a charge for hours. Dbfirs 07:16, 7 June 2010 (UTC)[reply]
I agree that it's something to do with capacitors - I recall blowing up a BBC Micro about 25 years ago by turning it off and straight back on again at the mains. Bobby P Chambers (talk) 13:22, 7 June 2010 (UTC)[reply]
Thanks everyone. Clarityfiend (talk) 04:54, 8 June 2010 (UTC)[reply]

I am an 11th student

I am in the commerce stream with IP. What can I learn after my 12th? What jobs could be available to me? —Preceding unsigned comment added by 117.206.42.237 (talk) 09:27, 6 June 2010 (UTC)[reply]

For reference WHOIS indicates that the OP is located in India. I'm afraid I can't answer the question though as I have no knowledge of the Indian education system or job market. Equisetum (talk | email | contributions) 11:01, 6 June 2010 (UTC)[reply]

Networking

I wanted to know: if I were to stay on a given web page for a long period without working on it, would it be chargeable? Rohit.bastian (talk) 11:49, 6 June 2010 (UTC)[reply]

The simple answer is 'no', but with lots of additional questions/comments.
By 'chargeable', I assume you mean that you are charged for the amount of data that you send and receive over the internet?
Once you load a web page from the internet into your local browser, it is in the memory of your own machine. If you unplugged your internet, it would still be there, right? And, in that case, you could look at the page for as long as you liked, without being connected.
There are, however, many complications, including:
  • Some pages update themselves automatically - this happens a lot on news pages, for example. Each time they update, more data will be sent/received over the internet.
  • All the time that you are connected to the internet, there are likely to be various 'control messages' flying back and forth; these depend a lot on your computer settings, but there are likely to be some pings, DNS lookup requests, and that sort of thing; you may have additional software to "check for updates", or to track instant messages - all of these will send and receive data.
Perhaps the best answer would be, to save a local copy of the page on your machine, and then disconnect from the internet. Most browsers allow this, with "File", "Save page as..." or something similar.  Chzz  ►  12:00, 6 June 2010 (UTC)[reply]

Actually, I have only a limited download allowance of 1.5 GB, so by charges I mean: will the download meter keep running? I don't want it to exceed this limit. And what if I want my girlfriend to start a chat on Gmail with me first, rather than me doing so? I want to stay connected! That's just an example; will it still cost? —Preceding unsigned comment added by 117.204.3.131 (talk) 12:18, 6 June 2010 (UTC)[reply]

Gmail is an expensive e-mail system on download size compared with other systems, but 1.5GB allows a lot of e-mails. I have only a 0.5GB allowance, so I don't use Gmail for regular conversations. One useful trick is to disable downloading of images. Dbfirs 12:52, 6 June 2010 (UTC)[reply]
You can also configure Gmail to work with offline news readers. That way, you would connect briefly to the internet, send/receive your email, and then disconnect. You can then spend as long as you like reading/replying, without being connected, and reconnect when you need to send, or wish to check for new messages.
All email used to work this way; Webmail has become more popular in recent years, but plenty of people still read their email offline.
I am not sure what operating system you are using, so I do not know what software you have. Most versions of Windows include 'Outlook Express', which is an offline mail reader - if you have that, you could Google for it; for help with other operating systems, try googling for 'gmail' plus the name of the email reader you have. For other options, see Comparison of e-mail clients.
One tip: be careful about downloading attachments; you should be able to disable those. The text of even a very long email will only be a few kB, but attachments (pictures, etc) can be very large.  Chzz  ►  15:00, 6 June 2010 (UTC)[reply]
I don't know if I'd say 'most' versions anymore. Ignoring Windows 95 as well as Windows NT branded OSes completely: Windows 98 is over 10 years old and support died a long time ago, Windows ME is nearly 10 years old and support also died a long time ago, and Windows 2k is over 10 years old and support is about to end. Windows XP and Windows Server 2003 are therefore the only ones it makes sense to still be using if you connect to the internet. And indeed Windows XP is probably the only Microsoft consumer OS with a fair number of users. Usage share of operating systems. Windows Vista, Windows 7 and Server 2008 all, of course, don't have Outlook Express. Of course, since Windows XP still dominates, most Windows users have Outlook Express.

In terms of the question, I'm not sure how interested the OP actually is in e-mail. They said "what if I want my girlfriend to start a chat on Gmail with me first, rather than me doing so", which suggests to me they're more interested in the chat function than e-mail. In such a case, they can download Gtalk or some other XMPP client, but obviously it won't work offline. There will be some bandwidth usage, but it shouldn't really make that big a dent in the data cap unless you include voice, video or a decent amount of file sharing (including things like photos, desktop sharing, etc). In other words, keep it to text and you should be fine. However it will use some data, so don't spend all your data on other things.
Nil Einne (talk) 19:57, 6 June 2010 (UTC)[reply]

Could you elaborate on the Outlook Express thing? I have Windows 7 Ultimate. I did get an account created there. Now how do I use it? —Preceding unsigned comment added by 117.204.3.182 (talk) 09:36, 7 June 2010 (UTC)[reply]

If you have Windows 7 Ultimate then you would have Windows Live Mail, not Outlook Express. This [1] result of a quick search for 'windows live mail gmail', along with plenty of other guides from the same or a similar search, should help you set up Windows Live Mail for Gmail e-mail if that's what you want to do. It won't help with the chat component of Gmail. Nil Einne (talk) 17:43, 9 June 2010 (UTC)[reply]
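For reference, these are Google's published server settings for using Gmail from a mail client (worth double-checking against Gmail's own help pages in case they change):

POP3 (incoming): pop.gmail.com, port 995, SSL
IMAP (incoming): imap.gmail.com, port 993, SSL
SMTP (outgoing): smtp.gmail.com, port 465 (SSL) or 587 (TLS)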

Replacing .htaccess files with <Directory> sections in apache2.conf

I'm moving a web site from a hosted server to one I run myself. On the hosted server, I used .htaccess files to restrict access to certain directories. After some fiddling I managed to get .htaccess working on the server that I run myself. However, the apache2 docs advise against using .htaccess files if you have complete control over the server, as I have in this case, and instead use <Directory> sections in apache2.conf. "Any configuration that you would consider putting in a .htaccess file, can just as effectively be made in a <Directory> section in your main server configuration file."

I'm unable to figure out how to follow the above advice.

My apache2.conf contains this section,

<Directory "/var/www/*">
     AllowOverride All
</Directory>

The directory that I want to protect contains this .htaccess file

AuthUserFile /some/directory/.htpasswd
AuthGroupFile /dev/null
AuthName "NO ENTRY!"
AuthType Basic

<Limit GET>
require valid-user
</Limit>

The .htpasswd file in /some/directory/ was created using the htpasswd program. According to the docs, there's a performance cost in using .htaccess files, because the Apache server needs to check a lot of directories for .htaccess files, hence the recommendation of using <Directory> sections instead. I'm running Apache/2.2.9 (Debian 5.0.4) PHP/5.2.6-1+lenny8 with Suhosin-Patch mod_python/3.3.1 Python/2.5.2 mod_perl/2.0.4 Perl/v5.10.0, in case the answer is version- or OS-dependent.

I'd be grateful if someone could show me how to protect a directory, say /var/www/mydir, such that only user nblue, whose password is 123, can access it — using <Directory> sections instead of .htaccess files. Thank you. --NorwegianBlue talk 12:19, 6 June 2010 (UTC)[reply]

I'm pretty sure it's actually ridiculously simple: just create another Directory section for /var/www/mydir, and put all the .htaccess stuff in it. You can put it right below the other Directory section, I think. Like so:
<Directory "/var/www/mydir">
     AuthUserFile /some/directory/.htpasswd
     AuthGroupFile /dev/null
     AuthName "NO ENTRY!"
     AuthType Basic
     
     <Limit GET>
     require valid-user
     </Limit>
</Directory>
Does that work? Indeterminate (talk) 04:21, 9 June 2010 (UTC)[reply]
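A side note on the performance point from the docs: the gain only materializes once Apache stops scanning for .htaccess files altogether, which (assuming no directory still relies on overrides) you get by changing the existing section to:

<Directory "/var/www/*">
     AllowOverride None
</Directory>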
Thanks, Indeterminate. I read your answer from my work PC earlier today, and was about to write that that was the very first thing I tried. But I decided to try it a second time, just to make sure, before responding. And whaddyaknow... it worked. I must have made some silly mistake, and spent too little time pursuing the obvious solution, before moving into more exotic solution-attempts (trying to include the contents of .htpasswd in apache2.conf). Thanks a million, for restoring my faith in docs, the universe and everything! --NorwegianBlue talk 21:21, 9 June 2010 (UTC)[reply]
Resolved

Recommendation of free blogging application needed

I'm looking for a free locally-hosted blogging application for small group collaboration. It doesn't need to have advanced features (being easy to install/use/administer is more important). Any suggestions? --173.49.77.55 (talk) 12:46, 6 June 2010 (UTC)[reply]

I'm afraid you've given too little information about the hardware that is going to host the application, and about your requirements, to get a good answer. The MediaWiki software, which Wikipedia is built upon, would be a good choice IMO, if the hardware is a Linux machine (I've got no experience with MediaWiki on windows). --NorwegianBlue talk 20:05, 7 June 2010 (UTC)[reply]
Applications that run on Linux would be good, and I assume MediaWiki is one of them. What would be a good way of using it for blogging? --173.49.77.55 (talk) 02:53, 8 June 2010 (UTC)[reply]
You would need to learn how to use the MediaWiki markup. The best way of doing that is by browsing around Wikipedia and seeing how things are done. A useful feature is transclusion. You would then write your posts as separate articles, and transclude them on the main page, like so: {{:Welcome to my blog}}. Note the colon after the double braces. Without the colon, the MediaWiki software will assume "Welcome to my blog" is in the template namespace, not the main namespace.
A slightly more technical addendum: Wikipedia uses a lot of templates. If you want Wikipedia templates to work, you'll need to know that one template usually depends on other, more primitive templates, and on CSS and JavaScript. If you want a specific template to work, you'll need to copy the template to your site, along with the css/javascript (MediaWiki:Common.css, MediaWiki:Common.js, MediaWiki:Monobook.css or MediaWiki:Vector.css), as well as the more primitive templates. --NorwegianBlue talk 12:14, 8 June 2010 (UTC)[reply]
If you're really looking for "blogging" and "easy" and "free", rather than "collaborative editing" and "hard to set up" and "free", I would recommend something like WordPress over Mediawiki. Mediawiki is powerful stuff, but it's a wiki. Using it as a blog is kind of overkill. WordPress is a blog. It is super easy to set up, and can run on anything that has PHP and MySQL, and very straightforward to blog with and manage (and modify, if you really want to). It's much less of a pain than Mediawiki. --Mr.98 (talk) 13:16, 8 June 2010 (UTC)[reply]
I have no experience with WordPress, but I've experimented a little with Joomla, and have used the MediaWiki software quite a lot the last couple of years, for various projects. You'll need the same basic skills for all three (being able to set up Apache, a MySQL database, and PHP). I've found that it's easy to mess things up irrecoverably in Joomla, while the MediaWiki software is pretty robust. I'd prefer MediaWiki over Joomla for just about any task, but as stated, I haven't tried Wordpress, and the comparison is not really fair, because Joomla too is more than just a blog engine. To me, the MediaWiki software is kinda like emacs, it can be used for anything from toasting bread to writing an encyclopedia. --NorwegianBlue talk 21:56, 9 June 2010 (UTC)[reply]
For a blog specifically, how does one generate the requisite RSS/ATOM syndication feeds using MediaWiki? The standard feeds that MediaWiki generates are change syndication rather than content, and it seems the other syndicated wikipedia content is generated by non-MediaWiki software. Is there a plugin for this? -- Finlay McWalterTalk 22:14, 9 June 2010 (UTC)[reply]
Not sure, but this appears to confirm that what is transmitted is change (modification) of content, not total replacement of content. As for the pros and cons of using a wiki for blogging, see [2]. --NorwegianBlue talk 23:02, 9 June 2010 (UTC)[reply]

Opening files in reverse alpha order

I have a list of files to go through and I'd like to have them open in reverse alpha/numeric order. Right now for example, if I highlight five files then 1 opens, 2, and so on. But I want to view 1 first, then 2 and so on, so I need 1 to open last and therefore be in front of the others. Is there a key sequence or setting that will allow this? I'm on a Mac running 10.5. Thanks, Dismas|(talk) 14:29, 6 June 2010 (UTC)[reply]

The easiest way to do it is to change the sort order in the Finder window. For instance, if the files are currently sorted ascending by name, clicking the 'name' column header will change the Finder window so that the files are sorted descending by name. Files will be sent to the application in top-to-bottom order as shown in the Finder window, so this will reverse the order of opening. Of course, it depends on what application you're using: Preview, for instance, will always open files in alphabetical order. If you have stubborn files, you can always create a quick AppleScript to open them in any order you like, but I think changing the Finder window ordering should do it for you. --Ludwigs2 17:40, 6 June 2010 (UTC)[reply]
Hrm... I normally use column view and to list them in reverse order requires list view. I guess I can switch when I need to and then switch back... Not very elegant but it works. Thanks, Dismas|(talk) 20:49, 6 June 2010 (UTC)[reply]
If you want a script that does it for you, let me know. it's easy to make. --Ludwigs2 18:28, 7 June 2010 (UTC)[reply]
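For anyone who wants the script route without learning AppleScript, here is a minimal sketch in Python (which ships with 10.5), assuming the files are passed as arguments and handed to their default application via the command-line open utility; the half-second pause is just a guess to let the windows stack predictably:

import subprocess, sys, time

# Open files in reverse alpha/numeric order so the first one ends up frontmost.
for f in sorted(sys.argv[1:], reverse=True):
    subprocess.call(["open", f])
    time.sleep(0.5)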

Network

Resolved

I have two standard computers, both modern-ish (WinXP and Win7). I want to connect them so they can share files between them and both use the internet connection. Is this possible with just an ethernet cable? Or does one need a more complicated setup like hubs and routers and special software? 82.43.89.11 (talk) 16:17, 6 June 2010 (UTC)[reply]

Yes: you don't need a hub or a router, and you don't need a null cable either. You need to enable "internet connection sharing" on the PC that has the internet connection (assuming it's a USB connection) and have the other get its IP address (etc.) via DHCP. If, however, your internet connection comes via an ethernet cable (say from a cable or adsl modem) then you would need a hub or switch (assuming the router doesn't have one already). -- Finlay McWalterTalk 16:25, 6 June 2010 (UTC)[reply]
Thank you. I'm slightly confused though. The internet does come via an ethernet cable to the first computer. However the first computer has two ethernet ports. Can I still connect the second computer to the first computers 2nd ethernet port, and share the internet? Or will two ethernet ports in use at the same time cause problems? 82.43.89.11 (talk) 16:29, 6 June 2010 (UTC)[reply]
(ec) The standard way to do this is to buy a router. You use one Ethernet cable to connect the router to your cable modem or DSL modem, and then you connect each computer to the router with one Ethernet cable (or via a wireless connection, if it's a wireless router). The router uses a technology called DHCP to give each of your two computers its own IP address, and so they both share the Internet connection. If you don't want to buy a router, you can use one Ethernet cable to connect the DSL modem or cable modem to one PC (I'll call it "A") that has 2 Ethernet ports, and then use a second Ethernet cable to connect "A" to the other PC; and you'll set up "Internet Connection Sharing" on "A". Here is a Microsoft article about Internet Connection Sharing on Windows XP, and here is an article about setting it up on Vista (and Windows 7 should be the same). Comet Tuttle (talk) 16:31, 6 June 2010 (UTC)[reply]
Could I do the same with an XP computer, and one or two Linux computers? Is that possible please? Thanks. 92.24.182.231 (talk) 09:05, 7 June 2010 (UTC)[reply]
Yes. If the XP computer is the one with the 2 Ethernet ports, you would set up Internet Connection Sharing as noted above, and then plug your Linux computer into the second Ethernet port of the XP computer. The XP computer will tell the Linux computer its IP address. This page has some details, or google "share internet connection linux". Comet Tuttle (talk) 18:15, 7 June 2010 (UTC)[reply]
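If the Linux computer doesn't pick up an address automatically, you can usually request one by hand; a hedged example, assuming a typical distro where the wired interface is eth0:

sudo dhclient eth0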

Thanks 82.43.89.11 (talk) 17:25, 6 June 2010 (UTC)[reply]

html -> mht

Resolved

I need a program that can convert thousands of html files to mhtml, including all images and css and stuff. I've searched everywhere - sourceforge, google code, website after website, etc. - and there's absolutely nothing. There's loads of programs for converting mht to html, but that's the exact opposite of what I want. Does anyone know any program on any OS (yes, if I have to I'll install a different OS for this) that can do what I want? Hell, if someone here could write a program which does this, I'd pay you for it. The prospect of continuing to manually open batches of 100 tabs in firefox and save them with unmht (which takes HOURS) is making me want to kill myself. 82.43.89.11 (talk) 17:30, 6 June 2010 (UTC)[reply]

And there's nothing in these search results - such as the very first one - that's of any use to you? --Tagishsimon (talk) 17:49, 6 June 2010 (UTC)[reply]
Tried batchwork, doesn't work. I even emailed them over it and they didn't reply. Perhaps I should be more specific in my wording; when I said "there's absolutely nothing" I mean "there's absolutely nothing that works". 82.43.89.11 (talk) 18:22, 6 June 2010 (UTC)[reply]
If you're willing to pay, it's a pretty trivial job to get someone on ELance (or whatever) with Win32 skills to do it (C++, C#, VB), as Microsoft's Collaboration Data Objects IMessage object has an API to do it. For my own amusement, I figured it out in Python - use this as you will:

To run this, you'd need python2 (2.6 whatever) and pywin32. It takes two local files (cat1.jpg and cat2.jpg), authors an index.html to refer to them (this is needed as CreateMHTMLBody works by saving a page and its associated content, so you need a page to act as an index to that content). Then it creates fin.mht, an MHTML archive containing that index and both files.

# Call IMessage CreateMHTMLBody to author an MHTML file with local content.
# http://msdn.microsoft.com/en-us/library/aa487621%28v=EXCHG.65%29.aspx

# Python code mostly from:
# http://timgolden.me.uk/python/win32_how_do_i/create-an-mhtml-archive.html
import os, sys
from win32com.client.gencache import EnsureDispatch as Dispatch

# sample index page - I'd imagine in practice you'd generate this programmatically
index="""Here are some cats:
<img src="cat1.jpg"/><br/>
<img src="cat2.jpg"/>
"""

f = open("index.html","w")
f.write(index)
f.close()

URL = r"file:index.html"
FILEPATH = r"fin.mht"

message = Dispatch ("CDO.Message")
message.CreateMHTMLBody (URL)
stream = Dispatch (message.GetStream ())
stream.SaveToFile (FILEPATH, 2)
stream.Close ()
-- Finlay McWalterTalk 20:04, 6 June 2010 (UTC)[reply]
I really, really appreciate the effort and thank you so much :) but I don't understand anything about python, so I don't know what to do with this code. How do I use it to convert 2000 html file urls to mht? 82.43.89.11 (talk) 20:10, 6 June 2010 (UTC)[reply]
Not by itself; it's proof of concept that it's a fairly trivial task. The ELance freelancer I'm suggesting you hire could use it as a basis of a program that would. -- Finlay McWalterTalk 20:20, 6 June 2010 (UTC)[reply]
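To sketch what such a program might look like, here is a hedged, untested outline built on the same CDO call, assuming Python 2 with pywin32 on Windows and a folder of self-contained .html pages (the C:\pages path is hypothetical):

# Batch wrapper around the CreateMHTMLBody idea above: convert every
# .html file in a folder into a .mht file alongside it.
import os, glob
from win32com.client.gencache import EnsureDispatch as Dispatch

for path in glob.glob(r"C:\pages\*.html"):
    message = Dispatch("CDO.Message")
    message.CreateMHTMLBody("file:///" + path.replace("\\", "/"))
    stream = Dispatch(message.GetStream())
    stream.SaveToFile(os.path.splitext(path)[0] + ".mht", 2)  # 2 = overwrite
    stream.Close()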
The Mozilla MAF extension that is used to save web pages in the .maff archive format seems to have a pretty capable conversion wizard that can even convert a pile of html pages into mht files (but maybe the .maff format would be a better choice than .mht); see http://maf.mozdev.org/documentation.html#convertingmorepages —Preceding unsigned comment added by 84.157.72.156 (talk) 20:18, 7 June 2010 (UTC)[reply]
Thank you very much 84.157.72.156 for this extremely helpful answer :) I posted a more exuberant thank you earlier that was removed by the fun police, which you can still view here if you want. Again, thanks for this wonderful suggestion, it works perfectly :D 82.43.89.11 (talk) 21:58, 8 June 2010 (UTC)[reply]

What's the performance load of inner anonymous classes?

Hello! When I was learning Java with Sun's/Oracle's Java Tutorials, they recommended using a java.beans.EventHandler as much as possible in place of anonymous inner classes that implement ActionListener, PropertyChangeListener, etc. They explained the reason for this is the java ClassLoader has to load each inner class separately, whereas EventHandler coalesces *Listeners. My question is how much of a performance load do these anonymous inner classes impose? Is it worth junking up my code with hard-to-read EventHandler methods? Does it help if my inner classes are very short? Does it only really matter if I'm using dozens of inner classes? I'm aware of the technique to use a single ActionListener which delegates what code to run depending on the source of the passed Event, but nothing seems as clean to me as anonymous inner classes. Thank you!--el Aprel (facta-facienda) 18:03, 6 June 2010 (UTC)[reply]

For sane human-written code, I doubt it's worth worrying about. Anonymous inner classes are just inner classes (from the jvm's perspective), and inner classes are mostly just classes. Each class loaded does take some time and some memory (regardless of how small they are) and if you have lots that's starting to be a burden. It's probably not ideal from a memory-locality perspective either, as each little handler is off in its own little line. But, at least on a full (SE,EE) grade JVM, for code you've written yourself, I'd doubt even dozens of such hand-written mini-handlers will make an appreciable difference. When talking about this in Swing, they take a fairly balanced view; if it were me I'd group handlers logically (personally I don't find the little AIC handler method terribly clean, but your mileage may differ) and worry if this is causing a genuine performance problem only if it is. I think really Soracle are writing to another audience - the (often quite mad) people who write tools that automatically generate Java code. Left to their own devices, they'd happily create 50,000 objects of 50,000 different classes and not worry about the heinous abuse they'd do to memory, and particularly the cache, in the process. -- Finlay McWalterTalk 19:15, 6 June 2010 (UTC)[reply]
Thank you, Finlay McWalter! Your response was very informative and helpful.--el Aprel (facta-facienda) 04:28, 7 June 2010 (UTC)[reply]
I should stress that the above is really not true for a small-factor JVM (like CLDC) where memory is much more constrained and CPU cache is pitiful, where you really would expect to see a demonstrable harm from generating a few dozen extra classes than strictly necessary. It seems to be moot on Java Card, however - as far as I can tell, it handles what little "events" it has in a different way altogether. -- Finlay McWalterTalk 15:40, 7 June 2010 (UTC)[reply]

Installable content management system for CentOS

Hello - long time listener, first time caller, so to speak!
I have a small web design company, let's say "www.mywebdesignco.com", that creates websites for clients. I have a virtual private server running CentOS 5.5, on which I host my clients' sites.
When clients want changes, they give me a call, I make the change and upload the changed pages to their site - it doesn't take too long, but I've been looking into pushing the burden back on them, as it were, by installing some sort of content management system.
There are some online options, such as PageLime and Cushy, which would allow me to do this for free, but unless I pay a subscription the facilities are (a) limited in number, and (b) branded with the CMS system, rather than my company's branding.
Given that I have a large server sitting there with loads of space, I've been trying to look into installable solutions so that I could create a sub-domain, say cms.mywebdesignco.com, at which I could install a CMS - I could then give clients their own login, they could log on, it would all be branded with my company's branding, they could make the changes to the relevant bits of their websites, and I don't need to worry about them.
The thing is, I'm great at designing and programming websites, but for some reason when it comes to Linux installations, et al, I go a bit blank - I run Virtualmin, so can handle the administration of my clients' domains, email and websites without too much difficulty, but the easier the better is my watchword when it comes to installing new applications onto my VPS.
So, I guess my question is whether there's such a solution, ideally free or open source, out there?
Thank you for taking the time to read through my question!
—Preceding unsigned comment added by Bobby P Chambers (talkcontribs) 20:16, 6 June 2010 (UTC) Sorry Bobby P Chambers (talk) 20:19, 6 June 2010 (UTC)[reply]

A lot of people use things like Drupal or Django-cms for this kind of thing (we have a lengthy, if perhaps unhelpful, list at List of content management systems). Half-decent ones let you assign roles and permissions (so the customer can change "news" stories and add articles, but can't mangle their page layouts or delete their databases). Unfortunately with great power comes a great big manual, so setting these up can be quite an undertaking. -- Finlay McWalterTalk 20:27, 6 June 2010 (UTC)[reply]


June 7

Advanced rotation in 3DS Max

[Image: Precession on a gyroscope]

Hey, I've been looking for an answer to this, but it's been tricky. If you were to do a precession motion in 3DS, what would be the easiest way to do it? Similar to the image at the right, which I did in POV-Ray.

In POV-Ray, I can run multiple transform commands on the same object. What I did there was simply rotate N degrees on the Z axis and then, in a different transform, rotate the object 360° (a bit for each frame) only on the Y axis. This gave me the movement you see there, which looks the way I want.

I can't seem to do that in 3DS, which is a problem I've been dealing with in other programs as of late (AfterEffects, Maya...). I'm stuck with a single transform (a single matrix, it seems), so I have X, Y and Z rotations being done at once, and that just doesn't work. I could manually input things through Euler angles, but that's such a pain, since I'd need to compute them back manually. There must be something trivial that can be done in these cases, something like creating a new reference frame or sub-group. I just haven't found it yet. Any ideas?

Convert Raw to PNG

Hey, I've got RAW photo files from my Canon PowerShot SX 110 IS (taken using CHDK; the camera doesn't support it natively). The file extension is CRW. What's the best way to convert them to PNG files? I understand that Photoshop is good for this, but I don't have it and I don't want to pay for it. The Raw image format article gives a bunch of options, but I have no clue what's good. Does anyone know if Raw Therapee is any good? It looks like it ought to be able to save my raw files as PNGs, TIFFs, or JPEGs, which would be good. Or any other, better suggestions? I'm really unfamiliar with this type of thing. Buddy431 (talk) 01:18, 7 June 2010 (UTC)[reply]

http://www.irfanview.com/ is the easiest, cheapest, most fool-proof option. Won't have some of the fancier RAW developing options you'll find in Photoshop, but if you don't need those you're golden. Just open the file and save as a jpg or png. You can even do batch conversion. Riffraffselbow (talk) 05:30, 7 June 2010 (UTC)[reply]
IrfanView can read the files but it has almost no developing options at all, as far as I can tell. There's little point in shooting raw unless you want to tweak the conversion process. I downloaded Raw Therapee (version 2) and it looks pretty good. Try it out and see if you like it. -- BenRG (talk) 05:50, 7 June 2010 (UTC)[reply]
You're getting far less out of what you're doing than if you choose to tweak the options, but I wouldn't say little point, since AFAIK many cameras support RAW, which is often uncompressed, and JPEG, which is lossily compressed. If you save as RAW you can then convert it to a lossless compression format, which appears to be what the OP is trying to do, and hopefully with the same output, without intervention, that you would expect from the camera saving JPEG, except you get lossless compression. And while I don't know if there's ever really such a case, it would seem theoretically possible that, given the time and hardware constraints, you could generally get better automatic results from a fancy computer program than from the camera hardware processing (and particularly if you have a fancy GPU, the hardware capability is vastly superior even if it isn't dedicated, regardless of whether it can actually be used for better results). Edit: Also, if you aren't deleting the RAW images, you can later choose to specially process any specific images that you aren't pleased with or that you decide are very important, something you can't do if your camera is saving the JPGs. Nil Einne (talk) 06:52, 7 June 2010 (UTC)[reply]
Nil Einne's spot on: My camera normally only saves to JPEG files (lossy), and I want a PNG or other lossless file format. Having my camera save as a RAW file and then converting on my computer to a PNG (or other lossless format) is the only way I could see to do it. Buddy431 (talk) 13:41, 7 June 2010 (UTC)[reply]
dcraw (along with something like pnmtopng) may be a good choice, as it's well-suited to non-interactive use. I do question the overall goal here... why do you need PNGs? If your goal is to archive a lossless file, you already have that in the raw file. The conversion from raw to PNG loses data (not in the same way as JPEG compression, but you will lose dynamic range at least). If your goal is to edit the photo, then at least some of those manipulations (exposure, color balance, etc.) should probably be done during raw processing, before outputting to PNG or similar format. -- Coneslayer (talk) 13:52, 7 June 2010 (UTC)[reply]
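To illustrate the dcraw route, a one-line pipeline, assuming dcraw and netpbm are installed (dcraw's -c switch writes the decoded image to standard output as a PNM stream, which pnmtopng then converts):

dcraw -c photo.crw | pnmtopng > photo.png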
You're quite right of course, but it wouldn't be very nice of me to distribute my pictures to others in a raw format, and they still might appreciate a lossless format (especially at, say, Commons). Buddy431 (talk) 14:42, 7 June 2010 (UTC)[reply]
Fair enough, I don't usually think of full-size PNGs as being "distribution-friendly" (e.g. to friends and family) but for something like the Commons it makes sense. -- Coneslayer (talk) 14:43, 7 June 2010 (UTC)[reply]
I would challenge you to use the SX 110 to save a raw and a high-quality JPEG of the same shot, convert them both to PNG (or a format of your choice), and then tell the difference between the two. I am honestly curious, but to be honest I really doubt you will be able to, other than the likelihood that the raw version won't be on the same white balance (if it's balanced at all) as the JPEG, thanks to CHDK and the conversion process. RAW formats work wonders on DSLR cameras, but that's because the sensors are dramatically different. Even point-and-shoot cameras that are factory-equipped to shoot raw really don't score any better in tests when in raw mode. Just thought I would share this, and see if anyone has information on quantitative quality studies. I would be interested to see it! --Jmeden2000 (talk) 17:03, 8 June 2010 (UTC)[reply]
I have an SX100 running CHDK (almost the same setup as the OP) and the CRW as shown by IrfanView looks quite different from the JPEG produced by the camera's internal processing (and rather worse, in my opinion). IrfanView has the option to use Canon DLLs instead of its internal processing (which I presume is based on dcraw), but I haven't managed to get that to work. I wouldn't count on the Canon DLLs working identically to the camera since these processing algorithms are closely guarded secrets and DLLs can be disassembled. If the DLLs did work identically except without the JPEG compression stage, I suspect the result would be indistinguishable from the camera's super-fine JPEG except for file size. The odd texture of the image at high magnifications comes from the denoising algorithm, not from JPEG. The whole idea of "lossless" images of the natural world is nonsensical in the first place. My advice is to stick with JPEG. -- BenRG (talk) 20:04, 8 June 2010 (UTC)[reply]
That's been my experience, too. Even the newer Canon point-and-shoot cameras with genuine raw capture and processing via the DPP suite (Canon's raw postprocessor) tend to look grainier if anything, and basically no additional detail is derived from the RAW information. JPEG is the world's photograph-sharing standard for a reason; use a good tool and a high setting and you won't tell the difference except under extremely close inspection, and the file size savings and overall compatibility are worth that tiny bit of loss, IMO (as a photographer). RAW's only benefits come out when you have a DSLR and want to do specific postprocessing (and have a good RAW processor to do it in). --144.191.148.3 (talk) 14:30, 9 June 2010 (UTC)[reply]
I don't know if I would say that, in terms of simply opening files, JPEG is any more compatible than PNG in the modern world; some very old browsers are about all. However, it does appear there's no reliable/well supported way to add EXIF to PNGs. The point about the different output is a good one, though. I did read a few people suggesting that many (most?) automatic raw processing algorithms aren't as good as the camera's internal ones, which was why I said 'and hopefully' above. As a personal opinion, not really having a digital camera: if size didn't matter, which is a big if (although with increasing HD sizes, and memory card sizes apparently starting to reach the point where the average consumer doesn't need anything larger, at least according to some comments I read about SDXC, it's far less of an issue than say 3 years ago), and I could get the same output, which appears to also be a big if, I would take the lossless over the lossy. Although it's true you'd rarely notice the difference, there may be some specific cases when you will (obviously we're only talking about high magnifications here), and ultimately you can produce lossy saves of your lossless images if you want. In terms of the Commons, one advantage of uploading a PNG instead of a JPEG is that people usually keep the same file, and therefore the same file format, when making minor touchups. If multiple people make multiple minor touchups to a JPEG, you may get some generation loss that is noticeable at higher magnifications. Of course you could just save your JPEGs as PNG before you upload, although if you're going to upload as PNG anyway, you might as well go lossless in the first instance if the earlier conditions are met (which, as I said, they may not be). Alternatively, try to encourage people at the Commons to avoid that sort of thing. P.S. Just to emphasise, I do agree that even if you could get the same output, it's likely to be rare, and definitely only under high magnification, that you'll notice the difference between high-quality JPEGs and lossless images, so the actual advantage is going to be small. Nil Einne (talk) 17:35, 9 June 2010 (UTC)[reply]
Not to drag this out much further, but two things to add: 1) You can work losslessly on JPEG if you use an editing program that allows such things; 90-degree operations and localized changes will take place without an overall degradation of the image. It is up to the user to figure this out, though, since it's not readily apparent whether your tool and workflow will end up being lossless. From a purist perspective it's still not ideal, but from a practical one (moving files up and down and around the internet) there is a huge advantage to JPEG. And one correction to your comment from before (in case you hadn't figured it out): RAW is actually losslessly compressed, but the file size is still about 3-4x what a high detail JPEG would be. --144.191.148.3 (talk) 18:22, 10 June 2010 (UTC)[reply]

Linux clock problem

Resolved

Hi! I'm running Debian Linux (lenny), and have been having some problems with the hardware and software clocks. I believe Linux threw off my BIOS's clock, and when I fixed it and continued to boot to Linux, my software's clock was wrong, AND the hardware's, too. That's to say

# hwclock
# date

return different times that are both wrong. I tried hwclock --localtime and setting the clocks to the right time, but every time I reboot they're off again. It's not the CMOS battery, either, because the BIOS maintains the correct time when I don't boot to Linux. What am I missing? Thank you!--el Aprel (facta-facienda) 04:36, 7 June 2010 (UTC)[reply]

How wrong is wrong (in other words, is it simply a few minutes or nearly exactly several hours or something else)? Also are you sure your timezone is set correctly? Is the Linux set to store/read the time in the bios as UTC (as it probably would by default, unlike say Windows) or local time? When you say 'when I don't boot to Linux' do you mean if you go into the bios before bootup the time is set correctly (this is the best way to tell what the bios is doing as otherwise you need to be sure whatever other OS isn't correcting the time)? Nil Einne (talk) 06:45, 7 June 2010 (UTC)[reply]
Okay, I was looking at it again and I've found some consistency. The BIOS's clock remains on the correct local time no matter how many times I boot to Linux as long as I don't try to change it with hwclock from there. After correctly setting the BIOS's clock to the local time, #hwclock and #date both return the same time that is exactly 4 hours behind the BIOS's (so if the BIOS says it's 15:00, Linux #hwclock and #date say it's 11:00). The minutes and seconds are consistent all around. hwclock is set to --localtime, which is why #hwclock and #date have identical times. Setting the time with #hwclock screws up the BIOS's. It shouldn't make any difference, but /etc/timezone is correctly set to US/Eastern for me. Any suggestions? Thank you!--el Aprel (facta-facienda) 19:45, 7 June 2010 (UTC)[reply]
As Nil Einne suggested, check your time zone settings, especially if time is off exactly by multiples of 30 minutes (yes, there are some silly time zones that are 30, rather than 60 minutes apart). Also, if you have internet connectivity, you could install ntp and/or ntpdate to regularly sync your clock with an official time server for your country. Two of hwclock's parameters might be of interest to you:
hwclock --hctosys 
and
hwclock --systohc
Oh, and if you're running Linux in a virtual machine, that might have an influence on clock speed as well. I remember reading quite a bit about it on VMware's web site, and could imagine other virtualization providers like VirtualBox or qemu have similar issues. -- 109.193.27.65 (talk) 19:48, 7 June 2010 (UTC)[reply]
Thank you for the tips. I was thinking about installing a time-server updater, but I'm not sure that would solve the problem, since it seems to be Linux's tinkering with the BIOS's clock causing it. I'm running Linux all by itself on the computer, with no other operating system to fiddle with the clock.--el Aprel (facta-facienda) 20:14, 7 June 2010 (UTC)[reply]
Well, in that case, ntp seems like the way to go, especially because of the way it keeps the clock in sync (it sloooows dooown graaaduaaalllyyy or spdsupvryfast, but doesn't do "jumps"). Or maybe it's the hwclock --systohc that's run as part of the Linux shutdown sequence that messes with your BIOS settings, so it would be sufficient to disable that particular line? Check /etc/init.d/hwclockfirst.sh and /etc/init.d/hwclock.sh - but be sure to read the inline documentation of these files first. Maybe something is messing with your /etc/adjtime file? -- 109.193.27.65 (talk) 20:27, 7 June 2010 (UTC)[reply]
Thank you, I will check those files. I should have mentioned that I use my BIOS to start up at a specific time in the morning, so I do need it to always have the right time and not let Linux mess with that. Otherwise, I agree ntp would be the easiest solution to my problem.--el Aprel (facta-facienda) 20:36, 7 June 2010 (UTC)[reply]
Well, for that, you could try to suspend your Linux to disk (not to RAM) and see if maybe hwclock --systohc isn't called during a suspend (I assume your boot loader defaults to Linux, so there shouldn't be any issues with one OS booting up while the other one is suspended instead of being properly shut down). Waking up from suspend should be faster than doing a cold boot, anyways. -- 109.193.27.65 (talk) 20:54, 7 June 2010 (UTC)[reply]
If it still doesn't work, please post the output of
grep UTC /etc/default/rcS
and
hwclock --debug
I think these two might not agree on whether your BIOS clock is running on UTC or not, when it is in fact not. -- 109.193.27.65 (talk) 21:04, 7 June 2010 (UTC)[reply]
Thank you! That was exactly it: UTC was set to "yes" in /etc/default/rcS. I changed it to "no" and now the clock is working fine. Thanks again, --el Aprel (facta-facienda) 22:02, 7 June 2010 (UTC)[reply]
You're welcome. :-) Actually, there's a question during installation that asks if your BIOS clock is set to UTC, so I guess you either gave the wrong answer there during install or you picked a setting where the installer asks next to nothing (guessing on that one, as I don't want to look up the priority level of that question right now), skipping over the question and selecting what it considers the smartest choice. That's the problem with the computers of today - any attempt at artificial intelligence will sooner or later turn into a case of genuine stupidity. ;-) -- 109.193.27.65 (talk) 22:16, 7 June 2010 (UTC)[reply]
I'm not that experienced with *nixes but still a bit confused by this, and since the issue is resolved hopefully no one minds me taking this OT. I was originally going to suggest the UTC thing more clearly in my first answer but then got confused, because if I understood the OP correctly, Linux programs were reporting the incorrect time. I would have thought that if you set Linux to use UTC in the BIOS, then once Linux corrects the time in the BIOS to be UTC, provided the timezone is set correctly, all programs should report the correct time (well, unless you tell them to report UTC). (If you checked the BIOS manually it would appear to show the incorrect time if you weren't aware it was set to UTC, of course.) But from reading the above, it sounds like it was more complicated than that, and the problem appeared to be more than Linux trying to use UTC when the OP didn't want it; it was also different programs not agreeing on whether or not the BIOS was using UTC. Or to put it a different way, even if the OP had wanted Linux to use UTC and so had correctly answered the installer option, it seems like there would still be a problem with the OP's config, based on my understanding of the above. Do multiple programs ask you whether your BIOS is set to UTC? Nil Einne (talk) 23:04, 7 June 2010 (UTC)[reply]
IIRC, the installer question isn't simply if you want to use UTC, but rather "Is your hardware clock set to UTC?". So if you want to use UTC, you have to set the clock to UTC in the BIOS before you start your installation, then when the question pops up, answer "yes". So what we were seeing here was probably that if it's not running on UTC and you answer yes, it'll store a wrong time in the hardware clock during the next shutdown/reboot, which will bite your computer in its shiny metal posterior upon next boot. This isn't even WP:OR, though - just wild guessing. If you want to experiment further, try to change the settings here and there and monitor the output of hwclock --debug so you can compare what the system thinks with what the hardware clock thinks. I'd do it myself, but: Hanc marginis exiguitas non caperet. -- 109.193.27.65 (talk) 23:26, 7 June 2010 (UTC)[reply]
Actually, my original post was incorrect. What had happened was #hwclock and #date were returning the same (incorrect) time until I tried setting the correct time with #hwclock --set, and I think I misinterpreted the result since it must have been UTC time since that was the setting. Interestingly enough, hwclock --localtime did not change the setting in /etc/default/rcS, so maybe the "UTC=yes/no" setting is contained in more than one file and they weren't consistent? Just a guess.--el Aprel (facta-facienda) 03:05, 8 June 2010 (UTC)[reply]

laserdisk

I read the article but it doesn't say:

how many megabytes does a laserdisc hold? —Preceding unsigned comment added by Sunnyscraps (talkcontribs) 13:34, 7 June 2010 (UTC)[reply]

Assuming you mean Laserdisc, it appears they can hold up to 60 minutes of video per side in an analog format which, as far as I know, is not easily converted into a clear-cut digital number of MB. On the other hand, assuming that the video is approximately VHS quality, this would suggest that two hours of VHS-quality video is about 1 GB (going on the fact that it says a 1.46 GB DVD can hold about 3 hours of VHS-quality video). So that would be about 500 MB per side of the laserdisc. That's only a rough approximation though, of course. Buddy431 (talk) 14:19, 7 June 2010 (UTC)[reply]
The BBC Domesday Project's LaserVision Read Only Memory disks, an adaptation of Laserdisc that did support digital files, apparently stored up to 300 MB per side. -- Finlay McWalterTalk 14:39, 7 June 2010 (UTC)[reply]
However, as noted in Laserdisc#Comparison with VHS, Laserdisc quality was better than VHS (and I can also say from personal experience that it was definitely noticeable). Somewhere in between DVD and VHS is the usual approximation.
Another perhaps interesting comparison is that of the CD Video, basically a combined Laserdisc+CD that was CD size. This could store up to 5 minutes of video plus 20 minutes of audio. I don't however know if the video was stored CLV or CAV (I would guess CLV since it was apparently fairly late and CDs are CLV). But regardless, this would mean you gave up 54 minutes of audio, or perhaps ~475MiB of the disc for 5 minutes of video. You may think then that a full size disc (capable of storing 30 minutes instead of 5 minutes if it's CAV or 60 minutes instead of 5 minutes if it's CLV) could theoretically store 2.8GB - 5.6GB.
(There were also Video Single Disc although it's not clear if these could store more video.)
However, this would potentially require technology that wasn't properly developed until the CD-ROM, hence the reason systems like the one FW mentions above were far more limited (actually, reading the article, it appears the LaserVision also included audio and vision in addition to the digital data). In other words, this may not really be a fair comparison. You could, for example, in the same vein wonder what you could store with a 30 cm DVD or Blu-ray (I expect this would be highly theoretical, since there are lots of problems you're likely to encounter with such a system).
P.S. Note that in later variants Laserdiscs could store a minimum of one pair of full CD-quality digital audio tracks in addition to the video.
Nil Einne (talk) 22:26, 7 June 2010 (UTC)[reply]

data to audio?

Can binary data be encoded into audio? Question Factory (talk) 13:36, 7 June 2010 (UTC)[reply]

I'm not really sure I understand the question. What sort of binary data? Why would you want to encode it into sound? Note that digitised sound (wav files, MP3s) are binary data and are played as sound. --Phil Holmes (talk) 13:48, 7 June 2010 (UTC)[reply]
Assuming you really mean encoding binary data into sound waves (and not vice versa), then one simple encoding is to use long bleeps for 1s and short bleeps for 0s. This encoding is used in the transmission of Morse code, for example. For more sophisticated methods, see amplitude-shift keying, Kansas City standard and frequency-shift keying. Gandalf61 (talk) 14:46, 7 June 2010 (UTC)[reply]
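To make one of those encodings concrete, here is a minimal Python 3 sketch of frequency-shift keying that writes each bit of a byte string as a short tone into a WAV file. The 1200/2400 Hz tones and the 10 ms bit length are arbitrary choices for illustration, not any particular standard:

# Toy FSK: a 0 bit becomes a 1200 Hz tone, a 1 bit a 2400 Hz tone.
import math, struct, wave

RATE = 44100        # samples per second
BIT_SECONDS = 0.01  # duration of each bit's tone
FREQ = {0: 1200, 1: 2400}

def bits(data):
    for byte in data:
        for i in range(7, -1, -1):  # most significant bit first
            yield (byte >> i) & 1

samples = []
for bit in bits(b"hello"):
    for n in range(int(RATE * BIT_SECONDS)):
        samples.append(int(32000 * math.sin(2 * math.pi * FREQ[bit] * n / RATE)))

w = wave.open("data.wav", "wb")
w.setnchannels(1)
w.setsampwidth(2)  # 16-bit signed samples
w.setframerate(RATE)
w.writeframes(struct.pack("<%dh" % len(samples), *samples))
w.close()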
It's very easy just to take some binary data and call it PCM audio, but that will generally either sound like noise or like nothing at all. Representing your data in an audio format that actually conveys anything worthwhile (that is either meaningful or musical) is much more of a challenge, and depends entirely on what kind of data it is, what features of it you want to hear, and what synthesis technology you choose to use to bring that about. You might choose to use the data to drive parameters in a software synthesizer or perhaps do more sophisticated work on it with a audio programming language. Doing this is a lot like graphing data - you need to decide what to graph and figure out a way to do it so you get a worthwhile, meaningful result. -- Finlay McWalterTalk 14:49, 7 June 2010 (UTC)[reply]
Cue the old guys...I've spent hours adjusting the read/write head of an ordinary cassette tape recorder to get data off the notoriously finicky Sinclair ZX81 cassette tapes. Luxury was having a tape counter to know where your latest iteration of your program was stored... --Stephan Schulz (talk) 14:58, 7 June 2010 (UTC)[reply]
Even more obvious... how about a phone modem? It encoded binary data as audio that was transmitted over telephone lines. I wonder if kids today would recognize the ooo-weee-ooo-weee-grrrrrr of a modem sync since they've (luckily) had no reason to ever hear it. -- kainaw 16:58, 7 June 2010 (UTC)[reply]
Phone modem??? Get off my lawn, hedonist! In my days, we were happy when we got an acoustic coupler, because before that, we had to whistle into the phone with a boatswain's pipe. Oh, and of course, we had to carry our bit buckets uphill both ways! -- 109.193.27.65 (talk) 19:40, 7 June 2010 (UTC)[reply]

CSS zoom vs HTML size=

What are the CSS font size percentages that would equal the HTML size="-1" and "-2" relative sizes? --70.167.58.6 (talk) 17:02, 7 June 2010 (UTC)[reply]

They mean two different things. A percent is a percent of the font size itself. So, if it is a 10pt font and you ask for it to be 80%, you get an 8pt font. If you ask for a font size of -1, you might get a 9pt font. You might get a 9.5pt font. You might get an 8pt font. The web browser is only being asked to make it one size smaller, but it is not told exactly how much smaller. My experience is that Firefox reduces by 1 point. So, if the base is a 10pt font, -1 will produce a 9pt font and +1 will produce an 11pt font. Because the base is 10pt, -1 is 90% and +1 is 110%. If you began with an 11pt font, -1 would produce a 10pt font, so it would reduce it to 91%, not 90%. -- kainaw 17:11, 7 June 2010 (UTC)[reply]
I think the issue here is that the HTML specification doesn't give absolute size differences for FONT SIZE values. Each browser presumably handles it a bit differently. CSS by contrast is meant to be handled fairly uniformly. I'm not sure there is a way to do a direct 1-to-1 conversion that looks the same on every browser. (All of this without assuming that the user has their own zoom/font settings.) --Mr.98 (talk) 17:58, 7 June 2010 (UTC)[reply]
So there's no relative font sizing from the default size in CSS? Obviously, I want to keep it a whole number and not have 10.3pt type. --70.167.58.6 (talk) 00:13, 8 June 2010 (UTC)[reply]
There is relative font sizing; it is percentage-based (e.g. 90%, 110%). It does not round to whole numbers. As for wanting whole numbers, I honestly don't see the reason for the worry. The difference between 10pt and 10.3pt is arbitrary and immaterial, as long as you are relatively consistent. (It has no relation to the actual size it will be in pixels on any given user's screen—it will be so many millimeters on my laptop, so many millimeters on my monitor, etc.) If you are really concerned with having absolute value font sizes, you will have to set them absolutely (e.g. create styles for the relative increases, decreases, etc. with hardcoded values). Of all of the many complaints one can have about CSS (and I have many!), this seems rather low on the totem pole to me. --Mr.98 (talk) 03:18, 8 June 2010 (UTC)[reply]
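For what it's worth, a sketch of the CSS side. The 80% per step is a guess, not a specified value; browsers don't agree on the exact factor behind size="-1", and CSS's own smaller keyword is yet another approximation:
 /* approximate equivalents -- the 80% step is an assumption, not a spec value */
 .minus1   { font-size: 80%; }       /* roughly size="-1" */
 .minus2   { font-size: 64%; }       /* roughly size="-2", i.e. 80% of 80% */
 .minus1kw { font-size: smaller; }   /* CSS's own one-step-down keyword */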

WWDC 2010

Will the Keynote be available via Apple's Keynote video podcast? Chevymontecarlo 19:13, 7 June 2010 (UTC)[reply]

Answered my own question. The video came up in my download list earlier today. Chevymontecarlo 16:55, 8 June 2010 (UTC)[reply]


June 8

Keeping Windows error messages in the background.

Is there a setting or shareware product that would prevent Windows error windows from popping up in front of the currently running application window? I'm trying to design a kiosk that runs in fullscreen mode, but occasional OS warnings and messages pop up in front of it. How can I keep my frontmost window always in front? --70.167.58.6 (talk) 00:11, 8 June 2010 (UTC)[reply]

I'm not sure you can totally overcome all modal windows. A better approach is to set up the Windows machine to not display said messages, which can be done by fiddling with the administrative settings. --Mr.98 (talk) 00:51, 8 June 2010 (UTC)[reply]
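If you control the kiosk application, you can at least mark its window "topmost" via the Win32 API, which keeps it above ordinary windows (though, as noted above, not above everything). A sketch using Python's ctypes; the window title is made up:
 import ctypes
 user32 = ctypes.windll.user32

 HWND_TOPMOST = -1                      # place above all non-topmost windows
 SWP_NOSIZE, SWP_NOMOVE = 0x0001, 0x0002

 hwnd = user32.FindWindowW(None, u"My Kiosk")   # hypothetical window title; 0 if not found
 user32.SetWindowPos(hwnd, HWND_TOPMOST, 0, 0, 0, 0, SWP_NOSIZE | SWP_NOMOVE)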

Writing a DVD

How do I write files (movies) to a DVD such that it will play on DVD players? I use Nero 10. —Preceding unsigned comment added by 117.204.2.32 (talk) 08:20, 8 June 2010 (UTC)[reply]

You really should consult the help file that comes with the Nero software. It'll tell you how. --Mr.98 (talk) 13:10, 8 June 2010 (UTC)[reply]
You can use conversion software like DVD Flick. Nero can write .iso files or make a DVD from .vob, .bup and .ifo files. Graeme Bartlett (talk) 22:02, 8 June 2010 (UTC)[reply]
Select DVD Video and drag your source files into the VIDEO_TS folder. If you have non-compliant files then you must use Nero Vision to convert them to VOB etc. If you don't have that, try using ConvertXtoDVD software which will eliminate the need to use Nero for creating DVDs. Sandman30s (talk) 07:57, 9 June 2010 (UTC)[reply]

All Windows desktop icons disappeared

4-year old HP desktop running Windows XP home SP3. All the desktop icons have disappeared; desktop background is still there. Everything else seems to work, including icons in the task bar. When I try to set up a new icon by dragging it onto the desktop, on releasing the mouse key it disappears. When I did this a second time with the WP icon from the Firefox address bar, a message box said this icon was already in "the folder" and did I want to replace it. Any ideas? JohnCD (talk) 08:20, 8 June 2010 (UTC)[reply]

I assume you've tried the classic turning-it-off-and-waiting-and-then-turning-it-on-again? ╟─TreasuryTagsheriff─╢ 09:01, 8 June 2010 (UTC)[reply]
Yes, I did a restart and then a complete turn off overnight and restart from cold. And I am even-as-we-speak running a virus scan. JohnCD (talk) 09:49, 8 June 2010 (UTC)[reply]
I do not remember the old Windows XP system very well, but in Windows Vista and Windows 7, you can right-click the desktop, select "View" and then toggle "View desktop icons" on/off. --Andreas Rejbrand (talk) 11:53, 8 June 2010 (UTC)[reply]
That did it, thanks! It wasn't quite like that, but under the "Arrange icons by" submenu there was a "Show desktop icons" item which had somehow unchecked itself. Thanks again! JohnCD (talk) 13:00, 8 June 2010 (UTC)[reply]

Thinking about animations.

For an assignment, I've been told to explain what should be taken into account with regards to making animations for use on the web (probably .gif files, maybe some Flash animations, it's not specified), with particular reference to accessing the Internet from PDAs, mobile phones, etc. I figure image size (on such a small screen) and file size (on such slow connections on low-power machines) are particularly relevant; anything else? I don't need an explanation, I can probably figure it out just as soon as I know what it is I might talk about. Thanks in advance - Vimescarrot (talk) 09:17, 8 June 2010 (UTC)[reply]

The frame rate is important. The higher the frame rate, the higher the CPU load. Mobile devices have less-powerful CPUs than desktop and laptop devices. (Of course, there are also older, less powerful machines — e.g., Pentium IIIs — still surfing the net, too.) Also important is the number of things on the "stage" of the movie. If you just have one thing on the stage moving around against a white background, then the CPU won't have to draw as much. But if you have many actors (what we call sprites) moving around, then the CPU load will go up. These are issues to consider both with flash and animated GIFs. I have made both animated GIFs and flash animations that bog down high-performance desktops.
Also consider the implications of adding music to your animation. Music tastes vary quite a bit, so you should add a volume slider or off button in case they don't like it.--Best Dog Ever (talk) 09:54, 8 June 2010 (UTC)[reply]
That helps. Thanks Vimescarrot (talk) 11:15, 8 June 2010 (UTC)[reply]

Excel indirect formula (not Excel indirect cell location)

Is it possible to have a range of cells in Excel all refer to a FORMULA in some other cell and evaluate that? I don't mean the INDIRECT function, which returns the data pointed to by a cell address in another cell.

 EG:
A1={=B1+C1} or {"B1+C1"} depending upon how it works
A2={=FANCYFUNCTION(A1)} which evaluates as {=B2+C2}
A3={=FANCYFUNCTION(A1)} which evaluates as {=B3+C3}
 then I change A1 to be
A1={=B2*5} or {"B2*5"} depending upon how it works and now I get
A2={=FANCYFUNCTION(A1)} which evaluates as {=B3*5}
A3={=FANCYFUNCTION(A1)} which evaluates as {=B4*5}

-- SGBailey (talk) 15:33, 8 June 2010 (UTC)[reply]

I'm only used to the Google Documents spreadsheet, but I doubt that Excel has what you want either. Programming languages with first-class functions do what you want, though. Paul (Stansifer) 18:38, 8 June 2010 (UTC)[reply]
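To illustrate the point about first-class functions, a small Python sketch in which the "formula" is a value you can redefine and re-apply to every row, much like the hypothetical FANCYFUNCTION above:
 # rows stand in for the B and C columns of the spreadsheet
 rows = [{'B': 1, 'C': 2}, {'B': 3, 'C': 4}]

 formula = lambda r: r['B'] + r['C']     # like A1 holding "B+C"
 print([formula(r) for r in rows])       # [3, 7]

 formula = lambda r: r['B'] * 5          # "edit A1", then re-evaluate
 print([formula(r) for r in rows])       # [5, 15]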
You're best off using a macro for that sort of thing. (Does Google Spreadsheets have anything like macros? It sounds unlikely to me.) Dmcq (talk) 08:27, 9 June 2010 (UTC)[reply]

LCD monitor vs LCD TV

Why are LCD monitors more expensive than LCD TVs? --212.120.247.67 (talk) 19:34, 8 June 2010 (UTC)[reply]

Well, I don't know about the economics of it, but generally speaking LCD monitors have higher resolutions than LCD televisions, and have more variability in terms of refresh rate. I'm sure that must have some kind of effect on how they pack the transistors in there and so forth. --Mr.98 (talk) 23:28, 8 June 2010 (UTC)[reply]
As much as anything, I'd wager it has to do with economies of scale, and as Mr 98 said, the density of pixels. Riffraffselbow (talk) 13:33, 9 June 2010 (UTC)[reply]
You can certainly find a given resolution TV to be more expensive than the same PC monitor, and vice versa; it's all about the features (and to some extent, yes, pixel density). TVs tend to use lower-fidelity display technology (such as twisted nematic), since overall quality is not as important due to the user typically being farther away. At the same time, any LCD TV will include an ATSC/QAM tuner which, at full resolution, is a fairly sophisticated device. Monitors will probably include fewer inputs overall, but they will be of higher quality (DVI, DisplayPort, HDMI, RGB) compared to most TVs, which sport HDMI and a few others (if you're lucky). With all that being said, now that digital outputs from PCs (DVI or HDMI) are ubiquitous, and digital input on TVs is the same, why not just get an LCD TV for your next monitor if it will save you some money? IMHO there are some very nice 32" units out there now that are well priced, perform the same as a monitor (same resolution, refresh, etc.) and are far cheaper than even a 24" monitor. --144.191.148.3 (talk) 19:21, 9 June 2010 (UTC)[reply]

ssh authorized_keys

I need to maintain a central /etc/ssh/authorized_keys file on one machine, as opposed to having per-user authorized_keys files in ~/.ssh/authorized_keys. For forced commands, this is easily done by prepending the public key with

command="test \"$USER\" = \"allowed_user_name_here\" && /forced/command/here". 

However, I need to allow interactive logins, too, and what happens when I place a public key in /etc/ssh/authorized_keys is that whoever owns the corresponding private key can log on as any user known to the system, including root. Obviously, real security doesn't work that way. So how can I make sure that each public key in the central /etc/ssh/authorized_keys file is valid for one specific user only, without blocking that user from running interactive sessions? -- 109.193.27.65 (talk) 22:15, 8 June 2010 (UTC)[reply]

It doesn't really look like the authorized_keys file is designed to work that way. The documentation specifically says the file shouldn't be accessible by anyone other than the intended user. Why do you need a central authorized_keys file? Oh, hmm... could you just make the /forced/command/here for each key be "/bin/login $USER" (or the relevant username)? Indeterminate (talk) 04:09, 9 June 2010 (UTC)[reply]
I need it in a central location because the machine is basically a master copy of several clones, where /home is swapped out with the home partition needed for that specific clone, so I would have to implement some sort of detection mechanism that checks whether a new /home has been mounted and follows up with a copy/merge into the individual ~/.ssh/ folders. Also, I do not want regular users to be able to add their own additional keys.
The way I solved it for now is not using a central file, but a central path and individual file names:
AuthorizedKeysFile      /etc/ssh/%u_authorized_keys
Though I'm not sure I'm going to leave it that way. Maybe
AuthorizedKeysFile      /etc/ssh/authorized_keys.d/%u 
would make more sense.
What both solutions have in common is that the files are stored in a central location and that ordinary users can't change them because of insufficient write permissions. -- 109.193.27.65 (talk) 08:11, 9 June 2010 (UTC)[reply]
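For completeness, a sketch of the setup the second variant implies, assuming GNU coreutils' install is available; the key file is simply named after the user:
 # in /etc/ssh/sshd_config:
 AuthorizedKeysFile      /etc/ssh/authorized_keys.d/%u

 # root-owned directory and files, so ordinary users can't add keys:
 install -d -o root -g root -m 755 /etc/ssh/authorized_keys.d
 install -o root -g root -m 644 alice.pub /etc/ssh/authorized_keys.d/alice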

Vertical and Horizontal retrace

Quote from the "Principles of analogue television" article:

"A CRT television displays an image by scanning a beam of electrons across the screen in a pattern of horizontal lines known as a raster. At the end of each line the beam returns to the start of the next line; at the end of the last line it returns to the top of the screen."

Why wasn't analogue television designed with a boustrophedon-style signal, where the first line is traced from left to right and the second from right to left, and when the whole screen is drawn, the next frame is drawn from bottom to top? This would make both horizontal and vertical retrace unnecessary. But of course I have great respect for the scientists who developed television, and they must have had some reason for not doing it like this. So my question is: what's wrong with my thinking? —Preceding unsigned comment added by 83.226.129.174 (talk) 22:53, 8 June 2010 (UTC)[reply]

There is no need to complicate the design further by making it trace right to left, then left to right, and so on. The design is already complicated because a television must stay in sync or the picture will be flipped left-to-right or top-to-bottom. You can add safeguards against the flipping, but that makes it more complicated. The "retrace" is trivial. It is nothing more than a change in voltage on the magnetic controller. Like flipping a switch, it is nearly instantaneous to jump from one point on the screen to another. -- kainaw 23:07, 8 June 2010 (UTC)[reply]


I don't really know for sure, but I'd guess it's to do with graceful handling of timing drift and failure of horizontal lock. With the left-to-right-only scheme, if horizontal lock is lost the image will roll slowly leftward or rightward; that did occasionally happen with the old analog TVs of my youth. So a picture like:
  _abcdefgh_
  _ABCDEFGH_
(where _ is the horizontal blanking gap) would roll to become:
  __abcdefgh
  __ABCDEFGH
But with an alternating scheme, a differential of even 1% will cause utter loss of vertical coherence (that is, a vertical bar would become two fuzzy vertical bars) and essentially destroy the picture. So:
  _00001000_  
  _00001000_  
would become
  __00001000  
  00001000__  
Given that, when analog standards like PAL and NTSC were developed, they were targeting devices built from wobbly and very analog components, tolerance to such drifts had to be a major consideration. -- Finlay McWalterTalk 23:16, 8 June 2010 (UTC)[reply]
Correcting my last: losing horizontal lock on an alternating-direction scheme would completely diagonalise the picture, making it instantly garbage. -- Finlay McWalterTalk 23:19, 8 June 2010 (UTC)[reply]
Doing it one direction only improves the registration between lines. You might notice with a printer that if you ask for high quality rather than fast it will print each line in one direction only, and that's using a stepper motor which is digital. You'd never have got a decent picture with the analogue electronics. Dmcq (talk) 08:20, 9 June 2010 (UTC)[reply]
The fast retrace from right to left is not trivial. It requires a fast change in current through the magnetic deflection coils. Due to their inductance, that corresponds to a spike of high voltage. That provides the source of high voltage that a CRT needs. If the OP's bi-directional scan were used, TVs with CRTs would be more expensive due to their need for a separate EHT supply. Using the existing line scan rates, such TVs would emit the unpleasant noise of a 7812.5Hz sawtooth. Cuddlyable3 (talk) 20:23, 11 June 2010 (UTC)[reply]

June 9

File hosting services

Hi. I've been trying to find a file hosting service that meets the following criteria. Can you please help me?

  • The service must have unlimited file storage.
  • The service must have no maximum file size.
  • The service must have "direct access" to downloads, with no CAPTCHA or wait time to download a file.
  • The service must not have a traffic or bandwidth limit.
  • The service's files may not expire (be deleted off of their servers) after a period of time. (However the service may delete the file based on inactivity since the file was last downloaded.)
  • And, most importantly, the service may not charge for the above services.

I would appreciate your help. Thanks! Samwb123T (R)-C-E 00:58, 9 June 2010 (UTC)[reply]

Hi, after you have found a suitable file storage service, can you find a plane for me as well? I want it to be able to go to any place on earth in less than one hour, take up to 10,000 passengers, use water as fuel, be gold-plated and encrusted with diamonds, and, most importantly, be free? Thanks. 203.167.190.79 (talk) 03:10, 9 June 2010 (UTC)[reply]
No need to be sarcastic! There IS a file hosting service that meets those criteria: it's called hosting your own server. :) Check out droopy, for instance. Sorry, Samwb123, nobody else will let you use unlimited amounts of their storage and bandwidth for FREE. Indeterminate (talk) 03:47, 9 June 2010 (UTC)[reply]
Honestly, that would have been similar to my response too. Hosting your own server doesn't meet half of the OP's requirements. Unless you ignore the "most important" one. The OP is going to have to be flexible on at least a few of the requirements before this becomes even remotely feasible. Vespine (talk) 05:49, 9 June 2010 (UTC)[reply]
Google Documents? It allows users to store music, documents, PDFs, etc., and share them. --Tyw7  (☎ Contact me! • Contributions)   Changing the world one edit at a time! 04:32, 9 June 2010 (UTC)[reply]

Except for the maximum file size, MediaFire meets all these requirements. If your file is larger than 200 MB (or 2 GB if you're on their premium service), you can just RAR it into smaller chunks and then upload them. 82.43.89.11 (talk) 10:15, 9 June 2010 (UTC)[reply]

The OP asks for "no maximum file size." The OP should rethink this limit, unless they are interested in designing the famous infinite data compression algorithm. Disks have finite size. File systems and operating systems have maximum addressable file offsets. Even if you could convince somebody to pay for the service, buy the disks, and attach the storage with a superfast network, when you hit files larger than, let's say, 10,000 petabytes, you are going to have some serious trouble using them. The same goes for "unlimited" bandwidth, unlimited storage time, and so on. Serious data archivers, like the Library of Congress, have spent considerable effort investigating the technical and economic feasibility of large, long-term storage: see, e.g., www.digitalpreservation.gov/ - but even they do not make outrageous claims about infinite storage and infinite bandwidth. I think what we can conclude from this and other observations is that the OP has not carefully thought out his/her requirements: they should reconsider exactly what they need, and bear in mind that outrageous requirements carry outrageous price-tags. Nimur (talk) 16:13, 9 June 2010 (UTC)[reply]
I agree with the last sentence in particular—you'd get more useful results on here if you actually told us what it was you were planning to do with it. It would be easier for us to actually suggest services that worked within the reasonable practical limits of whatever your project is, or to let you know why some of them really can't work out. What you're asking for does not exist in the terms in which you are asking for it, and the reasons for it not existing are fairly obvious (bandwidth and space costs somebody money, so any service that offered up literally unlimited space and bandwidth would either have to be run as a charitable institution or would run itself into the ground). There are services that can approximate some of the requirements within reasonable limits—i.e., upper caps on file sizes or total storage, or having ads before the link, etc. But you'll have to be more specific about what you are using it for, otherwise it is as ridiculous as the request made by 203.167. --Mr.98 (talk) 19:11, 9 June 2010 (UTC)[reply]

Actually, I just found something that meets all of those criteria (except the bandwidth one, but still, it has 2 GB of bandwidth, and that's a lot). See http://www.filehosting.org/. Samwb123T (R)-C-E 01:27, 10 June 2010 (UTC)[reply]

I think we can assume the poster wouldn't charge themselves for use of a service, so buying their own equipment does satisfy all the constraints. ;-) Dmcq (talk) 12:17, 10 June 2010 (UTC)[reply]

Indifference curves

I want to visualize indifference curves in 3D. So I need a graphing calculator that is capable of doing contour graphs. For example, let U(x,y) = x^0.45 * y^0.55; I want to plot the indifference curves where U = 10 and 20. I don't want grids.

Is there any free software for this purpose? -- Toytoy (talk) 05:14, 9 June 2010 (UTC)[reply]

Sounds like something that can be done using GNU Octave. Titoxd(?!? - cool stuff) 07:51, 9 June 2010 (UTC)[reply]
Here are some resources that I found while searching: an archived Math Desk discussion about Octave and economic indifference curves; the Octave Econometrics package (which does not have indifference curves, but may be useful anyway); this handy Octave plotting reference; and of course, because Octave provides a MATLAB-like interface for almost all common plot functions, you can read about MATLAB 3D line-plotting and 3D surface-plotting, and see how many of those commands work in Octave (most everything should be compatible or only slightly different). You can fall back to the built-in docs inside Octave, or check the Octave plotting documentation. Nimur (talk) 20:10, 9 June 2010 (UTC)[reply]
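If you end up preferring Python over Octave, matplotlib (also free) does contour plots directly. A minimal sketch for the example function in the question; the plotting range is an arbitrary choice:
 import numpy as np
 import matplotlib.pyplot as plt

 x = np.linspace(0.1, 60, 300)            # start above zero, range is arbitrary
 X, Y = np.meshgrid(x, x)
 U = X**0.45 * Y**0.55                    # the utility function from the question
 cs = plt.contour(X, Y, U, levels=[10, 20], colors='black')
 plt.clabel(cs, fmt='U = %g')             # label each indifference curve
 plt.xlabel('x'); plt.ylabel('y')
 plt.grid(False)                          # no grid, per the question
 plt.show()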

iSCSI

I have a few questions in regard to iSCSI Storage Area Networking, and I want to learn everything that I can about iSCSI SANs.

  1. As iSCSI initiators are connected to iSCSI SANs over the Internet, without any physical connection, is the limit on the number of hosts an iSCSI SAN can have fixed to specific hosts, or is it a limit on how many hosts can be connected at any one given time?
  2. How does the iSCSI SAN know which parts of its storage pool belong to which hosts? Unlike a NAS, where this information is stored onboard the server, where is this information stored with an iSCSI SAN? —Preceding unsigned comment added by Rocketshiporion (talkcontribs) 06:44, 9 June 2010 (UTC)[reply]
Reformatted and title added. --217.42.63.244 (talk) 07:20, 9 June 2010 (UTC)[reply]
The iSCSI system uses a pretty big ID (similar to a GUID) to track initiators so there is no practical limit on connections other than what the host is willing to allow (meaning, many more than you would actually *want*). This is usually licensed by the host vendor in a realistic way, and limited because each initiator must be assigned a storage partition or group in a 1:1 way (1:many and many:1 is possible but is basically an extension of 1:1). The host controller has a lot of intelligence (basically an embedded computer) to allow it to keep track of local disks and clients. However, the iSCSI system is very fragile; in my experience you are nuts to want to truly use it across the internet, especially with a lot of hosts. It is a very low level protocol so it won't stop you from doing foolish things like assigning two systems with non-concurrent filesystems to the same storage partition, and having them subsequently destroy each other. For more information on this it's useful to investigate the respective filesystems you plan on using; iscsi doesn't really care. --144.191.148.3 (talk) 16:18, 9 June 2010 (UTC)[reply]

how to simulate a handoff scenario?

I was studying the different handoff techniques in WLAN IEEE 802.11x and was thinking about an idea of my own which I think might, in its own way, be able to reduce the handoff latencies in MAC-layer scanning. But I need a simulation to find out if it would really work. Should I use MATLAB for it? Does it provide built-in functions for IEEE 802.11 networks, and specifically for handoff scenarios? Or do I have to write the code for it? Can anybody please explain how such simulations work? An example or a link would be very helpful. Thanks. --scoobydoo (talk) 07:37, 9 June 2010 (UTC)[reply]

If you have consistent data to model handoff, I would *love* to see it. I have done work on 802.11 networks many times in the past, and handoff is such a crap-shoot of idiosyncrasies between each brand of access point and wireless client device that there's no good way to predict it; you just have to set it up, test it, and hope for the best when the users start to swell. --144.191.148.3 (talk) 16:25, 9 June 2010 (UTC)[reply]

So you are saying it is best to go for real-time experiments with different brands of APs and MSs? Won't computer simulations work? Actually, I was thinking of a handoff based on GPS measurements of the position of the MS and a hexagonal cell structure for AP coverage areas. But I don't know whether it will work out. I thought maybe it was possible to simulate handoffs through programming and such. I could not find anything in the MATLAB communication toolbox or Simulink blocksets... --scoobydoo (talk) 17:31, 9 June 2010 (UTC)[reply]

Not knowing that much about matlab, I would think you at least need to provide some constants like handoff probability related to proximity to adjacent stations, handoff speed for various signal strengths (and given certain negotiated speeds, vendors, assumptions for client activity, etc.) and other figures that can't be mathematically derived. Then, as your user count increases (as each user affects the others' ability to see base stations) you need to almost move to a finite element approach where you can take all these things into account for a snapshot model. It sounds super duper hard, but if you can pull it off you will probably have a model worth selling to Cisco or other wireless big-names since they are very interested in software approaches to optimizing wireless networks. There are only a few companies that have even tried the software optimization approach (the assumed goal of this exercise), and they are far from perfect. Personally (given that I have done this more than a few times) I would say real world testing (and ways for base stations and clients to react to real world indicators when negotiating handoff) will always trump computer models; there are simply too many variables like interference, objects blocking the signal, behavior of other stations during certain load conditions, etc. --144.191.148.3 (talk) 19:05, 9 June 2010 (UTC)[reply]

Time to copy files

Why does copying thousands of small files that total 100 MB take longer than copying one 1 GB file? This is on Windows 7, using the crappy default copy service. 82.43.89.11 (talk) 10:24, 9 June 2010 (UTC)[reply]

My guess is that the default copy service is not optimised for bulk copying, and so does a target directory update after each individual file copy. The longer copy time would then be due to the additional overhead of thousands of directory updates. Gandalf61 (talk) 10:54, 9 June 2010 (UTC)[reply]
Is there any way to reduce this overhead? 82.43.89.11 (talk) 11:10, 9 June 2010 (UTC)[reply]
On this topic: make sure you have large areas of open space on your hard drive, otherwise the write head will be frantically scanning back and forth, trying to find space. As a further note, it will help massively if the data starts on a different hard drive to begin with (unless you're cutting and pasting, in which case you definitely want the same hard drive, if possible) Riffraffselbow (talk) 13:38, 9 June 2010 (UTC)[reply]
Riffraffselbow, write heads don't scan back and forth looking for free space. Free space is found by searching the volume bitmap, which is easily cached in RAM (the bitmap for a 1 TB volume would typically be 30 megabytes), so this search takes no time at all by disk-writing standards. If the free space on the destination drive is in small fragments then the write will take longer because the disk head will have to seek between available regions, but it's seeking to a precomputed track, not searching for a free one. Gandalf61, metadata updates are not under application control. The OS always caches them and sends a bunch to the disk at once. Pre-creating zillions of files could easily make things worse, since the metadata for each file would probably be written to disk before the file was written, and would then have to be updated later. If you create, write, and close each file in a short time, there will probably be just one metadata write per file. Mr.98, it's hard for me to believe that Firewire latency would noticeably affect the copy time. Firewire hard drives work at the disk-block level, not the file level, so there isn't a per-file wait time. It might be different when copying to a network share, though. I don't know how SMB works, but there could be one or more network round-trip waits per file. -- BenRG (talk) 23:16, 9 June 2010 (UTC)[reply]
My understanding is that it's kind of the equivalent of sending one large package through the postal service versus sending 1000 small ones. The overall data volume/weight might be the same, but the postal service is going to have a much easier time processing the one big one (look at address, note proper postage, put in right bin) than the 1000 small ones (where each one has a different address and postage). Sure, you might need a bigger fellow to carry that one big one, but you only have to process one package in the end. I find this is especially so when using external hard drives with Firewire connections, where the speed of the transfer of the file data is very fast, but the speed of opening up a new file for writing, and then closing it again, is very slow when multiplied by a thousand. --Mr.98 (talk) 11:59, 9 June 2010 (UTC)[reply]
There's two reasons. Firstly, and probably chiefly, just because those thousands of files are all in the same folder there's no guarantee that they're stored on contiguous clusters on the disk; indeed, it's very likely that they're not, and often that they're distributed (seemingly at random) across the entire disk surface. When the copy program opens each file, the disk head has to seek (move across the disk surface) before it can read the next block - this seek time on a modern hard disk is something around 7ms. Then it has to wait for the data it wants to spin around - this rotational delay averages at around 3ms. So, on average, every time the next file (strictly the next cluster of data) isn't stored adjacent to the last, the disk has to wait for 10ms, which means it's not reading any data during that time. If the files (strictly, clusters) are distributed fairly randomly across the disk, this delay will dominate the time actually spent reading data, and performance will be very slow. OS designers know this, of course, and built layers of caching (and often readahead) to help minimise this, but if the distribution is really random, caching doesn't really help at all. Strictly this can be a problem even for that one 1gb file too, as there's no guarantee that its clusters are adjacent to one another either - but the filesystem takes a very strong hint and tries its best (bar the disk being very fragmented already) to keep things either in one or only a few contiguous runs. Secondly there's the problem of clustering overhead. If you create a file, it takes up a whole number of clusters on the disk; even a 1 byte file takes up a whole cluster. Cluster size on NTFS is 4kb by default. In practice the block device layer of the OS, on which the filesystem is built, deals in whole clusters, so it will read and write the whole cluster when you read that file. If the files each take a small fraction of the cluster size, most of that read and write is overhead. Large files make full use of the clusters, so they have minimal cluster overhead. -- Finlay McWalterTalk 15:19, 9 June 2010 (UTC)[reply]
I would say that with SMB (especially in Windows) the per-file overhead related to the directory information and file-handling information discussed here has more to do with it if your files are 100 KB or larger; if not, the slowdown will be the disk. Say we are going with the 100 MB of 1000 files figure; that's 100 KB per file. A modern disk can still read 100 KB files at 10 MB/sec or better; the really egregious slowdowns don't happen until the files are around 5 KB in size and scattered all over the disk. Want to combat this? Archive your files, in a simple way like the tar format or in a compressed way like the zip format. If you really do have 100,000 1 KB files (100 MB worth), the time it takes to zip on one end and unzip on the other (since disks are pretty fast these days) will pale in comparison to the time it takes SMB to wade through that many files. SMB is going to have more overhead than your disk in almost any practical case where multiple files are involved; I have observed this many times. --144.191.148.3 (talk) 16:37, 9 June 2010 (UTC)[reply]
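The archive-first trick is one line in most scripting languages; for example in Python, with a made-up directory name:
 import shutil
 # bundle the whole folder into bundle.zip, copy that one file, then unzip it
 shutil.make_archive('bundle', 'zip', r'C:\many_small_files')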
In NTFS, the content of small files is stored in the metadata record itself, and doesn't use a cluster. Explorer will report these files as using 4K (or whatever the cluster size is), but it's wrong. Also, although disk space is allocated in cluster units, NT will only read or write the sectors that it actually needs, as far as I know. Since NT's caching is tied to x86 paging, which uses 4K pages, you will usually end up with reads of 4K or larger anyway; but at the end of a file, where NT knows some of the sectors in the cluster are meaningless, it won't read or write those sectors as far as I know. -- BenRG (talk) 23:16, 9 June 2010 (UTC)[reply]

Getting video from my camera NOT using Firewire

Hi, I have a mini-dv video camera that has Firewire as its main output. I just replaced my MacBook (old one was damaged beyond repair) and the new model does not have a Firewire port. Is there any other way I can get video from my camera to my laptop? Cheers, JoeTalkWork 11:58, 9 June 2010 (UTC)[reply]

Since you didn't give us the model number of your camera, or anything else that can help us help you, you can answer this better than us. Does the camera have any port other than Firewire, like USB for example? If not, then obviously you'll need to add a Firewire port to your laptop in some way (USB to Firewire, if that exists (a quick search suggests such adapters may exist, but I wasn't able to find anything for sale that was clearly what was wanted), PCMCIA, ExpressCard; which options are available to you will depend on your laptop, for starters), or use a different computer or device, or buy a new camera. It occurs to me, since Firewire allows devices to connect to each other, that you may be able to connect the camera to an external hard drive with Firewire and transfer directly, and then, presuming the hard disk has eSATA and/or USB which your laptop also has, you can then transfer to the laptop; but I don't know if that is generally possible, and in any case it will likely depend again on your camera, which as I've said we don't know (a good way to find out may be to read the manual). If the camera has a removable hard disk or other form of storage, it may be possible to buy something which can connect that to your laptop in some way, but again it will depend on your camera. Nil Einne (talk) 12:12, 9 June 2010 (UTC)[reply]
Firewire to USB adapters and cables certainly do exist. They're quite handy. --LarryMac | Talk 12:38, 9 June 2010 (UTC)[reply]
Some cameras have a memory card, or the option to add one and the software to move files from hard drive to card. I assume that yours doesn't because this would solve your problem easily. Dbfirs 14:43, 9 June 2010 (UTC)[reply]
Sorry I didn't give much information before. The camera is a Canon MV890. It has only Firewire as its output to PC, with a separate output to TV. I think I will get a Firewire to USB adapter. Thanks for the replies, JoeTalkWork 16:55, 9 June 2010 (UTC)[reply]
Yes, unfortunately, it has no USB or memory card. Your only other option would be to use a digital recorder to record from the DV output to a DVD or hard drive that you could read on your PC. Dbfirs 17:28, 9 June 2010 (UTC)[reply]

Program to Find Primes in C++

I have created a program in C++ for finding prime numbers. It is a very simple program, so I think you all can understand the gears and wheels of it. I was able to find the first 1000 primes in a little less than 1 second. Please tell me if there is a better way to find prime numbers; just tell me the way and I will program it myself. Do people find larger and larger primes only this way?

#include <cstdlib>
#include <iostream>
using namespace std;

int main(int nNumberofArgs, char* pszArgs[])
{
    unsigned numberOfNumbers;
    cout << "Hey Donkey!! Type the number of prime numbers to be printed and press Enter key : ";
    cin >> numberOfNumbers;

    unsigned subjectNumber = 2;   // candidate currently being tested
    unsigned printedNumbers = 1;  // how many primes printed so far
    unsigned tester = 1;          // trial divisor
    unsigned hit = 0;             // how many divisors found

    while (printedNumbers <= numberOfNumbers)
    {
        // count every divisor of subjectNumber, from 1 up to itself
        hit = 0;
        tester = 1;
        while (tester <= subjectNumber)
        {
            if (subjectNumber % tester == 0)
            {
                hit++;
            }
            tester++;
        }
        // a prime is divisible only by 1 and itself, so exactly 2 hits
        if (hit <= 2)
        {
            cout << subjectNumber << "               " << printedNumbers << "\n";
            printedNumbers++;
        }
        subjectNumber++;
    }
    system("PAUSE");
}

harish (talk) 12:59, 9 June 2010 (UTC)[reply]

The basic way to find primes is the Sieve of Eratosthenes; it's a lot faster than what you're doing, because it stores all the primes it's previously discovered, and only tests candidates by dividing by these. Implementing that in C++ would generally mean you'd keep a store of all the primes you'd found and your inner loop would use these, rather than tester. (Incidentally your code does lots of pointless work too, because it doesn't terminate the inner loop when it sees a hit). Beyond the Sieve of Eratosthenes (which is simple to understand and implement, but not the fastest possible) there are things like the Sieve of Atkin, and many things listed at the Primality test article. Note that for some applications of prime numbers, people don't actually generate numbers that are definitely prime, but ones that are probably prime. -- Finlay McWalterTalk 13:19, 9 June 2010 (UTC)[reply]
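For reference, the sieve itself fits in a few lines. A minimal Python sketch, assuming a fixed upper bound n (at least 2); the same logic ports directly to C++ with an array of flags:
 def primes_up_to(n):
     # Sieve of Eratosthenes: cross off every multiple of each prime
     is_prime = [True] * (n + 1)
     is_prime[0] = is_prime[1] = False
     for p in range(2, int(n ** 0.5) + 1):
         if is_prime[p]:
             for m in range(p * p, n + 1, p):   # smaller multiples already crossed off
                 is_prime[m] = False
     return [i for i, flag in enumerate(is_prime) if flag]

 print(primes_up_to(100))   # [2, 3, 5, 7, 11, ..., 97]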
Harish, please do not cross-post the same question on several Reference desks.—Emil J. 13:30, 9 June 2010 (UTC)[reply]
Reading your code, there are a number of "low-hanging fruit" optimizations that you might want to experiment with, rather than switching algorithms to the Sieve or something else; and I like working on prime number detector optimizations because it's high school math and it's easy to think of small improvements and learn about optimization. (The Sieve of Eratosthenes is going to be faster than this method, ultimately; but it's not suitable for some applications — the Sieve has to start at 2, whereas harish's method can start calculating at any arbitrary positive integer; and harish's method just works, and is good for learning.) Some things I would consider if I were you:
  • Right now you're dividing the potential prime number by every single number lower than it, but you know that if you get a single "hit" (where the mod yields a 0) then it's already not prime, and you don't need to test that number anymore. Avoiding all those extra divisions would be useful.
  • Similarly, you know you don't have to do any more testing after you reach half the value of the number. (11, for example, isn't going to be evenly divisible by any number greater than 5.5.)
  • You also know up front that no even number is going to be prime, so you could avoid even testing any of these by starting at 3 and then incrementing by 2 each time.
  • You could do a little of what the Sieve of Eratosthenes does by keeping an array around of the primes you've already discovered, and only bother to divide each potential prime by the numbers in the array. This would speed up the evaluation by never dividing anything by 9 or 10, for example.
Comet Tuttle (talk) 16:34, 9 June 2010 (UTC)[reply]
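Putting those suggestions together (stop on the first hit, skip even numbers, divide only by stored primes), plus the square-root cutoff mentioned below, gives something like this Python sketch, which ports directly to C++:
 def first_k_primes(k):
     found = [2]
     n = 3
     while len(found) < k:
         limit = n ** 0.5
         # divide only by known primes up to sqrt(n); any hit ends the test
         if all(n % p for p in found if p <= limit):
             found.append(n)
         n += 2                       # even numbers are never tested
     return found

 print(first_k_primes(1000)[-1])      # the 1000th prime, 7919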
Read The Art of Computer Programming Volume 2. Zoonoses (talk) 12:35, 10 June 2010 (UTC)[reply]
Not just half: stop at the square root of the number. That should cut down the number of tests by a lot. Also, I tried this once in Java, screwing around like you, and I'm not sure how much faster using a list is. I'm not sure if it's because I used a parameterized ArrayList, but it actually ran slower when I included it. 66.133.196.152 (talk) 09:14, 11 June 2010 (UTC)[reply]

UTF-8 and HTML

Hi, I've read UTF-8, Character encodings in HTML and Unicode and HTML, but I'm still kind of confused. When I save an HTML document as UTF-8 and include <meta http-equiv="Content-Type" content="text/html; charset=UTF-8"> in its source, do I still have to use &somecodehere; for special characters like German umlauts (&uuml; = ü, etc)? -- 109.193.27.65 (talk) 12:59, 9 June 2010 (UTC)[reply]

No, the UTF-8 encoding should take care of it; the HTML4 standard says "As some character encodings cannot directly represent all characters an author may want to include in a document, HTML offers other mechanisms, called character references, for referring to any character." which I take to mean that, if the character encoding does do what you want, it's your choice as the page author. But there's always the worry of old browsers and wonky search engines that don't understand UTF-8 properly... -- Finlay McWalterTalk 13:09, 9 June 2010 (UTC)[reply]
Naturally, that is assuming you really do represent the Ü correctly in UTF-8; that means you've verified that the text editor with which you edit the HTML file honours the UTF-8 encoding properly, and any database that you store the character data in (e.g. for a blog posting) also honours the encoding correctly. -- Finlay McWalterTalk 13:39, 9 June 2010 (UTC)[reply]
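A minimal page for testing this yourself, assuming the file really is saved as UTF-8:
 <!-- save this file as UTF-8; the raw characters should render without entities -->
 <html>
 <head><meta http-equiv="Content-Type" content="text/html; charset=UTF-8"></head>
 <body><p>Grüße: ü, ä, ö written directly, no character references needed</p></body>
 </html>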

Google Chat

How many MB does one hour of each of the following consume?

  1. One hour of pure typing chat using GTalk.
  2. One hour of voice chat using GTalk.
  3. One hour of video chat using GTalk. —Preceding unsigned comment added by 117.193.155.79 (talk) 15:47, 9 June 2010 (UTC)[reply]

This question is mostly unanswerable, since it depends on what happens during that one hour for each. In particular, if the two parties type at a constant rate of 120 words per minute, the data usage is going to be significantly different from if the two parties type at an average of 5 words per minute (whether because they are very slow typists or, more likely, because they aren't constantly typing but reading and replying and perhaps doing other things in between). The data usage will still be small, but there could easily be an order of magnitude difference. I don't know if GTalk varies the voice codec, but even if it doesn't, many modern voice codecs have silence detection and other features, which means the rate will generally vary too. Video is probably the worst. I'm pretty sure GTalk, as with many video conferencing utilities, varies the quality automatically based on several things, including available bandwidth and potentially computer speed and camera resolution + frame rate. If you both have symmetrical 100 Mbit connections with 720p video cameras and very fast computers, you're likely to have far higher bandwidth, and therefore data usage, than if you both have 256k/128k connections with a typical VGA camera on a netbook. P.S. In relative terms, the voice will always be a lot more than the text, and the video quite a bit higher than the voice. Nil Einne (talk) 17:05, 9 June 2010 (UTC)[reply]

Surfing with Python

How do you say in Python:

  • Go to URL x and open it in a new Firefox tab.
  • Push the JavaScript button on page x.
  • Fill in the field on page x and push the OK button.

--Quest09 (talk) 17:28, 9 June 2010 (UTC)[reply]

I think what you're looking for is automation. Take a look at this thread. Indeterminate (talk) 17:51, 9 June 2010 (UTC)[reply]
If you instead want a Python program that does the same thing directly, rather than a Python program that takes control of Firefox, you need to know more about what is actually happening behind the scenes. How are the fields sent to the server? (POST (HTTP)? GET (HTTP)? AJAX? JSON?) What format are they in? What other parameters (user-agent? HTTP cookies? Referrer (HTTP)? etc.) does it pass? After you've figured all that out, you can write a simple Python script that does exactly what Firefox does behind the scenes, using httplib and urllib in Python. --antilivedT | C | G 05:37, 10 June 2010 (UTC)[reply]
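A sketch of that direct approach, shown with Python 3's urllib.request (the successor to the urllib/httplib modules mentioned above); the URL and field names are made up for illustration:
 import urllib.parse, urllib.request

 # what the browser would send when you fill the field and press OK
 data = urllib.parse.urlencode({'field': 'some value', 'ok': 'OK'}).encode()
 req = urllib.request.Request('http://example.com/x', data=data,
                              headers={'User-Agent': 'Mozilla/5.0'})
 with urllib.request.urlopen(req) as resp:
     print(resp.status, resp.read(200))   # status code and first 200 bytes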
Alternatively, from the perspective of an end-user with limited programming experience, something like AutoHotkey might be more useful. Just record a macro that fills in each field and clicks the buttons, selecting them via Tab. Then loop said macro. Riffraffselbow (talk) 07:01, 10 June 2010 (UTC)[reply]
Follow-up: and what is the equivalent to autohotkey for Linux?--Quest09 (talk) 15:57, 10 June 2010 (UTC)[reply]

email info

Please help me to rename my secondary email, in simple terms, as I am pretty new at computers. Windows Live help said to go to your account summary page with your Windows Live ID, and I don't know how to do that. —Preceding unsigned comment added by Saltyzoom (talkcontribs) 18:51, 9 June 2010 (UTC)[reply]

Your question is unclear. You don't say what it is that needs you to rename your secondary email, or which company or organization you have this account with. Astronaut (talk) 01:15, 11 June 2010 (UTC)[reply]

Internet

I'm sure this has been asked before; in fact I'm certain of it, because I remember seeing a thread here, but I can't find it in the archives. Anyway: is there a program for Windows that can monitor every webpage the computer visits and save all the pages, images, etc.? Sort of like a web spider, except it only saves what you browse. Sort of like building an offline internet of every page you've ever visited. 82.43.89.11 (talk) 21:18, 9 June 2010 (UTC)[reply]

Maybe Wikipedia:Reference_desk/Archives/Computing/2009_June_14#Search_whole_site and Wikipedia:Reference_desk/Archives/Computing/2007_May_16#saving_web-pages_in_one_file are the questions you remembered seeing here? -- 109.193.27.65 (talk) 22:09, 9 June 2010 (UTC)[reply]

June 10

yahoo messenger archive decoding

There are several programs on the internet that offer to magically decode Yahoo .dat archives. The problem is that, in this case, I don't want to put too much trust in some program whose source I don't know. Is it easy in most languages to decode these? Does anyone know a basic algorithm for decoding? Thanks!

63.26.255.89 (talk) 02:09, 10 June 2010 (UTC)[reply]
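For what it's worth, the usual claim about this format (which you should treat as an assumption and verify against one of those open-source decoders before trusting it) is that the message text is simply XORed with your own Yahoo ID, repeated. If that's right, decoding is a couple of lines in any language; a Python sketch:
 def decode(payload, yahoo_id):
     # XOR each byte with the corresponding byte of the ID, cycled
     # (assumption: this is the whole "encryption" -- verify first)
     key = yahoo_id.encode()
     return bytes(b ^ key[i % len(key)] for i, b in enumerate(payload))

 # usage (hypothetical): decode(raw_message_bytes, 'myyahooid')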

Editing subtitles in .srt files

I want to know whether there is any coding that I can put in an .srt subtitle file that specifies its colour and font (for example, pink and Comic Sans MS). I know that I can manually specify the font for each line by using standard HTML coding (like <font face="Comic Sans MS"> text </font>), but I'd like to know if there is anything that I can put at, say, the beginning of the file that applies the said font to all the lines in the subtitle. Can anyone help me? Thanks in advance! 117.194.227.30 (talk) 05:18, 10 June 2010 (UTC)[reply]

have you tried placing it like so? Riffraffselbow (talk) 07:07, 10 June 2010 (UTC)[reply]
 <font=Futura>
 Juliet: Romeo, Romeo, wherefore art thou Robot Romeo
 Robot Romeo: Bleep Bloop!
 Juliet: Man, this guy is a terrible writer
 </font>

Yeah, I tried it like that. But there are lines of coding specifying the time and the serial number in between. I've noticed that if I type the <font=Futura> at the beginning and the </font> at the end of the whole file, with even the timings (00:00:00,266 --> 00:00:07,138, for example) in between, then Media Player Classic and all my other players cease to recognise the file as a subtitle file... This has forced me to painstakingly copy-paste the font-style code at the beginning of each and every subtitle line... It's a seemingly endless task... You forgot to take into account the fact that "Robot Romeo: Bleep Bloop!" should come some time after Juliet speaks... 117.194.226.128 (talk) 08:25, 10 June 2010 (UTC)[reply]

Do you have access to a scripting language like Perl or Python? Mac and Linux machines tend to come with them pre-installed. Scripting languages are great for otherwise mind-numbingly repetitive textual tasks. Paul (Stansifer) 18:51, 10 June 2010 (UTC)[reply]
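(Python is also a free download for Windows XP, for what it's worth.) The whole job is a short script: wrap every text line in the tag while leaving index numbers, timing lines and blank lines alone. A sketch, with made-up file names:
 TAG = '<font face="Comic Sans MS" color="pink">%s</font>\n'

 out = []
 with open('movie.srt') as src:
     for line in src:
         s = line.strip()
         # timing lines contain '-->'; index lines are bare numbers
         if s and '-->' not in s and not s.isdigit():
             out.append(TAG % s)
         else:
             out.append(line)
 with open('movie_styled.srt', 'w') as dst:
     dst.writelines(out)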

I don't. I use Windows XP.... :( Pity... I had to modify more than 1000 lines of text wholly by hand..... I think I'm dying... 117.194.227.198 (talk) 07:29, 11 June 2010 (UTC)[reply]

HTTP referrers and site tracking

Is it correct that HTTP referrers are only sent when you click on links? Disregarding cookies, is it possible for a site to track visits and departures if someone uses the address bar? For example, if you visit Site A > Site B > Site C by typing the addresses into the URL bar, is it possible for Site B to know about A or C? 24.6.21.207 (talk) 05:46, 10 June 2010 (UTC)[reply]

AFAIK, no browser will send a referrer when you type something in the address bar (nor when you visit a bookmark or whatever). They have no reason to: there's no reason to think you were referred to the new address by the address you are currently at. It could be completely unrelated, and probably often is. (When I decide to visit http://www.nzherald.co.nz after visiting Wikipedia, it's usually because I want to check the news, not because of something I saw on the reference desk; similarly, if I go to en.wikipedia.org, look at it, and then add WP:RDS, it's not because I was referred to the RD from the main page but because I visited the main page and then decided to visit the RDS.)
Note that sites are totally dependent on browsers in this regard; they can only 'track' referrers (if they do) because browsers send them. So if you don't want sites to get referrers, just disable them in your browser (or, if your browser doesn't support that, get a different browser and/or a tool that will strip them for you). Be aware, of course, that this will break some downloads and other things. You could perhaps find a browser or tool which will prevent cross-site referrers but allow intra-site ones, which will probably reduce the problem but is unlikely* to eliminate it.
*If it's strict and thinks of www.microsoft.com as different from www2.microsoft.com, then obviously it won't help in those cases. But if it's not strict and thinks of them as the same, then it will also treat evilspy.cjb.net as the same as yourboyfriend.cjb.net, even though these could be different servers run by completely unrelated people. It's even worse if it's stupid and treats www.google.co.nz the same as www.nzherald.co.nz. And of course, if the server redirects you to an IP, it will never work. Our article does mention that just sending the base URL often works.
Nil Einne (talk) 10:15, 10 June 2010 (UTC)[reply]

getting error message with new interface

Webpage error details

User Agent: Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0; .NET CLR 1.1.4322; .NET CLR 2.0.50727; InfoPath.2; .NET CLR 3.0.4506.2152; .NET CLR 3.5.30729; MS-RTC LM 8) Timestamp: Thu, 10 Jun 2010 08:44:34 UTC


Message: 'wgCollapsibleNavForceNewVersion' is undefined
Line: 4 Char: 15 Code: 0
URI: http://bits.wikimedia.org/w/extensions/UsabilityInitiative/Vector/Vector.combined.min.js?281a —Preceding unsigned comment added by 85.115.3.118 (talk) 08:45, 10 June 2010 (UTC)[reply]

Google

Resolved

What is the http://google.ca main page theme about? I can't find it by hovering over the logo like you usually do for their fancy logos. 82.43.89.11 (talk) 09:09, 10 June 2010 (UTC)[reply]

Judging by the fact that the little "change background image" link in the bottom asks you to sign in, it may just be a campaign for iGoogle. But I don't know. {{Sonia|ping|enlist}} 09:15, 10 June 2010 (UTC)[reply]
This blog post by Marissa Mayer says that they are indeed showing off Google's personalisation features. --Kateshortforbob talk 10:10, 10 June 2010 (UTC)[reply]

Does anyone know how to switch it off and get it back to the boring plain old white that we used to have? I personally prefer the plain old white page because it doesn't have JavaScript occasionally crashing the page, etc. 'Remove background image' just puts back the hideous one they supply you with. --KägeTorä - (影虎) (TALK) 11:37, 10 June 2010 (UTC)[reply]

It should be gone again by tomorrow, if you don't mind waiting. If not, you can use the secure site https://www.google.com or click the "Go to Google.com" link which should appear under the search bar of your country-specific Google page. Both of these will mean that your results won't be specific to your country, though. (I'm beginning to find it annoying too, and I don't even have a "Remove background" link, even though I have an account... oh, now I do, of course...) --Kateshortforbob talk 12:05, 10 June 2010 (UTC)[reply]
Nope, clicking on the 'Go to Google.com' link just takes me to Google.com with the same background stuff on it. Also, how do we know this will be gone by tomorrow? I personally doubt it will be, because it's linked to my Picasaweb page (i.e. I am given the opportunity to replace the original white page with a photograph from my own Picasaweb account). I wish Google would stop bringing out new features and not giving people the option to opt out of them until there's been an uproar and publicity and so on. --KägeTorä - (影虎) (TALK) 13:22, 10 June 2010 (UTC)[reply]
Testing out poorly-thought-out new features on non-logged in users - well, thank goodness Wikipedia never does anything like that ! Gandalf61 (talk) 13:31, 10 June 2010 (UTC)[reply]
The Official Google Blog says that it's a 24-hour demonstration of the new feature. Lifehacker has instructions for switching back to plain white. (Note that the ability to set a background, as you see in Picasaweb, is permanent. Turning it on for everyone is a 24-hour phenomenon.) -- Coneslayer (talk) 13:35, 10 June 2010 (UTC)[reply]
The Google.com link works for me but I have read reports that there is a bug in international versions which is causing the remove background image to not work, although no reliable source for that. Mayer's blog post says

To provide you with an extra bit of inspiration, we‘ve collaborated with several well-known artists, sculptors and photographers to create a gallery of background images you can use to personalize your Google homepage. Included in the collection are photographs of the works of Dale Chihuly, Jeff Koons, Tom Otterness, Polly Apfelbaum, Kengo Kuma (隈研吾), Kwon, Ki-soo (권기수) and Tord Boontje, as well as some incredible photos from Yann Arthus-Bertrand and National Geographic. We’ll be featuring these images as backgrounds on the Google homepage over the next 24 hours.

This page works for Google UK for me. Changing en-GB to the appropriate ISO 639-1 code is getting me the localised pages for other languages. Does that work for you? --Kateshortforbob talk 13:41, 10 June 2010 (UTC)[reply]
Incidentally, the White theme looks really weird to me - the text is shadowed and almost illegible (not that I really need to read it).--Kateshortforbob talk 13:44, 10 June 2010 (UTC)[reply]
I'm finding that it doesn't matter what picture I set it to (incidentally, I can't find the white one in the list that Lifehacker mentions), as every time I go back to the homepage there is always another picture. Considering the extremely small amount of time one would normally spend on a Google homepage before using it for what it's supposed to be used for, having a button that changes the background image for that session only seems like a really pointless thing. --KägeTorä - (影虎) (TALK) 14:14, 10 June 2010 (UTC)[reply]


Thanks everyone. It seems to be gone now anyway 82.43.89.11 (talk) 20:52, 10 June 2010 (UTC)[reply]

Program to help with world cup sweepstakes

I am tasked with doing a sweepstakes at work for the World Cup and am wondering if there is any online program that would let me input the staff members' names and the teams playing in the World Cup, and randomly assign a team to a name. There will be 4 separate draws. I could write a program to do this, but I was checking to see if there was one around to save me the hassle of writing one. Thanks. Mo ainm~Talk 12:00, 10 June 2010 (UTC)[reply]

The random.org list randomizer does what I think you want. If not, they have a lot of other services. Paul (Stansifer) 15:12, 10 June 2010 (UTC)[reply]
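If you would rather write your own after all, it only takes a few lines of Python. A minimal sketch, assuming placeholder lists of names and teams (adapt the lists and the number of draws to suit):

 import random

 names = ["Alice", "Bob", "Carol", "Dave"]        # placeholder staff names
 teams = ["Brazil", "Spain", "Germany", "Italy"]  # placeholder team names

 for draw in range(1, 5):                         # four separate draws
     shuffled = random.sample(teams, len(teams))  # a fresh random order each time
     print(f"Draw {draw}:")
     for name, team in zip(names, shuffled):
         print(f"  {name}: {team}")

random.sample returns a new shuffled copy for each draw, so no team is handed out twice within one draw.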

iPhone

My iPhone used to 'ping' whenever an email reached my Apple MacPro laptop; it no longer does that. Does anyone know why, please?--Artjo (talk) 16:02, 10 June 2010 (UTC)[reply]

I'm not sure exactly what you are asking, but I have a few ideas: you could have turned notifications for email off; the account may not be set for push, and may instead be set to check for updates every x minutes (or manually); you may have set the phone to vibrate or mute; or you aren't getting a reliable data connection. Caltsar (talk) 18:16, 11 June 2010 (UTC)[reply]

Weird hidden text in webpage, not in source, but apparent after copying and pasting?

At this eBay listing, there is an item description that reads:

DESCRIPTION

The Bolle face shield has a blue brow guard that protects the top of the head and a polycarbonate lens to protects against impact from flying particles. It’s easy to adjust the headgear and to change the polycarbonate visor.

The Bolle face shield provides: simplicity, comfort and security.


Face shield - polycarbonate flip-up Dimensions face shield - 220mm x 390mm Weight - 254 g

However, if you copy and paste it, there's an extra line:

DESCRIPTION Replacement visors for Bolle BL20 Browguard

The Bolle face shield has a blue brow guard that protects the top of the head and a polycarbonate lens to protects against impact from flying particles. It’s easy to adjust the headgear and to change the polycarbonate visor.

The Bolle face shield provides: simplicity, comfort and security.


Face shield - polycarbonate flip-up Dimensions face shield - 220mm x 390mm Weight - 254 g

The line is not present in the source, so I don't understand where it's coming from?? 92.25.100.105 (talk) 18:22, 10 June 2010 (UTC)[reply]

I see the line in the source:
<DIV class=tabpanelcontent style="DISPLAY: none">Replacement visors for Bolle BL20 Browguard</DIV>
Comet Tuttle (talk) 18:51, 10 June 2010 (UTC)[reply]
That's really strange... I don't find it via Firefox 3.7 but when I use IE 8, it opens the source in Notepad++ and there it's visible. 92.25.100.105 (talk) 20:28, 10 June 2010 (UTC)[reply]
For what it's worth, I saw it using Firefox 3.6.3. Comet Tuttle (talk) 21:43, 10 June 2010 (UTC)[reply]

Maybe it's done through JavaScript as outlined in this article: http://www.techdirt.com/articles/20100601/0047399633.shtml —Preceding unsigned comment added by 84.157.75.7 (talk) 19:01, 10 June 2010 (UTC)[reply]

I see it in the source using Chrome, but it's not there in the copy and paste. In IE the text doesn't display, but it does appear in the copy and paste. I doubt it's done in the way outlined above; I think it's just browser differences, though I can't understand why Firefox and IE would show it. 66.133.196.152 (talk) 09:51, 11 June 2010 (UTC)[reply]
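For what it's worth, that behaviour is consistent with the source Comet Tuttle quoted: text inside an element styled display: none never renders, but it is still in the page, so some browsers include it in a copied selection. If anyone wants to hunt for such text themselves, here is a rough Python 3 sketch using only the standard library; it keeps a simple depth counter and will be confused by unclosed tags, so treat it as illustrative:

 from html.parser import HTMLParser

 class HiddenTextFinder(HTMLParser):
     """Collects text found inside elements styled 'display: none'."""
     def __init__(self):
         super().__init__()
         self.depth = 0     # nesting depth inside hidden elements
         self.found = []

     def handle_starttag(self, tag, attrs):
         style = dict(attrs).get("style", "").lower().replace(" ", "")
         if self.depth or "display:none" in style:
             self.depth += 1

     def handle_endtag(self, tag):
         if self.depth:
             self.depth -= 1

     def handle_data(self, data):
         if self.depth and data.strip():
             self.found.append(data.strip())

 page = ('<div class="tabpanelcontent" style="DISPLAY: none">'
         'Replacement visors for Bolle BL20 Browguard</div>'
         '<p>Visible description here.</p>')
 finder = HiddenTextFinder()
 finder.feed(page)
 print(finder.found)   # ['Replacement visors for Bolle BL20 Browguard']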

How to check what other people can see of my Facebook profile?

Is there a website which will present to me what other people can see of my profile, dependent on their relationship (friend, friend of friend, public), exactly as they see it? (It could conceivably show me the information only available to friends by letting me befriend a special account, if Facebook allows such practises.) --92.25.100.105 (talk) 18:26, 10 June 2010 (UTC)[reply]

Navigate to the Privacy Settings page. Select the "Customize settings" link towards the bottom of the "Sharing on Facebook" section. Click the "Preview My Profile" button located at the top right of the page. When you have reached this preview mode, you will automatically see how your profile looks to most people on Facebook. To view how your profile looks to a specific friend, just type his or her name in the open field. Mo ainm~Talk 18:49, 10 June 2010 (UTC)[reply]

June 11

problem with windows media player 11

Hey guys, I'm having a problem with Windows Media Player 11. I have Vista Home Premium, and WMP11 worked fine until recently; now when I play some songs it won't play them and says "your computer is low on memory. quit other programs, and then try again." However, they work fine in iTunes. Also, my laptop recently got back from Acer pretty well brand new, so I have no idea why this problem is happening. Alive99 (talk) 00:02, 11 June 2010 (UTC)[reply]

It sounds obvious that it is running out of memory. A few things come to mind:
  1. You say you got it back from Acer, so presumably that was a repair. Maybe they didn't re-seat the memory correctly, so the laptop now only has half the memory it should, or maybe some of the memory is faulty (a quick way to check what Windows reports is sketched below this list).
  2. Maybe you have not run out of real memory, but have run out of virtual memory. This can happen if your disk is very full.
  3. Maybe you are running some resource hog of a program. Games and video editing make high demands on a PC's resources. But maybe it is some program you cannot see easily, that starts automatically ... perhaps malware.
Astronaut (talk) 01:09, 11 June 2010 (UTC)[reply]
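On points 1 and 2: if you want to see exactly how much physical memory and page file Windows reports, the documented Win32 call GlobalMemoryStatusEx will tell you. A Windows-only Python sketch using just the standard library; run it and compare the physical figure with what the laptop shipped with:

 import ctypes

 class MEMORYSTATUSEX(ctypes.Structure):
     _fields_ = [
         ("dwLength", ctypes.c_ulong),
         ("dwMemoryLoad", ctypes.c_ulong),
         ("ullTotalPhys", ctypes.c_ulonglong),
         ("ullAvailPhys", ctypes.c_ulonglong),
         ("ullTotalPageFile", ctypes.c_ulonglong),
         ("ullAvailPageFile", ctypes.c_ulonglong),
         ("ullTotalVirtual", ctypes.c_ulonglong),
         ("ullAvailVirtual", ctypes.c_ulonglong),
         ("ullAvailExtendedVirtual", ctypes.c_ulonglong),
     ]

 status = MEMORYSTATUSEX()
 status.dwLength = ctypes.sizeof(MEMORYSTATUSEX)
 ctypes.windll.kernel32.GlobalMemoryStatusEx(ctypes.byref(status))
 print(f"Physical RAM: {status.ullTotalPhys / 2**20:.0f} MB "
       f"({status.ullAvailPhys / 2**20:.0f} MB free)")
 print(f"Page file:    {status.ullTotalPageFile / 2**20:.0f} MB "
       f"({status.ullAvailPageFile / 2**20:.0f} MB free)")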

Running out of space on a RAID

I have a computer running Mac OS X 10.5. It has a 500 GB RAID with only 11 GB free. The system drive is 250 GB with only 16 GB free. Could these be a reason for a system-wide slowdown and reduction in performance? Dismas|(talk) 01:35, 11 June 2010 (UTC)[reply]

I'm skeptical. That's a lot of space compared to the amount of RAM in your PC. First thing I'd look for is any new software you've installed or updated. If that's not the issue, have you tried defragging? Comet Tuttle (talk) 04:07, 11 June 2010 (UTC)[reply]

Comcast Cable Internet Speeds

I've been considering upgrading from the Comcast Performance package (12 Mbps down) to Blast! (16 Mbps). How big an impact will this have on the time it takes me to actually download large files in practice; in short, will I notice an appreciable difference in speed on a regular basis? The cost to upgrade is an extra $10 per month; I just want to make sure this is worth it. Thank you:) 66.202.66.78 (talk) 05:39, 11 June 2010 (UTC)[reply]

Well, even in theory you're only adding 4 Mbps, under very ideal conditions—namely, that the server you are downloading from is capable of giving it to you at max speed anyhow, and that your line conditions are such that you are really getting all of that data at the top speed. For a situation where the former condition is definitely met (like torrenting a file with a huge number of seeders, which seems to be able to max out one's down bandwidth pretty well in my experience), a 10 GB file would download in about <s>12</s> 114 minutes at 12 Mbps, and in about <s>9.75</s> 85 minutes at 16 Mbps (there is some rounding here, but the calculation is just converting the GB into bits, dividing by the speed in bits per second, and then converting those seconds into minutes—all pretty easy with Google's unit conversion feature)—so a total difference of about half an hour in such a situation. That's under ideal conditions of the connection being maxed out all the way. Is that worth an extra $10 a month, or $120 a year? Personally I suspect not, but that's a value judgment, and it depends heavily on your own usage habits (for me, as someone who downloads large files only rarely, it wouldn't be worth it). But perhaps there are additional benefits of a slightly higher down rate that I am not thinking of. --Mr.98 (talk) 16:04, 11 June 2010 (UTC)[reply]
Your math is off:
 10,000,000,000 bytes × 8 bits per byte = 80,000,000,000 bits
 80,000,000,000 bits ÷ 16,000,000 bits per second = 5,000 seconds
And 5,000 seconds is about 83 minutes. Do the same math for the 12 Mbps you have now, and you get about 111 minutes; so you save roughly 28 minutes when downloading a 10 GB file. I save roughly another half hour at 24 Mbps.
I have the 24 Mbps Ultra package from Comcast. Web pages don't load any faster, but streaming video and downloading large files are much faster. It takes about three seconds for the connection to speed up completely. Since most simple web pages finish loading in under that time, it only impacts streaming video (e.g., Netflix, YouTube HD) and downloading large files from fast servers (e.g., updates from Microsoft, trial software, torrents, Rapidshare, etc.).--Best Dog Ever (talk) 17:32, 11 June 2010 (UTC)[reply]
(Assuming we are using "real" GB and not hard-drive-manufacturer GBs, 10 GB is not 10,000,000,000 bytes; it's 10,737,418,240 bytes, and 16 Mb is 16,777,216 bits. But you're right, I missed out converting GB into bits, which is the key part.) --Mr.98 (talk) 19:01, 11 June 2010 (UTC)[reply]
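For anyone who wants to redo the arithmetic with their own numbers, a small Python sketch (decimal units, as in the worked example above; pass binary=True if you prefer GiB-style gigabytes):

 def download_minutes(size_gb, speed_mbps, binary=False):
     """Idealised download time, ignoring protocol overhead."""
     bits = size_gb * (2**30 if binary else 10**9) * 8
     return bits / (speed_mbps * 10**6) / 60

 for mbps in (12, 16, 24):
     print(f"10 GB at {mbps} Mbps: {download_minutes(10, mbps):.0f} minutes")
 # 10 GB at 12 Mbps: 111 minutes
 # 10 GB at 16 Mbps: 83 minutes
 # 10 GB at 24 Mbps: 56 minutes

Real transfers come in below these figures, since they assume the link is saturated the whole time.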

I just installed and ran Windows 7. But this version of Windows gives me lotta confusion.

  1. Is a full format (not a quick one) harmful to the HDD? I have heard many of my friends say that a partition can get corrupted if it is full-formatted too many times. I'm not sure about it, but I have broken a few floppy disks in the past by formatting them to get rid of viruses.
  2. When installing, it asked me to choose whether to install the x86 or x64 version of Windows 7. My laptop is a crappy one with a Celeron 560 @ 2.13 GHz and 768 MB of RAM. I thought my CPU was 32-bit, so I chose to install x86. I used to think the x64 version only works with a 64-bit CPU, but a few hours ago I tried installing x64, expecting an error message, and, unbelievably, it works! I'm running 64-bit IE8 now. So which version should I use?
  3. Do 64-bit programs run faster on an x64 OS than 32-bit ones do on an x86 OS?
  4. I still want to run an additional OS (Windows XP) on my computer to play games; Windows 7 takes lotta RAM, which makes most of my favorite games unplayable. But I surfed the web and found many people complaining that their Windows 7 no longer works after they install XP. To get both of them working, I must install XP first, then 7, right? (On different partitions, of course.)

Any help would be much appreciated. -- Livy the pixie (talk) 11:33, 11 June 2010 (UTC)[reply]

1. No.
2. The 560 is a Merom core, which means it features ia64. But I'd still run the 32 bit version; I don't think you'll see any advantage in 64.
3. It depends on the program. 64 bit programs are a bit less cache-efficient and a bit more memory-hungry (big ints, big pointers). A few programs actively benefit from the wide words, but I doubt very much you'll use any of them. If you did lots of media encoding, and you knew the codecs you used had 64 bit versions, then maybe that'd be a consideration.
  4. Windows 7's compatibility is pretty impressive, and very few things that work on XP won't work on 7. In the unlikely event that you do find something that just plain won't, install XP in a virtual machine inside Windows 7. Dual-boot is a bad idea.
-- Finlay McWalterTalk 11:47, 11 June 2010 (UTC)[reply]
I'm not convinced that running XP in a virtual machine on a computer with only 768 MB of RAM is a good idea, especially since the OP is concerned about gaming performance. (PS: IA-64 is Itanium; the 64-bit successor to x86 is variously called x86-64, x64, amd64, EM64T, ...) -- Coneslayer (talk) 11:52, 11 June 2010 (UTC)[reply]
The amount of RAM you have is below the minimum requirement for Windows7 (see Windows 7#Hardware requirements). Even if you updated that, your CPU is pretty ancient, and is near the bottom of the acceptable range for Windows 7 (if you must, run Microsoft's Upgrade Advisor on the laptop). I wouldn't install Windows 7 on this machine. -- Finlay McWalterTalk 11:54, 11 June 2010 (UTC)[reply]

I know Microsoft has offered something called XP Mode, but it requires an additional 1 GB of RAM and I'm running out of it. I'm waiting for a new laptop next year, so I don't want to upgrade my PC at the moment. To dual-boot, I have to install the old version of Windows first, then the newer one after, right? I remember seeing it somewhere on the Microsoft homepage, but that article was for Windows Vista; I'm not sure whether it is still correct for Windows 7. -- 123.16.22.244 (talk) 11:57, 11 June 2010 (UTC)[reply]

XP mode just means it comes with a VM included. Coneslayer is right, you don't have enough memory for it. But you don't have enough memory for Windows 7 anyway. -- Finlay McWalterTalk 11:59, 11 June 2010 (UTC)[reply]
... (after edit conflicts) and, to add to Finlay's answer to 1, formatting any disk is just the same as writing to it except that a low-level format re-writes the sector markers in each track, thus erasing all data. If you format a whole hard drive, this will delete any partitions, though some formatting software will protect you from this possible error. Dbfirs 12:04, 11 June 2010 (UTC)[reply]

With 768 MB of RAM I can still run some basic programs in Windows 7, such as IE, jetAudio, NetBeans, and Visual C#. I have had Windows 7 installed for two weeks without any problem, but I need XP for gaming; I installed Windows 7 only to experience some of the new features. Um, what must I do to dual-boot now? -- 123.16.22.244 (talk) 12:05, 11 June 2010 (UTC)[reply]

I don't have any personal experience dual-booting them, but here's a guide to dual-booting Windows 7 with either XP or Vista. It assumes you start with XP installed; I don't know if that's the only way to do it, but I would expect most guides will make that assumption, since people generally start with the older OS installed and add the newer Windows 7. -- Coneslayer (talk) 12:46, 11 June 2010 (UTC)[reply]
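One footnote on question 2 above: if you ever want to check what the machine and OS actually report, two lines of Python will do it. Note that platform.machine() describes the operating system while sys.maxsize reflects the interpreter build, so the two can legitimately differ:

 import platform, sys

 print(platform.machine())    # e.g. 'AMD64' on 64-bit Windows, 'x86' on 32-bit
 print(sys.maxsize > 2**32)   # True only under a 64-bit Python build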

Ubuntu + Vista DualBoot

I recently had to reinstall Windows Vista. I suspect it had something to do with Vista believing my disk was corrupt or whatever after I put Ubuntu on as a dual-boot (original question + answers here). Anyway, Vista is reinstalled and works fine, only now I am unable to get into Ubuntu, as the menu does not appear to invite me to choose an operating system to boot into. What should I do now? Also, how can I prevent my previous problem from happening again? TIA! --KägeTorä - (影虎) (TALK) 15:26, 11 June 2010 (UTC)[reply]

You need to reinstall GRUB. Windows normally overwrites the GRUB loader when it installs. Here's one way to do it. -- kainaw 16:46, 11 June 2010 (UTC)[reply]
Cheers. I actually found that page before you answered, and have come into Ubuntu (using the LiveCD) to do just that, but all I'm getting is 'sudo: grub: command not found' after typing 'sudo grub' in Terminal. A Google search of this gives me some relevant pages, but mostly they are fairly old forum posts and are only half-answered. I'm totally lost. --KägeTorä - (影虎) (TALK) 17:14, 11 June 2010 (UTC)[reply]
You may need to say sudo /usr/sbin/grub explicitly. -- Finlay McWalterTalk 17:18, 11 June 2010 (UTC)[reply]
Right, well, in the end, I actually did what it says on this page and it seems to be OK. I haven't gone back into Vista yet, so I don't know what's going to happen, but anyway, for future reference I'm posting the answer here. Cheers. --KägeTorä - (影虎) (TALK) 17:33, 11 June 2010 (UTC)[reply]
Another answer, my friend, is not blowing in the wind, but rather attached to your older question here: Wikipedia:Reference_desk/Archives/Computing/2010_June_5#Win_Vista_Not_Booting. And I know that because I was the one to attach that answer to your question. ;-) -- 109.193.27.65 (talk) 17:40, 11 June 2010 (UTC)[reply]

sql server 2005 se

Hi! If I try to set a database to single-user mode via the GUI it works; if I try via code it doesn't, yet the Profiler traces the same statement in both cases! Thank you in advance --217.194.34.103 (talk) 16:49, 11 June 2010 (UTC)[reply]

Most "durable" file compression format

What file compression format(s) provide the best "durability" — that is, the most error correction or ability to uncompress contents even if the media is degraded or the archive is missing some bits. I was curious if newer formats like XAR or 7z offer any benefits in this regard compared to "old school" RAR, ZIP, TAR.GZ, TAR.BZ2. --70.167.58.6 (talk) 18:41, 11 June 2010 (UTC)[reply]

Those formats that use solid compression are less durable (that includes 7zip and RAR, and in practice those that compress a TAR), as they don't reset the compression dictionary when processing each new file. So if one file's worth of data is damaged, subsequent files are also unrecoverable. With those that do reset the dictionary for each file (ZIP), only the damaged file itself should be entirely unrecoverable (depending on the nature of the damage, naturally). I can find very little information about how XAR works in this regard. -- Finlay McWalterTalk 19:02, 11 June 2010 (UTC)[reply]
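Finlay's point is easy to demonstrate with ZIP, where each member is compressed independently: corrupt one member and the others still extract. A standard-library Python sketch; the flipped byte range is chosen so that it lands inside the first member's data, so treat the exact offsets as illustrative:

 import io, os, zipfile

 buf = io.BytesIO()
 with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
     zf.writestr("first.bin", os.urandom(5000))    # two independent members
     zf.writestr("second.bin", os.urandom(5000))

 damaged = bytearray(buf.getvalue())
 for i in range(100, 110):    # flip ten bytes inside first.bin's data
     damaged[i] ^= 0xFF

 with zipfile.ZipFile(io.BytesIO(bytes(damaged))) as zf:
     for name in zf.namelist():
         try:
             zf.read(name)
             print(name, "extracted fine")
         except Exception as exc:
             print(name, "unrecoverable:", exc)

Running it prints a CRC failure for first.bin while second.bin extracts cleanly; in a solid 7z or RAR archive, the same damage would take out every file after the corruption point.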

Real-time post-processing audio from speakers

Hi all,

Does anyone know of any software that can do some form of real-time post-processing of the audio that goes to line out (i.e. to your speakers/headphones), such as audio filtering? A bit like Volume Control but more high-tech, capable of doing more complicated things like filtering out certain frequencies. I'm not interested in recording the sound, just in modifying it on the fly and sending the output to the speakers.

Sound originally aimed for speakers -> Filtering program -> Filtered sound to speakers

Thanks in advance, x42bn6 Talk Mess 19:40, 11 June 2010 (UTC)[reply]
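(If nothing ready-made turns up, the effect is straightforward to prototype in Python. A rough sketch, assuming the third-party sounddevice and scipy packages, and an OS loopback device such as "Stereo Mix" on Windows selected as the default input so that the stream receives what would otherwise go straight to the speakers:

 import numpy as np
 import sounddevice as sd
 from scipy.signal import butter, lfilter

 RATE, CHANNELS = 44100, 2
 # Example filter: 4th-order low-pass at 4 kHz; design whatever you need.
 b, a = butter(4, 4000 / (RATE / 2), btype="low")
 zi = np.zeros((max(len(a), len(b)) - 1, CHANNELS))   # per-channel filter state

 def callback(indata, outdata, frames, time, status):
     global zi
     # Filter each incoming block, carrying filter state across blocks
     # so there are no discontinuities at the block boundaries.
     outdata[:], zi = lfilter(b, a, indata, axis=0, zi=zi)

 with sd.Stream(samplerate=RATE, channels=CHANNELS, callback=callback):
     input("Filtering; press Enter to stop.")

The callback runs on every audio block, so any per-block DSP can be swapped in; latency is governed by the stream's block size.)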