
Wikipedia:Reference desk/Computing: Difference between revisions

From Wikipedia, the free encyclopedia
:DPI in the sense that you are seeing there is an internal value that relates only to a calculation of what you'd ideally want the output size of a given set of pixel dimensions to be. There are [[Wikipedia:Reference_desk/Archives/Computing/2007_May_20#DPI_in_photoshop|lots]] [[Wikipedia:Reference_desk/Archives/Mathematics/2006_December_20#Pixels_per_inch|of]] [[Wikipedia:Reference_desk/Archives/Miscellaneous/2008_January_25#Printer_DPI_resolution|attempts]] to explain it in more practical terms in the Reference Desk archives, other than the [[Dots per inch]] article we have. Explaining it can be a little tricky if you are not used to thinking about images for the purposes of printing quality (which you presumably are not, since you are asking about DPI in the first place).
:The short story is that metadata DPI settings don't have anything to do with the pixel dimensions of the image. They have to do with how it is rendered on an output device. Monitors generally reproduce images at a 1:1 pixel ratio (which can vary in real-world DPI, but 72 and 96 dpi are usually the values you use in estimating), so changing the DPI setting won't change how it looks on screen. What matters in the end is the purpose of your output image. If you have a 300 pixel by 300 pixel image, and you print it out on something that requires 300 DPI to look "good", it will only print out "good" at 1" by 1". If you try to print it out at 2" by 2" it will be twice as poor (150 dpi). To figure out how much "detail" an image would have when printed out, you first figure out how big you'd want the printout to be (e.g. 5 inches across), then figure out backwards what an ideal DPI would be for the device (300 dpi is pretty standard as a minimum threshold for things looking OK, so that would mean your image would need to be at least 1500 pixels across. Depending on your output device, you might want many more pixels than that). You can set the internal DPI of an image to any arbitrary amount, but it doesn't affect the total pixels. So our 300 pixel by 300 pixel image might have an internal DPI setting that says it is meant to be 150 dpi (and thus could be printed out at 2 inches by 2 inches), or it could have an internal DPI setting that says it is meant to be 3000 dpi (and thus could be printed out at a maximum length of .1 inches on each side). None of that would change the number of pixels in the image, just how it is processed by a printer. The value of the DPI setting of a particular image does not, by itself, tell you anything about the amount of detail in the image; that's still always going to be in the pixel count. --[[User:Mr.98|Mr.98]] ([[User talk:Mr.98|talk]]) 17:39, 28 September 2010 (UTC)
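The arithmetic in the explanation above can be sketched in a few lines of shell (the 5-inch target and 300 dpi threshold are the example values from the post, not fixed constants):

```shell
# Required pixel count for a target print size at a given output DPI.
target_inches=5      # desired print width from the example above
printer_dpi=300      # common minimum threshold for an acceptable print
needed_pixels=$((target_inches * printer_dpi))
echo "need at least ${needed_pixels} pixels across"   # 1500

# Conversely: the largest "good" print size for a fixed image.
image_pixels=300
max_inches=$((image_pixels / printer_dpi))
echo "a ${image_pixels}px-wide image prints well up to ${max_inches} inch(es) across"
```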

So in short, the DPI figure only affects the image when printed, and is not used when displayed on a computer screen? [[Special:Contributions/92.24.188.89|92.24.188.89]] ([[User talk:92.24.188.89|talk]]) 18:10, 28 September 2010 (UTC)

Revision as of 18:10, 28 September 2010

Welcome to the computing section
of the Wikipedia reference desk.
Select a section:
Want a faster answer?

Main page: Help searching Wikipedia

   

How can I get my question answered?

  • Select the section of the desk that best fits the general topic of your question (see the navigation column to the right).
  • Post your question to only one section, providing a short header that gives the topic of your question.
  • Type '~~~~' (that is, four tilde characters) at the end – this signs and dates your contribution so we know who wrote what and when.
  • Don't post personal contact information – it will be removed. Any answers will be provided here.
  • Please be as specific as possible, and include all relevant context – the usefulness of answers may depend on the context.
  • Note:
    • We don't answer (and may remove) questions that require medical diagnosis or legal advice.
    • We don't answer requests for opinions, predictions or debate.
    • We don't do your homework for you, though we'll help you past the stuck point.
    • We don't conduct original research or provide a free source of ideas, but we'll help you find information you need.



How do I answer a question?

Main page: Wikipedia:Reference desk/Guidelines

  • The best answers address the question directly, and back up facts with wikilinks and links to sources. Do not edit others' comments and do not give any medical or legal advice.


September 23

Replacing software

I bought a new laptop which came with:

  • Microsoft Works
  • Microsoft Office 2007 (60 day trial)
  • Norton Internet Security (60 day trial)

I also purchased a copy of Norton 360 and a copy of Microsoft Office 2010 Student Edition and would like to know the following:

  • Before installing Office 2010, do I have to uninstall Office 2007?
  • Do I have to uninstall Microsoft Works? (I believe I've heard that they will clash with each other, although I do realise that, as I said, they both came together on this computer, which suggests that they don't.)
  • Do I have to uninstall Norton Internet Security before installing Norton 360?
  • To install both Office 2010 and Norton 360, do I actually need to use the discs, or can I use the product keys to update everything or something like that?

--81.23.48.100 (talk) 01:48, 23 September 2010 (UTC)[reply]

That's what I would do, just in case. I would uninstall all of those programs to prevent the possibility of any conflicts. Doing so will also free up disk space and clean up the registry. I haven't used Office 2010 much, but I did try to run Office 2007 and Office 2003 on the same computer and I had all kinds of issues. It took many hours of work before I gave up and just removed Office 2003. It'd also be a disaster if both Norton Internet Security and Norton 360 started automatically whenever you started your computer, right? Norton is a huge resource hog, and it always messes with programs on your computer, often preventing them from working properly. So, one fat nanny is probably more than enough in this case. Also, Works has fewer features than Office, so there's no reason to keep it.--Best Dog Ever (talk) 02:14, 23 September 2010 (UTC)[reply]

Just make sure you get the Norton Removal Tool before you try uninstalling any Norton software. ¦ Reisio (talk) 04:36, 23 September 2010 (UTC)[reply]

I've never heard of any problems between Microsoft Works and Office. Obviously, you probably won't be using Works much, but it may have some features that the student edition of Office 2010 doesn't (A calendar and database, for example). Unless you really need the extra disk space, or it does create some sort of problem, I wouldn't bother getting rid of it. Buddy431 (talk) 14:46, 23 September 2010 (UTC)[reply]
I've been running Microsoft Works along with Microsoft Office for three years and have never had a problem. Dbfirs 21:20, 23 September 2010 (UTC)[reply]
You might have a reason to keep Works if it has features that you need and are not present in Office, but in general I would uninstall all three pre-installed products/trials and install the replacement products using the disks that I should have got when I bought them. If you don't have install disks and instead are expected to use up your internet bandwidth and your own time to download them, do that before uninstalling stuff. And if downloading, only download from the proper, official site and not some random torrent you found somewhere. Astronaut (talk) 20:08, 24 September 2010 (UTC)[reply]

Internet

Occasionally a website will become unreachable from my internet connection, but the site is fully up and operational (as checked via proxy and downforeveryoneorjustme.com). I've tried flushing the DNS, which has no effect, and browsing to the site's IP address also doesn't work. Why does this happen? Is it a problem with my computer, the ISP, something else? 82.44.55.25 (talk) 17:34, 23 September 2010 (UTC)[reply]

That's hard for anyone else to tell... you can figure out a lot by using the proper network diagnostics (e.g. traceroute), but without such information, it's anyone's guess. If it's a popular site, and you're on a big ISP, and no one else on Twitter is complaining, it's probably something on your end. Aside from that, browsing by IP address rarely works nowadays due to Virtual hosting being used for most websites. If you can find someone in the same neighbourhood using the same ISP, you could compare whether they have similar problems. Unilynx (talk) 18:01, 23 September 2010 (UTC)[reply]
My router sometimes refuses to load pages from www.bbc.co.uk and I have to turn it off and on to get there. --Phil Holmes (talk) 18:06, 23 September 2010 (UTC)[reply]
Your IP looks kind of familiar, I think we've answered a lot of your questions regarding wget before, and what you asked suggests that you were trying to massively copy the content of a site that is not under your control. (This may or may not be legal, and we don't give legal advice here. I'm just saying you might want to check with a qualified person if what you're doing is legal.) A site owner that doesn't want to have her/his site "scraped" may try to keep you out using a robots.txt file - but if that fails, because you're ignoring the request in that file, s/he might put a temporary or permanent block on your IP, denying you access to her/his site. -- 78.43.71.155 (talk) 17:08, 24 September 2010 (UTC)[reply]

Timeout

The default timeout in wget is 900 seconds. That seems very long to me; usually if a site doesn't respond in 10 seconds it isn't going to. Would lowering the timeout to 10 seconds negatively affect wget's functioning? 82.44.55.25 (talk) 22:51, 23 September 2010 (UTC)[reply]

According to man wget, you may find the following options of interest:
-t number
--tries=number
Set number of retries to number. Specify 0 or inf for infinite retrying. The default is to retry 20 times, with the exception of fatal errors like "connection refused" or "not found" (404), which are not retried.
-T seconds
--timeout=seconds
Set the network timeout to seconds seconds. This is equivalent to specifying --dns-timeout, --connect-timeout, and --read-timeout, all at the same time.
When interacting with the network, Wget can check for timeout and abort the operation if it takes too long. This prevents anomalies like hanging reads and infinite connects. The only timeout enabled by default is a 900-second read timeout. Setting a timeout to 0 disables it altogether. Unless you know what you are doing, it is best not to change the default timeout settings.
All timeout-related options accept decimal values, as well as subsecond values. For example, 0.1 seconds is a legal (though unwise) choice of timeout. Subsecond timeouts are useful for checking server response times or for testing network latency.
--dns-timeout=seconds
Set the DNS lookup timeout to seconds seconds. DNS lookups that don’t complete within the specified time will fail. By default, there is no timeout on DNS lookups, other than that implemented by system libraries.
--connect-timeout=seconds
Set the connect timeout to seconds seconds. TCP connections that take longer to establish will be aborted. By default, there is no connect timeout, other than that implemented by system libraries.
There is a lot more related to timeouts and traffic. Just check the man page. -- kainaw 02:04, 24 September 2010 (UTC)[reply]
I've read the manual already but that doesn't answer my question. That tells me how to set the timeout settings; it doesn't tell me if setting the timeout to 10 seconds will negatively affect wget's functioning. 82.44.55.25 (talk) 11:04, 24 September 2010 (UTC)[reply]
Lowering the timeout will make it timeout quicker. What else do you expect it to do? -- kainaw 12:14, 24 September 2010 (UTC)[reply]
Clearly, he's wondering why it's set so damn high in the first place!
(The implication being that if there's a good reason he hasn't thought of, he won't mess with it.)
I'm afraid I don't have a good answer, but it looks like it's been that way for some time. My guess would be that it's there for some historical reason. Personally, I don't recall ever having had an issue with setting it to 30 and not worrying about it. APL (talk) 13:22, 24 September 2010 (UTC)[reply]
I have found it is often better to set a smaller timeout with more retries, particularly when the link is lossy and overloaded. If you are watching it you can abort it and redo it with the -c option to continue from where it got up to (if the web site supports it). Graeme Bartlett (talk) 08:31, 26 September 2010 (UTC)[reply]
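The tradeoff suggested above can be made concrete with the --timeout and --tries options quoted from the man page earlier in the thread (the URL is a placeholder, and the actual wget call is left commented out since it needs a network):

```shell
# Lower per-read timeout, compensated with more retries.
timeout=10
tries=5
# Worst case, a completely dead server ties wget up for timeout * tries seconds:
worst_case=$((timeout * tries))
echo "worst-case wait: ${worst_case}s (vs a single 900s default-timeout read)"

# The invocation itself; -c resumes partial downloads if the server
# supports byte ranges (example.com/file is a placeholder URL):
# wget --timeout="$timeout" --tries="$tries" -c http://example.com/file
```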

What is something like "javascript:OpenDetailWin('<value>')" called

I'm writing a doc page for {{cite gns}}. I need to identify (use a name for) something like "javascript:OpenDetailWin('<value>')". I need to say "a ??? will be found in the URL box". The government can't make anything easy and so the documentation is a bit convoluted. If you look at the doc page, you should know that I am still trying to find an easier way of locating the GUID (id number). –droll [chat] 23:14, 23 September 2010 (UTC)[reply]

How about "a JavaScript function"? --Mr.98 (talk) 23:27, 23 September 2010 (UTC)[reply]
Thanks, That's what I guessed but I hadn't a clue really. –droll [chat] 00:07, 24 September 2010 (UTC)[reply]
This format is sometimes called a "JavaScript protocol" URL. It's not really a protocol, but the "javascript:" prefix sits where the protocol (scheme) is usually specified in a URI. --Sean 16:02, 24 September 2010 (UTC)[reply]


September 24

Google Chrome (www.xnxx.com)

Note: The above site is NOT safe for work. Dragons flight (talk) 01:04, 24 September 2010 (UTC)[reply]

I have downloaded and installed the latest version of Google Chrome (not beta). I am trying to visit the above-mentioned website. The problem is, the thumbnail pictures are not visible there; they look like empty boxes. But I can see them in other browsers like Mozilla and IE8. Do I need any extension to view them?--180.234.20.97 (talk) 00:59, 24 September 2010 (UTC)[reply]

It works fine for me on Chrome. You might try reinstalling it, or simply emptying the cache and refreshing the page. Indeterminate (talk) 16:54, 26 September 2010 (UTC)[reply]

Playstation3 (PS3) SDK

I like to write 'hello world' programs for the computing platforms I own. Just for fun. For the Wii, it's easy through Wiibrew. For the Xbox360, I see it can be done via XNA and a $99 subscription. How is it done on a Playstation3? PS3 article says nothing, and google for 'ps3 sdk' returns nothing from Sony and forum posts full of hearsay. -- CraigKeogh (talk) 03:58, 24 September 2010 (UTC)[reply]

If you have an older PS3, you can install Linux on it and develop on that. However, you will be restricted from using much of the high-end capabilities of the system. As for an SDK, those are hard to get. Sony will not let you have one unless they like your idea. Even then, they reserve the right to force you to cease development if they change their minds. Eventually, they may allow homebrew games, but not now. I wouldn't expect it until they focus on the PS4. -- kainaw 04:09, 24 September 2010 (UTC)[reply]
The XNA "Creator's Club" package is a pretty unusual experiment on Microsoft's part. Usually to develop on a console you need either a professional developer's kit (not available to private individuals, only businesses) or a "homebrew" kit by people who have reverse engineered the console's hardware. (See Nintendo DS homebrew, Wii homebrew, PlayStation Portable homebrew)
However, there's no significant homebrew effort that I'm aware of for PS3, so your options are to install Linux on it and treat it like a Linux desktop, or to form a development company and buy an official dev kit. APL (talk) 13:11, 24 September 2010 (UTC)[reply]
Just a note... From what I've read about the PS3 (and as I alluded to above), you cannot simply purchase the Sony SDK. Sony has to accept your project and allow you to use the SDK. -- kainaw 13:39, 24 September 2010 (UTC)[reply]
scedev.net is Sony's web portal for licensed Sony developers. It has a "Licensing Information" section if you're interested in signing up; I think everything else on the site is private, for licensed Sony developers. As Kainaw implied above, though, it's a significant barrier to obtain a license from Sony. There is no cost, but you have to present your credentials and prove that you're a real software company; and I believe you also have to do a concept submission for one of their platforms, showing your schedule and budget and basically a commitment to develop and publish a game for the platform. They aren't going to approve hobbyists who just want to write "hello world", so the Linux idea above is your best bet. One unfortunate aspect of Linux on the PlayStation 3 is that apparently Sony implemented a hypervisor to stop you from being able to access the RSX 'Reality Synthesizer' GPU. Comet Tuttle (talk) 15:14, 24 September 2010 (UTC)[reply]
Oh, I'm wrong, there is a homebrew effort for PS3. (Of course there is. Why did I doubt it?) PSFreedom. So if you want to fool around with the ps3, you might check it out. But it's not really useful for making a game that anyone else will ever play. APL (talk) 15:41, 24 September 2010 (UTC)[reply]

computer science and technology

linking and loading web site and web server —Preceding unsigned comment added by Ankit Kumar Sinha (talkcontribs) 04:28, 24 September 2010 (UTC)[reply]

Sorry but could you rephrase as it's not clear what your question is. Also have you read our header which mentions "If your question is homework, show that you have attempted an answer first" Nil Einne (talk) 07:32, 24 September 2010 (UTC)[reply]

Some articles that seem relevant to your query: Hyperlink, downloading, web site, web server 82.44.55.25 (talk) 11:26, 24 September 2010 (UTC)[reply]

Moving the contents of Windows PST files to an archive on a linux machine

I have two mail accounts that I access from Windows PCs, using Microsoft Outlook. I archive old emails in .PST files on the Windows PCs. I would like to periodically (and manually) transfer the contents of old .PST files to an archive on my Linux machine (that dual boots with XP), from which I would like to be able to open the emails and forward archived emails to my Windows accounts. The archive must of course preserve attachments and have good searchability, and it will become quite large. I read recently that the Windows version of Thunderbird can read .PST files, so that might be one of the tools needed to achieve what I want. I do not have any experience in setting up a mail server on a Linux machine. Thanks in advance for advice on how to proceed and pointers to relevant howtos. --NorwegianBlue talk 07:37, 24 September 2010 (UTC)[reply]

If the Linux box is not the one with the main Outlook install, then I'd recommend installing a light-weight IMAP server on it (in Linux). You can then drag'n'drop from Outlook to the IMAP server. The archive would be placed into an 'old-emails' folder on the IMAP server.
Are you manually sharing the .PST files around your MS-Windows boxes? This would alleviate the need for that as well; you'd just do a 'send-and-receive emails' on the other Outlook instances to sync them to the IMAP server. For laptops I'd set Outlook to keep a copy of the IMAP contents, so you can read your non-archived emails on the road.
There are several packages available in most Linux distro repositories for this. The IMAP server itself can't send or receive email (an SMTP server is needed to receive, and IMAP isn't used for sending), so there are few security problems. I'll check how I did this and get back to you here. It wasn't difficult, but there were a few non-documented steps I had to take. CS Miller (talk) 13:01, 24 September 2010 (UTC)[reply]
It's been a couple of years since I did this (so I stand to be corrected with more up-to-date info) but Thunderbird can't read PSTs, and open-source tools and libraries to read them aren't very mature. There are several ways I know of to do what you want:
  1. On a Windows machine, import the PST into Outlook (not Outlook Express). Then, on the same machine, run Thunderbird and use its import-from-Outlook option. This does MAPI calls (rather than reading the PST file) to get the email data. This should produce an mbox format file (down in the hidden gubbins of the Thunderbird profile in the Application Data area) which you can copy over to the equivalent place in the profile of a Linux Thunderbird install.
  2. On your Linux machine, install Dovecot (it's in the standard package repositories for most distributions) and configure it as an IMAP server (its config options are fairly obvious). Now configure your Linux Thunderbird client to be an IMAP client of that Dovecot IMAPd. On Windows, import the PST in Outlook, then configure Outlook to be an IMAP client of that Dovecot IMAPd on Linux. Then drag-and-drop the emails from the place Outlook imported them to over to the IMAP account, and they'll be instantly available to the Linux tbird. Once that's done you can decide to keep them inside the IMAP server or you can have the Linux Thunderbird copy them down to its own store (again by drag and drop). Unfortunately in your position this requires the Linux and Windows machines to be running concurrently, and if you're dual booting this isn't possible.
  3. Use Fookes Software's Aid4Mail (again on Windows, but it's fairly basic stuff, so it should work in Wine). That will read the PST file and will export it to either .eml files (that's one file per email) or an mbox that thunderbird will read. Again you'd copy those exported files over to Linux - you'd put the mbox into the Thunderbird profile, or with the .emls you'd just leave them in a folder somewhere and open them in Thunderbird by double clicking. Aid4Mail (I think you'd need the Professional version) isn't free.
I'll dig around and see if Evolution has PST support (as Evolution tries harder to be an Outlook replacement than Thunderbird does). -- Finlay McWalterTalk 13:15, 24 September 2010 (UTC)[reply]
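Option 2's Dovecot setup can be sketched roughly as follows. The package name, file paths, and mbox location are assumptions for a Debian-style distro of that era; treat this as a starting point rather than exact steps:

```shell
# Install the IMAP server (Debian/Ubuntu package name assumed):
#   sudo apt-get install dovecot-imapd
#
# Minimal settings in /etc/dovecot/dovecot.conf (Dovecot 1.x-era syntax),
# serving each user's mail from mbox files under their home directory:
#   protocols = imap
#   mail_location = mbox:~/mail:INBOX=/var/mail/%u
#
# Restart the server, then point both Thunderbird (on Linux) and
# Outlook (on Windows) at this machine as an IMAP account:
#   sudo /etc/init.d/dovecot restart
```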
You've not said from where the Outlook machine is getting those emails. If they're coming from a Microsoft Exchange server (which is pretty common in corporate and institutional environments) then Evolution can access them on the Exchange server. If that's the case you might consider using Evolution on Linux rather than Thunderbird (it's pretty good, and has much better Exchange integration). If you're set on Thunderbird, I think Evolution can export to mbox. -- Finlay McWalterTalk 13:19, 24 September 2010 (UTC)[reply]
This free-software program claims to be able to convert PST to mbox on Linux. I haven't tried it (and don't have the wherewithal to try it now) but you can give that a shot. -- Finlay McWalterTalk 13:28, 24 September 2010 (UTC)[reply]
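The program linked above is not named in the archived text. Purely as an illustration, readpst from the free-software libpst project is one converter of this kind (whether it is the program meant here is an assumption); a conversion might look like:

```shell
# Hypothetical sketch: converting a PST archive to mbox with readpst
# (libpst project). archive.pst and ~/mail are placeholder paths.
#   readpst -o ~/mail archive.pst
# readpst writes one mbox file per folder found in the PST; the resulting
# mbox files can then be copied into a Thunderbird profile's Local Folders
# store, as with the other mbox-based approaches above.
```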
Thanks a lot for your replies! The emails originated from two different exchange servers. The plan is to just copy the .PST files, I don't intend to connect to the exchange server from the linux machine. I'll be working on this in the weekend. I'll possibly be back with more questions, and will report the results here. --NorwegianBlue talk 15:58, 24 September 2010 (UTC)[reply]

Temporary files

I work with a lot of temporary files which I delete after a few days. Basically constantly reading and writing small files. I'm currently using my computer's main hard drive for this. Would moving them to an external drive be a good idea to speed up my computer? If so, which would be better; an external hard drive or a USB flash drive? —Preceding unsigned comment added by 71.197.38.32 (talk) 15:09, 24 September 2010 (UTC)[reply]

Probably not. The SATA-2 connection you'd typically run in a desktop machine is more than 6 times faster than the USB2 connection you'd use for an external disk. An internal flash SSD (on that SATA-2 connection, or external on an eSATA connection) should be faster (at a nontrivial price); to what degree depends greatly on your specific usage pattern. It may be that you're already mostly dealing with those files in the computer's (RAM) disk cache, in which case a faster drive won't help. You'd have to experiment to see for sure. -- Finlay McWalterTalk 15:30, 24 September 2010 (UTC)[reply]
Just as an aside, if you're using these files frequently enough that you're concerned about performance, you probably don't want to be using temp files in the first place. I only use temp files for short-term persistent storage, or as a hack to get around some usage-specific limitation. If you really need to use files for some reason, and you have enough free RAM, try making a ramdisk and writing your files there - that will be dramatically faster than using hard disk I/O. --Ludwigs2 22:04, 24 September 2010 (UTC)[reply]
If creating a lot of small temporary files is really causing performance problems, it may be the seeking, not the raw bandwidth. In that case putting the files on a second drive could help a lot even if the interface is slower. ImDisk is a good free RAM disk driver for Windows. -- BenRG (talk) 02:29, 25 September 2010 (UTC)[reply]
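A minimal sketch of the ramdisk idea on Linux. The mount commands are shown as comments since they need root (the mount point and size are assumptions); the runnable part demonstrates the small-file churn workload using a throwaway directory as a stand-in for the tmpfs mount point:

```shell
# Creating a tmpfs-backed ramdisk (requires root):
#   sudo mkdir -p /mnt/ramdisk
#   sudo mount -t tmpfs -o size=256m tmpfs /mnt/ramdisk

# Small-file churn workload, using a temp directory as a stand-in
# for /mnt/ramdisk:
workdir=$(mktemp -d)
for i in $(seq 1 100); do
    echo "scratch data $i" > "$workdir/tmp_$i.txt"
done
count=$(ls "$workdir" | wc -l)
echo "wrote ${count} small files"
rm -rf "$workdir"   # the "delete after a few days" step, compressed
```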

Hard Drive

I am wondering, is there a difference between internal and external hard drive life-span? If I had an external usb hard drive and left it on 24/7 always in use, would it last as long as an internal drive left on 24/7 always in use? Not including accidents like dropping the external drive. 71.236.203.190 (talk) 15:15, 24 September 2010 (UTC)[reply]

There's no reason to suppose so, providing the external enclosure has adequate ventilation. External enclosures simply contain the same disks that are otherwise sold for desktops and laptops. -- Finlay McWalterTalk 15:21, 24 September 2010 (UTC)[reply]
Externals may last longer on laptops, due to (a) the restrictions on ventilation in the limited laptop frame and (b) the random abuse that laptops are subject to. --Ludwigs2 03:37, 25 September 2010 (UTC)[reply]
On the other hand, there are other factors that may shorten the life of an external; it may be more likely to get yanked by its cable or knocked off a desk &c, in which case there may be either damage to the disk itself, or to the container. Plus, computer desks & laptop bags &c are usually designed with good placement/protection for the computer but not necessarily for other devices. The most recent failure I saw involved the plastic stub inside the device's USB socket (y'know, the one that supports the metal contacts) being damaged, along with the plastic backing at the rear of the socket, leaving it unusable, just because of a strong tug on the USB cable. If there's no USB cable in the first place, that failure mode is eliminated. bobrayner (talk) 15:48, 28 September 2010 (UTC)[reply]

Transferring songs from the mini-disc player to the computer, is it possible ????

Today we have iPods and MP3 players and the like, but some years back the MiniDisc player was very popular, and I'm sitting on one with many songs on it. I doubt it's possible, but I'll ask anyway if someone might know: is it possible to transfer these songs from the MiniDisc to the computer? The usual thing would be to download/transfer songs from the computer to the MiniDisc of course, but I need the opposite, songs back on a computer. Old songs, many of which are hard to come by, that I would like to have on a computer and perhaps on my iPod, because the MiniDisc won't live forever and I hope the songs won't be lost with it.

So if possible, how does one go about doing this, transferring songs from Minidisc to computer?

Thanks :)

Krikkert7 (talk) 16:12, 24 September 2010 (UTC)[reply]

This MiniDisc FAQ, question #6, says that the MiniDisc was specifically designed with a "firewall" preventing you from digitally transferring audio from a MiniDisc to your PC. It goes on to say that there is a US$5,000 package, aimed at audio professionals, that will do this despite the firewall. So, the way to transfer is going to have to be to use the analog hole: Hook up a cable from your MiniDisc player's "audio out" port into your PC's "audio in" port, and use ordinary audio capture software on the PC to capture the audio. Comet Tuttle (talk) 16:33, 24 September 2010 (UTC)[reply]
PS: Hold the phone: Our own MiniDisc article mentions the "MZ-RH1" MiniDisc player, which apparently lacks the firewall and may let you digitally copy audio back and forth between your PC and a MiniDisc. Comet Tuttle (talk) 16:36, 24 September 2010 (UTC)[reply]


Thank you. Your answer is very helpful :) I'll give it a try ... 84.49.182.137 (talk) 17:23, 24 September 2010 (UTC)[reply]

Latest Mozilla Beta Version

I have downloaded both of these beta versions:

  • Firefox Setup 4.0 Beta 6
  • Firefox-4.0b7pre.en-US.win64-x86_64.installer

So which one is the latest version? My OS is W7 64bit.--180.234.0.174 (talk) 16:36, 24 September 2010 (UTC)[reply]

The latest beta released on Firefox's beta site (and through auto-update) is 4.0b6. If you are looking at the nightly builds, 4.0b7pre would be the latest build. The most stable beta build will likely be 4.0b6 right now. 206.131.39.6 (talk) 19:28, 24 September 2010 (UTC)[reply]

Help! How do I announce my blog

Help, I need some assistance. How do I announce my blog, which I have just started? There are hardly any readers at the moment. 122.107.192.187 (talk) 21:52, 24 September 2010 (UTC)[reply]

The experienced and established bloggers I have met all followed more or less the same strategy. 1. Provide a steady stream of good content. 2. Give many, frequent, good, non-spammy comments (e.g. things that will make people interested in finding out who you are and what you have to say) on other blogs that might have a similar readership, with the link to your blog as your URL. That's the account of it I heard from a few such bloggers, anyway. They emphasized how difficult it was in the beginning and how much work they did to "publicize" their blog in a non-spammy fashion. (Spamming your blog — that is, trying hard to get attention without contributing much of substance to others — is probably NOT the way to grow your readership.) The regularly updated content convinces people that your site is worth bookmarking or following or whatever (and not just a dead site), while the thoughtful comments drive traffic and PageRank towards your site. --Mr.98 (talk) 01:16, 25 September 2010 (UTC)[reply]
Yeah, don't write spammy comments trying to advertise your blog on other sites. It will not give a very good impression of either you or your site, and it probably won't mean much more traffic either. Chevymontecarlo - alt 08:30, 25 September 2010 (UTC)[reply]


September 25

Is Windows hacker-friendliness now officially deliberate?

On any given day, as I understand it, Windows has about half a dozen "zero-day" security vulnerabilities that can be used to take over any computer running Windows anywhere in the world — so far as I can tell, this will be true in perpetuity.

Now according to [1], a set of "undisclosed vulnerabilities" was used to infiltrate Iranian computers for the purpose of damaging industrial facilities, in an attack believed to have originated with some nation state.

Does the news surrounding this event rise to the level of demonstrating that the vulnerabilities are officially tools for the U.S. government, rather than random errors? Wnt (talk) 06:06, 25 September 2010 (UTC)[reply]

No. The holes used by the Stuxnet worm have been patched. The vulnerabilities are not "undisclosed." Microsoft patches all major holes within about a month after they are discovered. (See Patch Tuesday.) Most attacks are directed against browser plugins, like Adobe Reader and the QuickTime Player, which are not maintained by Microsoft. Other viruses are executed on purpose by users, who download and execute the virus because they think it is something else (e.g., free software), or because they are spies trying to infect their employer's computers. The latter is almost certainly the case with Stuxnet. It is introduced to a network using a USB stick, meaning you need a spy to insert one into a computer on the plant's local network. I'm not sure what you mean when you stated, "this will be true in perpetuity." As I said above, all serious holes are actually patched and it is almost impossible nowadays for a script kiddie to remotely infect an up-to-date machine running Windows XP or later. Whoever wrote Stuxnet almost certainly came across the vulnerabilities using trial and error.--Best Dog Ever (talk) 06:32, 25 September 2010 (UTC)[reply]
But note that a successful attack via a browser plugin requires both the browser and the OS to be buggy, and both are under Microsoft control. A proper browser will not allow a plugin to execute arbitrary code. A proper OS will not allow a browser plugin to execute arbitrary code, either. Yes, that means we have no proper OSes, but some are less improper. --Stephan Schulz (talk) 09:48, 25 September 2010 (UTC)[reply]
Microsoft is responsible for making sure Firefox, Chrome or Safari are proper browsers? Nil Einne (talk) 18:06, 25 September 2010 (UTC)[reply]
Note that the essential problem with having "purposeful" security vulnerabilities of that sort built into all copies of an OS is that the costs would likely be much higher than the benefits. On the off-chance that nobody noticed them and Iran didn't patch them, they would make vulnerable a huge amount of US and allied software. That's not so great. Now it certainly is the case that the US government investigates security vulnerabilities and stores up "hacks" for a rainy day, and that's what your article is about. (Bruce Schneier has written on this a bit, as well.) No doubt other nations do the same thing (it's pretty well established that China and Russia have excellent hacking capabilities, and part of that is having a list of vulnerabilities that have not yet been made public.) And it's definitely the case that the US government has a history of writing bugs/backdoors into specific systems that it makes available for export. (See Siberian pipeline sabotage.) But to say that all Windows bugs/vulnerabilities are there because of the US government wanting them is silly and unlikely. --Mr.98 (talk) 12:48, 25 September 2010 (UTC)[reply]
You might be interested in _NSAKEY. Some people said this was a US government backdoor in Windows; MS denied it. Tinfoilcat (talk) 16:56, 25 September 2010 (UTC)[reply]
You're looking for articles such as Computer surveillance, Backdoor (computing), Cyber spying. It's quite likely there are certain "bugs" intentionally placed, but there are just as many that are actual bugs due to incompetence. Smallman12q (talk) 19:57, 25 September 2010 (UTC)[reply]
Here are two relevant links.
Wavelength (talk) 20:19, 25 September 2010 (UTC)[reply]
Why are you linking paranoid articles about computer software on the website of a business that sells mail-order diet products? -- BenRG (talk) 21:06, 25 September 2010 (UTC)[reply]
I added links to articles which I believe are relevant to this discussion. Your use of the word "paranoid" suggests that the articles are incorrect. Selling those products is irrelevant to whether the articles are correct and relevant. (I did not intend to introduce spam, and I did not take the time to check that feature before posting the links.)
Wavelength (talk) 21:28, 25 September 2010 (UTC)[reply]

To be fair, Microsoft with its Windows backdoors isn't alone. Apple has been shown to have certain backdoors, as have other major software vendors. The question that arises is whether or not it was intentional, and for how long the "glitch" was known. Smallman12q (talk) 02:11, 26 September 2010 (UTC)[reply]

Memory allocation

I have a new memory-intensive game, which is running too slowly for my liking. I've got 2 GB of RAM, but I note that even when the game is struggling, the task manager indicates it is using about 900 MB. Since I'm not running much else at the same time, is there any way I can increase this? Grandiose (me, talk, contribs) 09:56, 25 September 2010 (UTC)[reply]

If there is free memory, and it's not taking it, then likely it doesn't need it, and your assumption that the slowness is due to memory problems is thus likely false. -- Finlay McWalterTalk 10:10, 25 September 2010 (UTC)[reply]
If the game is not utilizing your full physical memory space, you should make sure you have the latest DirectX version your graphics card supports installed, or consider upgrading your graphics card to a better one. Sir Stupidity (talk) 00:44, 26 September 2010 (UTC)[reply]

wget

I want to exclude a single url from wget when doing recursive retrieval. The problem is, everything I've tried has also excluded other urls. For example, I want to exclude http://example.com/a so with the --reject option I tried:

-R a
-R "a"
-R "*/a"
-R http://example.com/a
-R "http://example.com/a"

but

http://example.com/a
http://example.com/ab
http://example.com/abc
http://example.com/abcd
http://example.com/abcde
http://example.com/abcdef
etc

are also excluded. How do I make wget exclude just that one url without affecting others? 82.44.55.25 (talk) 15:35, 25 September 2010 (UTC)[reply]

Try "http://example.com/a$" (the $ matching the end of line) 94.168.184.16 (talk) 20:15, 25 September 2010 (UTC)[reply]
Thanks but it didn't work. I'm on Windows if that makes a difference 82.44.55.25 (talk) 21:33, 25 September 2010 (UTC)[reply]
wget's file type patterns don't offer full regex syntax. But take a look at this. It looks like you can combine reject and accept statements, so if you had "-R example.com/a -A example.com/a*", it should accept all the things in your second list. I think. I'm not sure whether order is important. Indeterminate (talk) 16:23, 26 September 2010 (UTC)[reply]
Thanks, but that didn't work either :( When I tried it, wget rejected "index.html" for some reason and the download stopped at that point. 82.44.55.25 (talk) 09:57, 28 September 2010 (UTC)[reply]
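A note for later readers: wget's -R/--reject patterns are matched against file names as shell-style globs or suffixes, not as regular expressions, which is why an end-of-line anchor like $ has no effect there. Versions of wget from 1.14 onward (released after this thread) added a --reject-regex option that matches the entire URL with a POSIX extended regex, where a trailing $ does work. A sketch of the anchoring idea, using grep -E (which accepts the same ERE syntax) to show which of the URLs above the pattern would reject:

```shell
# End-anchored ERE: matches http://example.com/a exactly, but not
# longer URLs such as http://example.com/ab or http://example.com/abc.
pattern='http://example\.com/a$'

for url in http://example.com/a http://example.com/ab http://example.com/abc; do
  if echo "$url" | grep -Eq "$pattern"; then
    echo "reject $url"
  else
    echo "keep   $url"
  fi
done

# With wget >= 1.14 the equivalent exclusion during recursive
# retrieval would be (untested here against a live site):
#   wget -r --reject-regex 'http://example\.com/a$' http://example.com/
```

Only the first URL is rejected; the longer ones survive because the $ anchor requires the URL to end right after "/a".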

My anti-virus thinks that computer game is virus/trojan

I am currently trying to install CNC Tiberian Sun on my computer. I have an antivirus system through my ISP (Wild Blue) on my computer. It thinks it's a trojan. Here is the message: "Real-time access has blocked access to Trojan-Downloader:W32/Renos.gen!Q virus was detected as a potential security threat. It is a read-only file. File: game.exe Path: F:/install" 12.213.80.54 (talk) 18:26, 25 September 2010 (UTC)[reply]

If you downloaded CNC Tiberian Sun from a torrent, or you got your copy from a friend's hard disk, then it might indeed be a trojan. Delete it. If you got it from an original disc or you bought it via Steam or some other legitimate electronic download system, then your solution is just to turn off the virus checker, install, then re-enable the virus checker, and if possible configure the virus checker to not check the CNC Tiberian Sun folder anymore. Comet Tuttle (talk) 21:00, 25 September 2010 (UTC)[reply]

Facebook login

Two users work on one (windows XP) PC with one user account. Because there are two facebook accounts, there is a lot of logging off and logging back in again. Is it possible to make a favorite/bookmark that will do the entire login process as a single URL? -- SGBailey (talk) 21:24, 25 September 2010 (UTC)[reply]

If snooping/impersonation isn't a concern, one of you could log onto Facebook with only one browser, say, Mozilla Firefox; and the other could log on only with a second browser, say, Google Chrome; and leave cookies on and so forth for Facebook and you would each never have to log on or off again. Comet Tuttle (talk) 06:22, 26 September 2010 (UTC)[reply]
At least some of the major browsers support multiple profiles with different cookies, bookmarks, history, addons, etc., independently of Windows/Linux/Mac user accounts. See here for example. Alternately, here's a Firefox extension that does more or less what you want. -- BenRG (talk) 07:09, 26 September 2010 (UTC)[reply]
Why not create a second user on the PC? You will then have separate internet favorites, cookies, history and so on. Astronaut (talk) 10:48, 26 September 2010 (UTC)[reply]

Sounds like it isn't possible with MSIE then. Ah well. Thanks folk. -- SGBailey (talk) 20:00, 26 September 2010 (UTC)[reply]

Should work fine if you use two Windows user accounts. It's true IE doesn't support separate user profiles outside of user accounts, AFAIK, probably because MS tries to encourage people to use the OS feature. Barring that, it's possible using some sort of cookie manager would work, but I haven't tried. Nil Einne (talk) 21:02, 26 September 2010 (UTC)[reply]

September 26

Android Tablets

Can I get the names of some Android-powered tablets that:

  • Have at least a 1 GHz processor.
  • Have 512 MB of RAM.
  • Have a USB port.
  • Have at least 8 GB of built-in storage.

--Melab±1 03:28, 26 September 2010 (UTC)[reply]

The Asus Eee Pad EP101TC will meet those requirements.--Best Dog Ever (talk) 04:07, 26 September 2010 (UTC)[reply]

Send a kiss?

I have a touch screen laptop, I've just got it; it's one of those ones where the screen can flip around so it can be turned into a tablet. Sometimes it's nicer to handwrite things as it seems more personal, but I don't appear to have software installed to do that (I have a HP TouchSmart tm2 - it can support multitouch) and so I just open up MS Paint and write in that. It's not the best, but it does the job. I wanted to send a kiss though, not just as "x" but properly send one, by kissing the screen and having my lip marks on there to put at the end of a letter, but unfortunately that doesn't work at all with MS Paint; it draws a couple of apparently random lines with no resemblance to lips at all. So how would I do it? Thanks for any help you can provide, and if you can't provide any, thanks for reading anyway. 192.150.181.62 (talk) 14:22, 26 September 2010 (UTC)[reply]

Multi-touch screens only record a handful of individual touch points (see List of Multi-Touch Computers and Monitors). Most do only two or three; a few do 32 or more (mostly really expensive special-purpose things like Microsoft Surface). To properly image something touching the screen you'd need many more points. Even an array of 16x16 points (needing 256 individual touches, more than all but one of the displays on that list) would give a very blocky image (a lip impression would be nothing more than an indistinct shape). So we're quite a way off the technology being able to do what you suggest (at least for things that are commercially and generally available). -- Finlay McWalterTalk 14:39, 26 September 2010 (UTC)[reply]
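To put rough numbers on how coarse that sampling is (the 40 mm lip-print width below is my own illustrative figure, not from any spec): even sixteen sample points across the print leave a couple of millimetres between samples, versus the sub-millimetre detail a recognizable lip image would need.

```shell
# Illustrative arithmetic: distance between adjacent touch samples
# across a 40 mm wide lip print, for a few touch-point counts.
width_mm=40
for points in 2 3 16; do
  spacing=$((width_mm / points))   # integer mm is precise enough here
  echo "$points touch points: ~$spacing mm between samples"
done
```

Two or three touch points (the common case in 2010-era hardware) give samples roughly 13-20 mm apart, which is why the result is an indistinct blob rather than lips.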
Immediately after applying black lipstick, kiss a sheet of paper. Wait for it to dry. Scan the paper. Marnanel (talk) 15:18, 26 September 2010 (UTC)[reply]

It's a shame commercially viable technology is unable to do very much. Thank you both, and I think I might just try that low-tech approach. 192.150.181.62 (talk) 15:42, 26 September 2010 (UTC)[reply]

Well, your assessment of what it should be able to do is based on a basic misunderstanding of how it works. In any case, this is a pretty specific requirement you have for it, one without a lot of other applications. To increase the density of the touch points to the level that you'd want would probably be expensive, with almost no everyday benefits. By contrast, just using a scanner to import an image (which is all you are really trying to do) is tried-and-true, easy-and-cheap technology. You'll only need to do it once to have an infinite number of kisses to apply. I'm not sure you should be disappointed with it! --Mr.98 (talk) 15:51, 26 September 2010 (UTC)[reply]
If you want to use the tablet, a simple solution would be some sort of software which captures an image from a webcam, if the tablet has one. Nil Einne (talk) 16:51, 26 September 2010 (UTC)[reply]
And if you have a small piece of glass or clear plastic you can get the full squished lip effect. ¦ Reisio (talk) 07:05, 27 September 2010 (UTC)[reply]

Spurious http redirect, can anything be done?

This is in reference to an issue that came up at WP:AN#What's this? (strange site-redirect). Somebody tried to go to wikipedia.org but accidentally typed wikkipedia.org instead, and was redirected to a site called survey.prizesgiveaway.com. It seems to me that that's either a violation of the .org rules or a hacked system, but the question is, if one wanted to take action, what action would be appropriate? Looie496 (talk) 16:55, 26 September 2010 (UTC)[reply]

That domain name is registered through Moniker. If you have a complaint, you must contact them. -- kainaw 16:58, 26 September 2010 (UTC)[reply]
(Without offering legal advice) - it is probable that Wikimedia Foundation could sue the infringer on some grounds of trademark infringement, or some similar thing. However, litigating would be expensive - the benefits of shutting down this site are probably not worth the cost. Such cases of "stolen" web domain-names are often settled out of court with informal exchanges; for example, as stated in Microsoft Bob, Microsoft traded "bob.com" for "windows2000.com" after deciding that one was more valuable than the other. In general, unless the site is pretending to be something it is not, it will be difficult to construct a clear-cut legal case against them; but a civil lawsuit can be filed for virtually any reason, including "diluting the brand-name." Nimur (talk) 17:29, 26 September 2010 (UTC)[reply]
Yes, but I'm pretty sure there is a rule that .org addresses can't be used by for-profit entities. How that rule is implemented and enforced is more than I know, though. Looie496 (talk) 17:38, 26 September 2010 (UTC)[reply]
According to our .org article, "Although org was recommended for non-commercial entities, there are no restrictions to registration. There are many instances of org being used by commercial sites". Rojomoke (talk) 18:00, 26 September 2010 (UTC)[reply]
For example http://yahoo.org 82.44.55.25 (talk) 18:40, 26 September 2010 (UTC)[reply]
Yes, .org is entirely voluntary compliance, not followed at all. It is for all intents and purposes unregulated (unlike .edu or .gov or .mil). You should not assume that .org sites are any less commercial or more trustworthy than .com or .net sites. --Mr.98 (talk) 22:03, 26 September 2010 (UTC)[reply]
As Typosquatting notes, a lawsuit may not be necessary, as there is the Uniform Domain-Name Dispute-Resolution Policy (although either party can still sue after losing that*). It still costs money, but the process was set up to try to reduce the costs, and I believe it usually works that way. Notably, from Wikipedia:Wikipedia Signpost/2009-08-17/News and notes#Foundation secures typosquatting domains it seems the foundation is actually active in this area, so there's a fair chance they'll be interested, although I suspect they'll already know. *Note that there are specific areas of law in the US that deal with this if it does go to a lawsuit. Nil Einne (talk) 20:20, 26 September 2010 (UTC)[reply]
I've left a note on User talk:MGodwin Nil Einne (talk) 20:52, 26 September 2010 (UTC)[reply]
You read that red banner on top of his talk page before doing so? Guess not... -- 78.43.71.155 (talk) 11:58, 28 September 2010 (UTC)[reply]

Remote Desktop

I'm trying to use the windows remote desktop between two computers (XP, 7). The problem is, if I don't set a password for the account on the computer I want to connect to, the remote desktop program refuses to connect saying "account restrictions". But if I do set a password for the account, I still can't connect to the computer because it's sitting at the login screen waiting for the password to be entered on the physical keyboard. What is the solution? 82.44.55.25 (talk) 16:58, 26 September 2010 (UTC)[reply]

Make sure the account you are logging into is both a member of the Remote Desktop users group and has a password. I know that missing either of those will result in an "account restrictions" error. -- kainaw 17:05, 26 September 2010 (UTC)[reply]
It is. I can connect just fine if I enter the password on the host computer's keyboard, but unless I do that it's like the computer doesn't activate the network card and I can't detect or connect to it, despite the fact that it's on and waiting. More strangely, if I log in then out again, I can then connect to it from the other computer. I think it's a problem with activating the network card, but I can't see any options to change it. 82.44.55.25 (talk) 17:14, 26 September 2010 (UTC)[reply]
Is that a wireless network, by chance? If so, your idea regarding the network being inaccessible might truly be the reason. I've seen Win7 machines with wireless interfaces staying offline until a user logs on. Maybe Win7 stores the network key (if it is a WEP, WPA or WPA2 encrypted wireless network) in user space? -- 78.43.71.155 (talk) 09:37, 27 September 2010 (UTC)[reply]
It's not wireless, I'm connecting the two computers with an ethernet cross-over cable. 82.44.55.25 (talk) 09:56, 28 September 2010 (UTC)[reply]

How to diagnose slow internet?

In recent months I've noticed that my supposedly broadband internet can be as slow as when I had dial-up. This is particularly true when I expect it has lots of users, such as during rainy Sunday evenings. Currently pages from Wikipedia are loading extremely slowly, if at all.

Is there any way of finding out what the slowness is due to? In other words, is there any objective way of determining if the slowness is due to poor service from my ISP, or some other cause? Thanks 92.15.22.106 (talk) 17:52, 26 September 2010 (UTC)[reply]

Try running WinMTR to a known stable site like google.com. In the past I've had problems with my ISP's immediate upstream provider (a bulk internetwork provider) - packets were delivered quickly and reliably to the ISP's immediate node, but their jump from there to the bulk supplier was slow and unreliable. This is mostly of academic interest (as there's nothing I can do), other than helping my ISP yell at their supplier. -- Finlay McWalterTalk 23:39, 26 September 2010 (UTC)[reply]

Thermal paste on an i5-750

I finally decided to upgrade my PC today (I've had the parts lying on the dining table for about 3 months!). I have an i5-750 and an Asus motherboard and some DDR3 memory. Everything seemed to go OK: all the fans are going, it's booted up fine, and I've been installing the huge number of Windows/software/hardware updates that take forever on my 2 meg connection for about 2 hours. I got bored and installed the CPU temperature probe program, and my CPU temp is hitting 60 °C quite often, which sounds an alarm in the Asus software.

I bought the boxed i5-750 with the stock heatsink/fan. I assumed those metallic grey bits on the heatsink were thermal paste? But now I'm a bit concerned. All the googling I do for 'i5-750 p7p55d temperatures' is about overclocking and using thermal paste. It's been about 10 years since I last built a system from scratch, and I didn't have to worry about thermal paste back then. I haven't tried to overclock, but should I have stuck some paste in there? Writing now on wiki the temp is a happy 27 °C. 87.113.175.51 (talk) 19:11, 26 September 2010 (UTC)[reply]

If the CPU and fan came as a set, then the grey stuff on the fan is thermal paste. If it was already there you should not add more, since too much paste acts as an insulator. Modern motherboards will shut down before CPU heat causes damage, so unless your computer starts to "mysteriously" shut itself down I would not worry. Taemyr (talk) 19:20, 26 September 2010 (UTC)[reply]
Yep it was a boxed set, not separate bits. With it being so long since I installed a new mobo, CPU and RAM I didn't want to risk a stupid mistake. Thanks for the reassurance :-) Spoonfulsofsheep (talk) 19:30, 26 September 2010 (UTC)[reply]

Is it bad for my laptop to leave it playing me music all night long every night?

Is it bad for my laptop to leave it playing me music all night long every night? It's never switched off; when I leave it I just close the lid and it logs me off and stands by. Thanks. 192.150.181.62 (talk) 23:16, 26 September 2010 (UTC)[reply]

There's no inherent reason that it should be a problem, assuming that basic things (like overheating) are not a problem. --Mr.98 (talk) 01:29, 27 September 2010 (UTC)[reply]
Short answer: no. Longer answer: computer components have finite life expectancies that might be shortened somewhat by leaving them on for more hours per day, the hard drive and the display backlight especially. The screen will (or can be set to) shut off after a few minutes. If you're playing the same set of songs over and over, they will be cached in memory and the hard drive will be able to shut off also. Otherwise, it will probably spin all night (which is not the end of the world). -- BenRG (talk) 01:40, 27 September 2010 (UTC)[reply]

Okay, that's great, thanks!! 192.150.181.62 (talk) 10:10, 27 September 2010 (UTC)[reply]

Burn mark on laptop screen

Annoying "burn mark"

Lately, I've noticed what resembles a "burn mark" appearing in the lower right corner of the screen of my five-year-old FSC laptop. I also notice that area of the screen becoming really, really hot. The burn mark appears after a few minutes of normal use - it is not there after a re-start. The computer otherwise behaves normally. Does anyone have any idea what the cause of this might be? Thanks in advance, decltype (talk) 23:29, 26 September 2010 (UTC)[reply]

Try squeezing that area of the screen with your finger and thumb - does the mark move and wobble? If so, that might be the layers of the display sandwich separating, which mostly happens when the screen has been bumped, or the fastenings that hold the screen together have worked loose or snapped. -- Finlay McWalterTalk 23:35, 26 September 2010 (UTC)[reply]
Thanks, I tried that, but the area is really too hot to touch. The description below is a closer fit to the problem I'm experiencing. Regards, decltype (talk) 07:26, 28 September 2010 (UTC)[reply]

Five year old? There's your cause. A new laptop will cost less and have more. ¦ Reisio (talk) 07:02, 27 September 2010 (UTC)[reply]

That's not a helpful answer. 82.44.55.25 (talk) 09:19, 27 September 2010 (UTC)[reply]
Sure it is. ¦ Reisio (talk) 17:30, 27 September 2010 (UTC)[reply]
No it isn't. The OP came here with a specific problem and your answer was basically "your computer is old buy a new one". Firstly, just being old would not cause the problem the OP reported - laptop screens do not have an expiration date and they do not just break after 5 years. Secondly, recommending they get a new laptop doesn't answer the question. They might not be able to get a new laptop, or might need that specific laptop for some reason like running legacy software. 82.44.55.25 (talk) 20:04, 27 September 2010 (UTC)[reply]
Sure it is. Hardware does wear out and break over time, and that might be the cause of the burn mark, which absolutely answers his question (not that what I said couldn't have been a useful comment regardless... we're allowed to comment as well). They might not be able to do a lot of things based on "the answer", that doesn't make it any less of one. ¦ Reisio (talk) 04:03, 28 September 2010 (UTC)[reply]
Thanks for the comment. It is true that the laptop is old, but if you compare it to a brand new computer, the difference in performance for single-threaded applications isn't really that great. But as it turns out, this is probably what I'll end up doing. Regards, decltype (talk) 07:26, 28 September 2010 (UTC)[reply]
It seems, according to this site, that the CCFL is broken. While you can replace the CCFL in a laptop by yourself, it is a delicate job and if you aren't experienced it can take many hours. Taking it to a repair shop will cost more and they will probably replace the whole LCD panel since it's much easier to do. If you are going to fix this yourself, buying a new display will likely be the best investment. 206.131.39.6 (talk) 16:13, 27 September 2010 (UTC)[reply]
Thank you very much. The description of the problem on that site, even the image, seems very similar to the problem I'm having. I guess I have an excuse to get a new laptop then. A new display for a five-year-old laptop doesn't seem like the best investment to me. Regards, decltype (talk) 07:26, 28 September 2010 (UTC)[reply]
Note that a laptop with a somehow damaged display will usually still work fine when hooked up to an external screen, or could be re-tasked for something that doesn't require a screen (a small media or general file server, maybe?). So, instead of throwing it away, you might want to re-purpose it, or find a computer geek in your neighborhood who wants it for such a purpose. When giving it away, DBAN is a good way to make sure none of your personal data remains on the hard drive. -- 78.43.71.155 (talk) 10:38, 28 September 2010 (UTC)[reply]

September 27

Linksys WAP54G wireless access point help

This question concerns a Linksys WAP54G wireless access point. (Note that this is a simple access point, not a router.) I was attempting to set an access password (taking my leave from the legions of unsecured Linksys access points :-) ), but I ran into difficulties. I have two questions:

  1. I had temporarily made a direct wired ethernet connection between my Mac and the access point. I had succeeded in accessing the configuration screens at http://192.168.1.245/. Confused about what it wanted for WEP keys, I clicked a "Help" button. It tried to open a new browser window (also on 192.168.1.245, presumably to display the help in), but the connection timed out. After that, the main configuration screen was also displaying a "can't connect" message. After that, no matter what I did, I could never connect to 192.168.1.245. (I tried resetting the access point, and everything.) Is there anything else I could try? Did the thing break, just when I hit the Help link?
  2. After giving up on setting a password, I reconnected everything the normal way, and went back to browsing the net wirelessly (and unsecuredly). I idly tried hitting http://192.168.1.245/ again. To my dismay, it worked! And I was able to set a password, so my access point is now secured! But does this mean that anybody driving by could have taken control of my access point away from me, by setting passwords I didn't know? (Or did maybe the same thing that broke under #1 cause 192.168.1.245 to be accessible wirelessly only as it wrongly became inaccessible wiredly?)

I'm glad to have finally secured the thing, but I'm bothered that its default security model might have been even worse (much worse!) than I had thought... —Steve Summit (talk) 04:08, 27 September 2010 (UTC)[reply]

My Linksys router has a setting under Administration --> Management called "Wireless Access Web" that lets you set whether people can administer the router wirelessly. I have it disabled. Check that to see if it's enabled. But if you were able to set the password wirelessly, then the answer to your question about wireless administration is obviously, "yes."
As for a password, it sounds like you set a password for the web interface, but haven't set a WEP or WPA key for connecting to the network? Bad move. People can still use your wireless connection and therefore intercept the password needed to log in to the AP. WEP and WPA encrypt wireless traffic and prevent unauthorized people from joining your network. My router, by default, is accessed using HTTP instead of HTTPS, meaning the password is sent in the clear if WPA or WEP are disabled. Also, preferably, set a WPA2 key (not WPA or WEP). WEP is weak. WPA is better, but WPA2 is the best.--Best Dog Ever (talk) 04:30, 27 September 2010 (UTC)[reply]
Don't worry, I set both an admin and an access password. Thanks. (Still wondering about the sudden lack of wired admin access, though.) —Steve Summit (talk) 12:05, 27 September 2010 (UTC)[reply]

How to Run Emacs Org Mode Commands From the Terminal?

I have a file, foo.org. When I open it in emacs, I can run C-c C-e b and get an HTML exported file, foo.html. Is there any way to do this without opening emacs? I'm imagining something from the shell that looks like this:

emacs -orgmode -export -html foo.org > foo.html

Thanks if anyone knows the answer —Preceding unsigned comment added by CGPGrey (talkcontribs) 08:36, 27 September 2010 (UTC)[reply]

Short answer: Yes. It's Emacs! Longer answer: I know no simple way off-hand, but see the options --batch and --script in the Manual. --Stephan Schulz (talk) 09:15, 27 September 2010 (UTC)[reply]


I've gotten closer with this bit of script:
emacs -batch --eval '(progn (find-file "test.org.txt") (org-export-as-html "test.org.html"))'

But I'm still running into problems. The output "test.html" is blank, every time. I think there is a problem with the 'find-file' part. How do I tell 'find-file' exactly where the source file is? --CGPGrey (talk) 10:15, 27 September 2010 (UTC)[reply]

I see two small problems: First, you export as "test.org.html", but you say "test.html" is empty. Secondly, I think it should be --batch (note the two dashes). Moreover, do you possibly need to switch to org-mode? --Stephan Schulz (talk) 10:53, 27 September 2010 (UTC)[reply]
You were right about the single vs. double dash, thank you. The other bit was just my typo. Here's what happens now. I enter this command:
emacs --batch --eval '(progn (find-file "test.org") (org-export-as-html "test.html"))'

Emacs then spits out a .html file that looks like this:

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
               "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd"> 
<html xmlns="http://www.w3.org/1999/xhtml"
lang="en" xml:lang="en"> 
<head> 
<title>test</title> 
<meta http-equiv="Content-Type" content="text/html;charset=utf-8"/> 
<meta name="generator" content="Org-mode"/> 
<meta name="generated" content="2010-09-27 14:43:09 BST"/> 
<meta name="author" content="Grey"/> 
<style type="text/css"> 
 <!--/*--><![CDATA[/*><!--*/
  html { font-family: Times, serif; font-size: 12pt; }
  .title  { text-align: center; }
  .todo   { color: red; }
  .done   { color: green; }
  .tag    { background-color:lightblue; font-weight:normal }
  .target { }
  .timestamp { color: grey }
  .timestamp-kwd { color: CadetBlue }
  p.verse { margin-left: 3% }
  pre {
	border: 1pt solid #AEBDCC;
	background-color: #F3F5F7;
	padding: 5pt;
	font-family: courier, monospace;
        font-size: 90%;
        overflow:auto;
  }
  table { border-collapse: collapse; }
  td, th { vertical-align: top; }
  dt { font-weight: bold; }
  div.figure { padding: 0.5em; }
  div.figure p { text-align: center; }
  .linenr { font-size:smaller }
  .code-highlighted {background-color:#ffff00;}
  .org-info-js_info-navigation { border-style:none; }
  #org-info-js_console-label { font-size:10px; font-weight:bold;
                               white-space:nowrap; }
  .org-info-js_search-highlight {background-color:#ffff00; color:#000000;
                                 font-weight:bold; }
  /*]]>*/-->
</style> 
<script type="text/javascript"> 
<!--/*--><![CDATA[/*><!--*/
 function CodeHighlightOn(elem, id)
 {
   var target = document.getElementById(id);
   if(null != target) {
     elem.cacheClassElem = elem.className;
     elem.cacheClassTarget = target.className;
     target.className = "code-highlighted";
     elem.className   = "code-highlighted";
   }
 }
 function CodeHighlightOff(elem, id)
 {
   var target = document.getElementById(id);
   if(elem.cacheClassElem)
     elem.className = elem.cacheClassElem;
   if(elem.cacheClassTarget)
     target.className = elem.cacheClassTarget;
 }
/*]]>*/-->
</script> 
</head><body> 
<h1 class="title">test</h1> 
 
<div id="postamble"><p class="author"> Author: Grey
<a href="mailto:grey@Cobalt.local">&lt;grey@Cobalt.local&gt;</a> 
</p> 
<p class="date"> Date: 2010-09-27 14:43:09 BST</p> 
<p>HTML generated by org-mode 6.21b in emacs 23</p> 
</div></body> 
</html>

That is what I would get if I were exporting a blank org mode file to HTML. It turns out that no matter what I put in the (find-file "test.org") I'll get an HTML file the same as above, even if the stated org file does not exist. So clearly, the "find-file" part of the above is not working, but I can't figure out how to fix it. --CGPGrey (talk) 13:47, 27 September 2010 (UTC)[reply]

(-batch and --batch are equivalent, BTW.) If you write it like this, the file should be in the current directory, otherwise you have to specify the path. I don't have org mode installed to test it, but emacs -batch -eval '(progn (find-file "test.txt") (write-file "test2.txt"))' works as expected; you could try something like that to verify whether the problem is really with find-file.—Emil J. 13:54, 27 September 2010 (UTC)[reply]
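For later readers: the org 6.x series (the output above shows org-mode 6.21b under emacs 23) ships a dedicated batch entry point for this. A hedged sketch — it assumes org-mode is set up in your ~/.emacs, which --batch does not load on its own, and that your org version provides org-export-as-html-batch:

```
# batch-export test.org to test.html using org's own batch function;
# --load pulls in the init file (and thus org-mode), which --batch skips
emacs --batch --load=$HOME/.emacs --visit=test.org --funcall org-export-as-html-batch
```

If your org version lacks that function, the --eval/progn form discussed above, with an explicit (org-mode) call after find-file, is the fallback.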

tsql:sqlserver=plsql:oracle=?:mysql

t.i.a. --217.194.34.103 (talk) 13:31, 27 September 2010 (UTC)[reply]

It doesn't appear to have a name beyond "the extensions to, and subset of, SQL supported by MySQL". Not enough marketing types, I guess. --Sean 15:54, 27 September 2010 (UTC)[reply]

mac safari won't submit form

I do not have a mac, so I cannot easily figure this out. This page looks like a perfectly legitimate form to me, but mac safari will not submit it. Clicking the submit appears to do nothing at all (from what I've been told). Is there anyone with mac safari that can try it and, hopefully, see why the submit button is being completely ignored? -- kainaw 17:35, 27 September 2010 (UTC)[reply]

I tested it with Mobile Safari on an iPod touch, and that form has no submit box when viewed with it (it does in Firefox, so it does seem to be a browser issue). -- Finlay McWalterTalk 17:51, 27 September 2010 (UTC)[reply]
I suspect it's a browser compatibility problem because it's using <button type="submit">Submit</button> rather than <input type="submit" value="Submit">. If that's your website then changing it to the latter should work regardless of the browser (with the same visual appearance).  ZX81  talk 18:06, 27 September 2010 (UTC)[reply]
I have searched and I cannot find any reference that Safari won't handle <button type='submit'>. There is a cross-browser issue with <input type='submit'> in the css area. You can style a button, but you can't style input-submit separately from the general submit on all browsers. -- kainaw 18:08, 27 September 2010 (UTC)[reply]
That's correct about the CSS styling, but the page in question isn't doing that, hence my reply that it'll look exactly the same but will work across all browsers (I don't know about Safari, but IE6 has problems with <button>). Given what Mr.98 has just written below, though, moving the <form></form> and the initial <input> outside the <table> would probably fix the problem regardless of whether input/button is being used.  ZX81  talk 18:27, 27 September 2010 (UTC)[reply]
Safari 5.02's error console throws the following when it loads the page:
*<form> cannot act as a container inside <table> without disrupting the table.  The children of the <form> will be placed inside the <table> instead.   contact.html:171
*<input> is not allowed inside <tbody>. Inserting <input> before the <table> instead.  contact.html:173
I don't know if that helps or not. It seems to submit on my Mac (the Submit button is there, and when I put in a nonsense name and no e-mail, it told me, after submitting, that it wanted an e-mail address, which I assume means it worked). --Mr.98 (talk) 18:18, 27 September 2010 (UTC)[reply]
That was the problem. Placed the table inside the form instead of the form inside the table and it works now. -- kainaw 18:41, 27 September 2010 (UTC)[reply]
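For later readers, a minimal sketch of the fix described here — the form must contain the table, not sit inside it (the field names below are hypothetical, not copied from the page in question):

```html
<!-- Broken: <table><form>...</form></table> - per Safari's error console
     above, the form's children get hoisted out of the table. -->
<!-- Working: the form wraps the whole table. -->
<form method="post" action="contact.html">
  <table>
    <tr><td>Name:</td><td><input type="text" name="name"/></td></tr>
    <tr><td>E-mail:</td><td><input type="text" name="email"/></td></tr>
    <tr><td colspan="2"><input type="submit" value="Submit"/></td></tr>
  </table>
</form>
```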
I can confirm that Mobile Safari now sees the submit button and that it works okay. -- Finlay McWalterTalk 18:49, 27 September 2010 (UTC)[reply]
Thanks. I will have to keep in mind that some browsers don't like forms inside of tables. -- kainaw 18:53, 27 September 2010 (UTC)[reply]

Loose lines in LaTeX

Resolved

How (beyond \sloppy, which is often insufficient) does one persuade LaTeX to allow lines looser than it normally would so as to avoid overfull hboxes? (That is, persuade it to break the line before the offending word and justify the remainder, even when the result risks making rivers, rather than break it after and produce an overlong line even at the tightest spacing.) --Tardis (talk) 17:58, 27 September 2010 (UTC)[reply]

\sloppy works by setting \tolerance=9999, \emergencystretch=3em, and \hfuzz=\vfuzz=.5pt. You may try to make it even sloppier by manually setting bigger \emergencystretch. If you just need to tweak a particular piece of text by hand, you can insert \break at the desired break point.—Emil J. 18:07, 27 September 2010 (UTC)[reply]
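Concretely, a preamble along these lines — the values are illustrative, just widening the \sloppy defaults quoted above:

```latex
% looser than \sloppy: raise the badness tolerance and allow more
% emergency stretch before TeX gives up and overfills the line
\tolerance=9999
\emergencystretch=6em  % \sloppy uses 3em; widen to taste
\hfuzz=0.5pt
\vfuzz=0.5pt
```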
Whilst this doesn't directly address your problem, it will probably do the job. Iff you're using pdflatex (it doesn't work for dvilatex), type \usepackage{microtype} in the top-matter of your document. It subtly adjusts font widths and other parameters such as to improve the appearance and readability of your documents, usually alleviating all your overfull boxes.--Leon (talk) 18:09, 27 September 2010 (UTC)[reply]
Interesting — looking forward to trying this.
Now, what I have often wished for is a way to convince LaTeX to break before the long word and not justify the remainder. In other words, tell it that I would rather have the text not get to the right margin rather than go past it. I don't mean putting \\ at the end of particular lines — I mean telling the algorithm "prefer underfull hboxes to overfull ones, and don't go stretching them beyond reason". Is there any way to do that? --Trovatore (talk) 08:20, 28 September 2010 (UTC)[reply]
The algorithm does prefer underfull lines to overfull ones, in fact, it treats all overfull lines as infinitely bad. "Stretching beyond reason" is what underfull means. What you describe is a sort of ragged right typesetting, whose basic form can be done in LaTeX with \raggedright. It works by making \rightskip infinitely stretchable; if it's too ragged for you, you can instead give this parameter small finite stretchability, such as \rightskip=0pt plus 5pt. However, I don't think there is a way to tell the algorithm to right-justify as usual when possible, and only rag lines that would be too bad otherwise.—Emil J. 11:35, 28 September 2010 (UTC)[reply]
That's too bad. That last thing is exactly what I was looking for. --Trovatore (talk) 17:39, 28 September 2010 (UTC)[reply]
But wait, if overfull lines are infinitely bad, does that apply even to lines that contain an \mbox? It wasn't really a word that I wanted it to put on the next line, leaving the previous line ragged. It was something in an \mbox. I never did figure out how to do it. --Trovatore (talk) 17:40, 28 September 2010 (UTC)[reply]
Unfortunately, I'm using normal LaTeX, with a long tradition of EPS figures. --Tardis (talk) 18:45, 27 September 2010 (UTC)[reply]
If you ever want to switch (I effectively did a couple of years ago), ImageMagick supports convert Science.eps Science.pdf and mostly works well. --Stephan Schulz (talk) 07:40, 28 September 2010 (UTC)[reply]
Chances are that your TeX distribution comes with epstopdf.—Emil J. 12:09, 28 September 2010 (UTC)[reply]
Ah, I was being silly -- \sloppy was actually sufficient in this case, but I forgot to apply it to the \par in question. Thanks anyway for its internal details that might be useful later. Also thanks for \break; I had tried \newline and just \\, but they don't cause stretching. I wonder what the difference is between \linebreak and just \break? --Tardis (talk) 18:45, 27 September 2010 (UTC)[reply]
\newline and \\ basically do \hfil\break, so they will not right-justify the text, similarly to the end of paragraph. \linebreak is more or less a fancy LaTeX wrapper around \break (or actually around the \penalty primitive), the primary practical difference being that \break in vertical mode (i.e., outside paragraph) does a page break.—Emil J. 19:03, 27 September 2010 (UTC)[reply]

Safari Autofill data stealing

Does this [link removed by Ludwigs2 because he thinks it is dangerous, but it is not] currently work on Google Chrome? --Belchman (talk) 19:15, 27 September 2010 (UTC)[reply]

I imagine you are trying to indicate the Safari autofill exploit? (The proof of concept site demonstrating it that you linked to was apparently too much for Ludwigs2, and maybe he is right. Anyway it is linked to from the blog I mentioned; proceed at your own risk.) It sounds like it only works with Safari. Googling "Chrome autofill exploit" seems to indicate that it does not suffer from this bug. --Mr.98 (talk) 19:36, 27 September 2010 (UTC)[reply]
It wasn't too much, but I thought it better to remove it until I'd had a chance to examine the code. do you disagree with that? --Ludwigs2 19:38, 27 September 2010 (UTC)[reply]
Are you done examining it? It took me about ten seconds to find out that it's harmless. It doesn't send the data anywhere.--Best Dog Ever (talk) 20:15, 27 September 2010 (UTC)[reply]
No. It doesn't. By the way, you can always test stuff like this in VMware Workstation or VirtualBox if it's too risky to try on your main system.--Best Dog Ever (talk) 20:11, 27 September 2010 (UTC)[reply]
Yes, I'm done. thanks for asking. --Ludwigs2 20:20, 27 September 2010 (UTC)[reply]

Thanks to everyone but Ludwigs2, of course. --Belchman (talk) 21:09, 27 September 2010 (UTC)[reply]

You need to apologize to Ludwigs2 for that, mate. Comet Tuttle (talk) 21:21, 27 September 2010 (UTC)[reply]
Er.. what? That was in no way a personal attack. At all. He/she doesn't have to apologize for not thanking a user who didn't help answer their question 82.44.55.25 (talk) 22:48, 27 September 2010 (UTC)[reply]
No apology needed, so long as he doesn't continue. I've warned him on his talk page. --Ludwigs2 21:27, 27 September 2010 (UTC)[reply]

Access 2007 and too many fields

I have somewhere around 116-ish fields in a table. I went into design view to change some of them from integer to decimal (about 50 of them). Now that I want to leave design view, Access claims I have too many fields and it won't let me save. What's wrong? 138.192.58.227 (talk) 23:02, 27 September 2010 (UTC)[reply]

Here are Access 2007's vital statistics. Maximum fields per table: 255. So the problem is somewhere else. --Tagishsimon (talk) 00:29, 28 September 2010 (UTC)[reply]
The only other options it has for what could be causing the problem are too many locks (I have no locks) or a setting about indexing that I have not used. 184.97.159.46 (talk) 01:25, 28 September 2010 (UTC)[reply]
Not a helpful answer, but: this sounds like exactly the kind of finicky and annoying buggy thing that Access is very prone to. It seems to get internally corrupted pretty easily in very subtle and strange ways. It may not have anything logical related to what you did. I know: this isn't helpful. But you may have to just close without saving, reopen it, and try again. I write this not as an anti-Access person, but as someone who has struggled with its deficiencies and inherent bugginess for over a decade. --Mr.98 (talk) 02:15, 28 September 2010 (UTC)[reply]
Thanks for the suggestion. I ended up having to split it into a few different tables, which fixed it. I'm just not sure how. 184.97.159.46 (talk) 02:36, 28 September 2010 (UTC)[reply]
That's pretty much how it goes with Access a lot of the time. I've been using it for ages, it's always been like this, from Access 97 through Access 2007. I don't expect it will change anytime soon... --Mr.98 (talk) 15:36, 28 September 2010 (UTC)[reply]

ISP big brother?

Does my internet service provider know all the sites I visit, dirty or otherwise? Can it follow what I am doing, peering over my shoulder seeing what Wikipedia articles I visit and which natural titty model I'm leering over? And if so, do they have any public policy on purging my browsing history, or, say, if I was running for U.S. Senator could a bit of well slipped cash allow someone to get dirt on me? I find the prospect that they can see what I'm doing kind of scary.--162.83.168.103 (talk) 03:37, 28 September 2010 (UTC)[reply]

It is technologically trivial for them to monitor all of your unencrypted communications. As long as your encrypted communication links are not tampered with (a man-in-the-middle attack), and are otherwise done properly, those are not viewable. There are laws, federal and state, that protect against blatant monitoring without consent; however, you need to review the terms of your service to know what you've consented to. There are also exceptions in the law for monitoring used for some maintenance, and of course court orders, but I know of no large ISP in the U.S. that openly monitors its users' ingress/egress data as a general rule. They do store some information for limited times, and websites you visit may store information as well, but again, as a general surveillance measure, I doubt any large ISPs do. Shadowjams (talk) 03:45, 28 September 2010 (UTC)[reply]
Note that even if your communication with the website is encrypted, they can still probably guess what sites you're visiting by your DNS lookups. One option would be to run some sort of programme which randomly visits sites; I think those exist. If done properly, it may be hard for them to guess which sites you're actually visiting. Another is of course to ensure your lookups themselves are encrypted and use a DNS service you trust, or alternatively use something like Tor (but if you do, bear in mind exit nodes have far less qualms about monitoring what you do, and depending on how careless you are, i.e. how much unencrypted traffic you do and whether you give away details which can identify you in said unencrypted traffic, they may be able to work out who you are.) Nil Einne (talk) 03:57, 28 September 2010 (UTC)[reply]
Well that sucks! I guess it's pretty paranoid and maybe a bit daft to think anyone really cares what I am doing in particular, being random internet guy, but still I don't like it. But I'm not very tech savvy. I wouldn't know where to begin with setting up a randomizer or a "tor". Also, I don't think a randomizer would fool anyone if they were actually looking, because what they'd see is random, random, random, random, and then twenty related sites that are linked by topic because that's what I'm doing, so they'd know that was really me.--162.83.168.103 (talk) 04:01, 28 September 2010 (UTC)[reply]
Your ISP publishes their privacy policy, which should contain all the details about what they do and don't monitor. Usually it's available on their website. My ISP sent me a copy with one of my bills. It said they do not log any websites I visit, but then there was a whole section on all the stuff they can do if they get a government subpoena. Read yours and plan your Internet-security measures appropriately.--el Aprel (facta-facienda) 04:28, 28 September 2010 (UTC)[reply]
You could also only visit sites you don't want to be tracked visiting via access points owned by others, such as the wireless network of a coffee shop, or McDonalds, or even some public libraries. You may find disagreeable content filters though. The Masked Booby (talk) 04:45, 28 September 2010 (UTC)[reply]
Anonymity is a hard technological problem. The tor project talks about this quite a bit, which may be of some interest to you. Bruce Schneier has a good blog post about the distinction between anonymity and privacy. The former is not at all private, but the identity is hard to link to you. The latter is secure, but who you're talking to is clear. Think about interactions with random strangers... you buy a cup of coffee in an airport. You're anonymous, but you're not private... it's clear to everyone around you that you bought a cup of coffee. The internet's a little bit like that. Shadowjams (talk) 06:07, 28 September 2010 (UTC)[reply]
Well I'm not going to do anything. I'm not important enough for anyone to really want to follow or log or bribe or expose through my ISP. It's just the principle I don't like. Thanks for the info.--162.83.168.103 (talk) 08:48, 28 September 2010 (UTC)[reply]

submask

how does a submask work? please explain with an example... —Preceding unsigned comment added by Naveenkumarrocks (talkcontribs) 12:16, 28 September 2010 (UTC)[reply]

I assume you are referring to a "subnet mask" by merging the two words into "submask". Subnet is short for subnetwork. It is a part of a network. In most cases, the network is all of the addressable internet. Using IPv4, every node on the internet is addressed with four numbers in the form 1.2.3.4. Each number is 8 bits long (00000000 to 11111111, which is 0 to 255). There are reserved addresses, but that isn't important here. Due to the organization of the internet, addresses with sequential numbers are often on the same local network. All of the computers on a local network are considered a subnet. A subnet mask is a way of identifying which binary digits of the address are used to define the subnet. Let's assume that all of the first three numbers are required. So, we need 1.2.3.* since every node on our local network has the address 1.2.3.something. The last number has 8 bits, but the first two bits never change for our local network. That means that the digits that never change for our local network are 11111111.11111111.11111111.11000000. The 1's never change. The 0's do change depending on which node you are addressing on our local network. That is the subnet mask. Converting binary to decimal you get 255.255.255.192. Now, by doing a simple masking operation on an address, I can see that 1.2.3.16 is on the same local network as 1.2.3.4. I can also see that 1.2.3.49 is not. -- kainaw 14:05, 28 September 2010 (UTC)[reply]
And the point of subnet masks is that networking devices (e.g. PCs) know that they can reach other PCs on the same subnetwork using (normally) Ethernet addressing. So they perform an ARP function to find the Ethernet (MAC) address and send the frame direct to that address, rather than sending it to their local gateway for forwarding to the wider world. --Phil Holmes (talk) 16:36, 28 September 2010 (UTC)[reply]
If your subnet mask is 255.255.255.192, then 1.2.3.49 is going to be on the same local network as 1.2.3.4 and 1.2.3.16. However, 1.2.3.149 (for example) is not. JIP | Talk 16:55, 28 September 2010 (UTC)[reply]
Correct. I was sure I had at least one major typo, but I didn't catch it on a read-through. -- kainaw 17:02, 28 September 2010 (UTC)[reply]
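The masking operation described above is easy to sketch in Python — a hedged illustration using the example addresses from this thread, with JIP's correction included:

```python
def ip_to_int(ip):
    """Pack a dotted-quad IPv4 address into a 32-bit integer."""
    a, b, c, d = (int(x) for x in ip.split("."))
    return (a << 24) | (b << 16) | (c << 8) | d

def same_subnet(ip1, ip2, mask):
    """Two addresses share a subnet iff their masked bits are equal."""
    m = ip_to_int(mask)
    return ip_to_int(ip1) & m == ip_to_int(ip2) & m

# 255.255.255.192 keeps the top 26 bits, so hosts 1.2.3.0-63 share a subnet
print(same_subnet("1.2.3.4", "1.2.3.16", "255.255.255.192"))   # True
print(same_subnet("1.2.3.4", "1.2.3.49", "255.255.255.192"))   # True (per the correction)
print(same_subnet("1.2.3.4", "1.2.3.149", "255.255.255.192"))  # False
```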

More on facebook login

I asked a question a few days ago. Since then I've worked out that href="http://www.facebook.com/login.php?email=myname%40mydomain.com" gets halfway to doing a login. I tried adding &password=abcdefgh but that didn't fill in the password field. Anyone know how to achieve that? -- SGBailey (talk) 13:00, 28 September 2010 (UTC)[reply]

That can only be answered by people who know how that page is implemented; there's no way to see what parameters a particular page can take. My bet is that no such parameter exists because there's no secure way to use it. Publishing such a link would involve publishing the account's password. Using such a link without publishing it is probably unsafe, too, since the contents of URLs are not treated with as much care as form variables in POSTed forms, and for that, there's the "remember me" option and browser-saved passwords. I bet there are browser extensions for people who want to juggle multiple saved passwords. Paul (Stansifer) 13:49, 28 September 2010 (UTC)[reply]
If it has to be an HTTP POST, you could take this code and wrap it into a bookmarklet. --Sean 15:27, 28 September 2010 (UTC)[reply]

DPI in Irfanview

I've been looking at some jpg images in Irfanview and pressing the 'I' key for information. I've noticed that they have different values for DPI, which I assume is dots per inch. After changing the DPI, the image does not appear to change in size. Why do images have different DPI, and what is its significance? Does this mean that images with high values for DPI could be enlarged yet still have the same amount of detail compared with low DPI images? Thanks

Supplementary question - how do I get images to open by default in Irfanview rather than in the "Windows Picture And Fax Viewer"? I use WinXP. Thanks again. 92.28.249.130 (talk) 14:16, 28 September 2010 (UTC)[reply]

DPI is used primarily for printing (I can't think of any other use). It is a printing-size preference. Do you prefer the image to be printed at 30 DPI or 300 DPI? For print media, DPI is important. When sending images around in JPG format, DPI is used to quickly decide how large an image can be printed before it gets pixelated. If you aren't in print media, you can ignore it, since it doesn't affect the image on the computer screen. As for the default application in XP, I remember it being: hold down shift and right-click on the file. You will see the new option "Open with". Find the program in the "Open with" dialog and check the "always open with this program" box when doing so. -- kainaw 14:42, 28 September 2010 (UTC)[reply]
DPI in the sense that you are seeing there is an internal value that relates only to a calculation of what you'd ideally want the output size of a given set of pixel dimensions to be. There are lots of attempts to explain it in more practical terms in the Reference Desk archives, other than the Dots per inch article we have. Explaining it can be a little tricky if you are not used to thinking about images for the purposes of printing quality (which you presumably are not, since you are asking about DPI in the first place).
The short story is that metadata DPI settings don't have anything to do with the pixel dimensions of the image. They have to do with how it is rendered on an output device. Monitors generally reproduce images at a 1:1 pixel ratio (which can vary in real-world DPI, but 72 and 96 dpi are usually the values you use in estimating), so changing the DPI setting won't change how it looks on screen. What matters in the end is the purpose of your output image. If you have a 300 pixel by 300 pixel image, and you print it out on something that requires 300 DPI to look "good", it will only print out "good" at 1" by 1". If you try to print it out at 2" by 2" it will be twice as poor (150 dpi). To figure out how much "detail" an image would have when printed out, you first figure out how big you'd want the printout to be (e.g. 5 inches across), then work backwards from an ideal DPI for the device (300 dpi is pretty standard as a minimum threshold for things looking OK, so that would mean your image would need to be at least 1500 pixels across; depending on your output device, you might want many more pixels than that). You can set the internal DPI of an image to any arbitrary amount, but it doesn't affect the total pixels. So our 300 pixel by 300 pixel image might have an internal DPI setting that says it is meant to be 150 dpi (and thus could be printed out at 2 inches by 2 inches), or it could have an internal DPI setting that says it is meant to be 3000 dpi (and thus could be printed out at a maximum length of 0.1 inches on each side). None of that would change the number of pixels in the image, just how it is processed by a printer. The value of the DPI setting of a particular image does not, by itself, tell you anything about the amount of detail in the image; that's still always going to be in the pixel count. --Mr.98 (talk) 17:39, 28 September 2010 (UTC)[reply]
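The arithmetic in the explanation above is simple enough to sketch, using the numbers from the 300-pixel and 5-inch examples:

```python
def print_size_inches(pixels, dpi):
    """Physical length a row of pixels occupies at a given print density."""
    return pixels / dpi

def pixels_needed(inches, dpi):
    """Pixels required to print a given length at a given density."""
    return inches * dpi

# a 300-pixel-wide image at 300 dpi prints 1 inch wide; at 150 dpi, 2 inches
print(print_size_inches(300, 300))  # 1.0
print(print_size_inches(300, 150))  # 2.0
# a 5-inch print at 300 dpi needs at least 1500 pixels across
print(pixels_needed(5, 300))        # 1500
```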

So in short, the DPI figure only affects the image when printed, and is not used when displayed on a computer screen? 92.24.188.89 (talk) 18:10, 28 September 2010 (UTC)[reply]