
Wikipedia:Reference desk/Computing


: If the phone CPU itself is inoperable, the trouble would be getting at the data stored in the flash memory. The phone doesn't contain a readily removable flash card, which you could extract and plug into another phone or into an adapter. Looking at disassembly videos, it seems the phone has a Hynix H26M44001CAR part, which appears to be a [[ball grid array]] package implementing [[MultiMediaCard#eMMC|eMMC]] on a single chip (I can't be sure, as Hynix don't make that part any more, and are very unforthcoming about info or datasheets). That's surface-mounted to the phone's circuit board. Prising it off would destroy it, and because it's BGA this means all the connectors are buried underneath it in a very inaccessible way. I'm sure a specialist data-recovery company could either un-mount the device or could figure out the manufacturing test points on the circuit board and access the device that way (assuming the pool didn't break it too). But that's an exceptionally specialised, technical, and time-consuming task, which would surely cost an eye-watering sum. -- [[User:Finlay McWalter|Finlay McWalter]]'''ჷ'''[[User talk:Finlay McWalter|Talk]] 19:41, 22 June 2012 (UTC)

== Monitor jitter ==

I'm using an ATI Radeon Xpress 200 card and a Dell E193FP monitor. I've tried various screen resolutions/refresh rates but still get noticeable horizontal jitter. Any suggestions as to what the cause is?--[[Special:Contributions/92.25.110.216|92.25.110.216]] ([[User talk:92.25.110.216|talk]]) 19:45, 22 June 2012 (UTC)


Welcome to the computing section of the Wikipedia reference desk.
Want a faster answer?

Main page: Help searching Wikipedia

   

How can I get my question answered?

  • Select the section of the desk that best fits the general topic of your question (see the navigation column to the right).
  • Post your question to only one section, providing a short header that gives the topic of your question.
  • Type '~~~~' (that is, four tilde characters) at the end – this signs and dates your contribution so we know who wrote what and when.
  • Don't post personal contact information – it will be removed. Any answers will be provided here.
  • Please be as specific as possible, and include all relevant context – the usefulness of answers may depend on the context.
  • Note:
    • We don't answer (and may remove) questions that require medical diagnosis or legal advice.
    • We don't answer requests for opinions, predictions or debate.
    • We don't do your homework for you, though we'll help you past the stuck point.
    • We don't conduct original research or provide a free source of ideas, but we'll help you find information you need.



How do I answer a question?

Main page: Wikipedia:Reference desk/Guidelines

  • The best answers address the question directly, and back up facts with wikilinks and links to sources. Do not edit others' comments and do not give any medical or legal advice.


June 17

Free Mac program for converting PAL to NTSC?

Well, the headline says it all. I'd like a recommendation for a free Mac program to do this. Thanks in advance.--108.46.98.134 (talk) 04:59, 17 June 2012 (UTC)[reply]

You could use FFmpeg*, but… I can’t stress enough how much of a waste of time it is to convert to/from PAL/NTSC/etc. or author optical media in this day and age. ¦ Reisio (talk) 06:57, 17 June 2012 (UTC)[reply]

You can stress it all you want but you're wrong. The reason you're wrong is because you are not in my shoes. I'm sure you're right as to yourself. I like using my 6-disc carousel DVD player, which cannot play PAL. I do not want to stretch a cord from my computer to my gigantic older TV. I hate watching videos on my computer. The software I use to burn DVDs will re-encode to NTSC, but it takes forever and I can only convert one disc at a time, so with separate software I can convert the files I am planning on burning first, and do so all at once, and then the actual burning will be ever so much faster. It is not a waste of time for me.--108.46.98.134 (talk) 17:25, 17 June 2012 (UTC)[reply]

If you spent maybe $25-50 you could get a small computer to use as a media center and connect it more or less directly to your TV. ¦ Reisio (talk) 04:36, 18 June 2012 (UTC)[reply]
Yes, and I don't want to do any such thing. You obviously have a lot of knowledge about computers and software and related matters, and it's nice that you answer questions, but it would be better if you simply answered the question asked and didn't package it with prescriptive and unsolicited pronouncements. By the way, the next time you get a question like this one, if ever, recommend HandBrake instead, which I'm now using for this purpose. Far more targeted and user-friendly for a person who is not a computer expert, which you should assume people asking here are not.--108.46.98.134 (talk) 08:37, 20 June 2012 (UTC)[reply]

Different browsers render same web page differently

One critical issue I have encountered when surfing the web is that different web browsers (specifically, different layout engines with exactly the same web page rendering functionality, including fonts, HTML, CSS, SVG and other standards, fallback when a browser cannot render correctly, and font size, image size, and similar things) render the same web page differently. What is the cause of this phenomenon? 123.24.124.142 (talk) 08:57, 17 June 2012 (UTC)[reply]

"…different web browsers…render…differently… What is the cause of this phenomenon?" They’re different. ¦ Reisio (talk) 10:51, 17 June 2012 (UTC)[reply]
Several (somewhat related) reasons:
  • there are several standards for web content, and some browsers support different ones
  • browser makers don't wait until they've got all the support for a given standard working before they release a product with that support - which means different browsers support different subsets of the standards that they do support
  • the web browsers have bugs
  • the standards are ambiguous; eventually browser manufacturers figure out where these ambiguities lie, and often converge on the same resolution - but not immediately, and not always
  • quite a bit of stuff is left up to the browser to decide (font choices, colour specifics) or varies between systems (screen sizes, installed fonts); good web developers understand this and don't make assumptions, but instead test on a range of browsers on a range of systems to make sure things work as they expect. But, as in most fields, a depressingly large number of people aren't very good, or very thorough, at their job.
  • web browsers support new (pre-standardisation) features; when web developers use these they're knowingly inducing a difference between how the site will work on one browser versus another
  • users have settings which alter how the page renders (user stylesheets, zoom levels, text zoom settings)
  • users install addons which mutate the page as it loads (e.g. ad-blockers, greasemonkey scripts)
People sometimes say that making a site that works and looks good on the huge range of installed browsers, from IE5 to Chrome to lynx to screen readers to iPads and Android phones, is like nailing jelly to a tree. It is, but it's a jelly tree, and the nails are jelly too. -- Finlay McWalterTalk 11:11, 17 June 2012 (UTC)[reply]
I'd also just want to note that if the pages themselves aren't fully standards-compliant (often the case), let alone if they rely on hacks and workarounds, then you've added another unknown into the mix. Some browsers interpret such sites via quirks mode, which is unreliable. --Mr.98 (talk) 12:54, 17 June 2012 (UTC)[reply]
But the essential fact is that HTML was never designed to be rendered in a single specific way -- unlike formats such as PDF that specify every detail of rendering. Looie496 (talk) 16:49, 17 June 2012 (UTC)[reply]

Where did Google Instant acquire search suggestions?

Google Search offers a function named Google Instant that shows search predictions as you type into the search box. My question is, where did Google acquire the words, names and phrases used in search suggestions? And their frequency of use, as more frequent searches are shown nearer the top? 123.24.124.142 (talk) 09:06, 17 June 2012 (UTC)[reply]

Previous searches for the same things. Dismas|(talk) 09:10, 17 June 2012 (UTC)[reply]

Two computers-- One user

I have two computers, one faster than the other, connected via a LAN and thence to the internet. What is the best way to use these two computers with one user? Can I make one into a server? Which one? Can I use one to back the other up? I'm lost for ideas. Please help--78.148.133.146 (talk) 16:50, 17 June 2012 (UTC)[reply]

If you're lost for ideas, why waste your time looking for a solution to a problem that doesn't exist? Just use the good one. Looie496 (talk) 16:56, 17 June 2012 (UTC)[reply]
I would like to use the facilities and storage of both computers and some programs are difficult to port across to the other computer.--78.148.133.146 (talk) 17:07, 17 June 2012 (UTC)[reply]

If you just want to control the slower computer via the faster computer, Remote Desktop Services is good. The slower computer wouldn't even need a screen or keyboard anymore, and you can access pretty much all of its functions over the ethernet connection except graphics-intensive things like watching videos or playing video games. AvrillirvA (talk) 17:37, 17 June 2012 (UTC)[reply]

Lots of things to do with old computers. Projects could be as varied as turning it into a file server, a media computer, or scrapping it for parts (putting its hard drive into your better computer). Lots of possibilities. Seems to me the question is, what do you want to do with it? Shadowjams (talk) 21:07, 17 June 2012 (UTC)[reply]
My suggestion is to use one as your internet computer, and the other as a standalone, where you can do taxes and record private info like bank account numbers and such that would be bad if they got out on the web. The older one is probably the best choice for the standalone. If you do this you will need to disconnect the standalone computer both from the Internet and the LAN, to make it secure. You should also avoid transferring files from/to that standalone computer using flash drives and such. StuRat (talk) 07:25, 18 June 2012 (UTC)[reply]

Power management in Fedora Linux

Yesterday I changed the Power management preferences on my Fedora 14 Linux system, changing "Put computer to sleep when inactive for" from "Never" to "10 minutes", and checking "Spin down hard disks when possible". Sure enough, after 10 minutes of not using the computer, it went to sleep. Then I wanted to use it again. I moved the mouse. Nothing happened. I pressed a few keys. Nothing happened. I pushed the power button briefly. Nothing happened. I held the power button down for a few seconds. The computer shut down, and when I pressed the power button again, it rebooted. I changed the settings back to "never put the computer to sleep" and "don't spin down hard disks". The computer doesn't go to sleep now, but the hard disks still spin down when left unused. This "putting to sleep" isn't exactly helping if I can't wake it up again without rebooting it. Am I doing something wrong here? JIP | Talk 20:08, 17 June 2012 (UTC)[reply]

You need to talk to Fedora users, but a number of things have to work together for this, including: the BIOS, the hardware itself, the power management software, the Linux configuration (or probably in Fedora: what modules are loaded). ¦ Reisio (talk) 04:39, 18 June 2012 (UTC)[reply]
Hibernation has historically been flaky for Linux. You could try upgrading to Fedora 17; no guarantees of course. What sort of computer is it? Looie496 (talk) 21:01, 17 June 2012 (UTC)[reply]
Without the proper configuration and drivers for the hardware, any OS will fail at this. ¦ Reisio (talk) 04:39, 18 June 2012 (UTC)[reply]

Why is HTML5 YouTube so slow?

So around a month ago I opted into the YouTube HTML5 trial. I figured, this is where the Internet is heading so I might as well get a jump-start on it. It massively slowed down video playback, though. To the point that I thought my computer was failing, because I couldn't even load 720p videos without seconds of lag at a time. There were actually various issues - my stream had to completely re-buffer when I went into full-screen mode, it wouldn't load more than a few seconds of advance footage, and when I jumped backward in the video timeline it lost any existing buffer. Not to mention the right-click features for copying video URLs and the like didn't work. This had nothing to do with my Internet connection.

When I turned off the trial and went back to Flash everything was speedy and normal again. Videos play smoothly and quickly. This seems incomprehensible to me, especially given how much Apple Inc. rails against Flash and how resource-consuming it can be. I tend to agree that it's a dying platform. So why is Flash still the best option for Web video? Is this a matter of Google not making the most efficient code? The standard still being developed? CaseyPenk (talk) 22:36, 17 June 2012 (UTC)[reply]

Confirm: the performance is inferior (the same video needs significantly more CPU on the HTML5 trial at present). As you guess, the probable cause is that Adobe/Flash has been tweaked for ~decades with a genuine (commercial) need to improve performance, whilst HTML5 video codecs are likely still in grad school (or whatever analogy you want). Actually verifying that isn't easy - but I guess most people would reach the same conclusion based on similar experiences over the years. As for what Apple says - they have their reasons no doubt (but see http://www.youtube.com/watch?v=b2F-DItXtZs ) - but also see Comparison of HTML5 and Flash#Performance, and Adobe_Flash#Performance. I think if you have a Windows machine you will be getting a different performance experience from Flash.
Oh, and depending on what codec the HTML5 video is using, it might actually need more decoding - i.e. be intrinsically slower - but I think the real answer is above.
I didn't appear to get the buffer problem. Oranjblud (talk) 00:44, 18 June 2012 (UTC)[reply]
I should note that what Apple is talking about may be Flash performance on Flash-built websites (which is often a 'dog') - that's a different kettle of fish to just an embedded Flash video. Oranjblud (talk) 11:56, 18 June 2012 (UTC)[reply]

It's simply a matter of those involved not being very thorough (or that the technology is young, if you like). You'd probably find that a plugin system made by people who know what they're doing (like VideoLAN) has no issue buffering video properly. Do not delude yourself into thinking Apple blocked a video format from being included in HTML5 for any reasons other than their own interests; every claim they have used has been debunked (they had to think up a new one each time). ¦ Reisio (talk) 04:46, 18 June 2012 (UTC)[reply]

"blocking" or "redirecting" access to an aplication.

Is there any way to redirect the files a program wants to edit/view/create/delete? I have tried Sandboxie but I found that Sandboxie would leave hidden registry entries in my computer (when you uninstall and reinstall the program, the program knows it has been installed before and opens a window to buy their license). For example, I'd like to make a portable application out of any program by redirecting its registry settings to a folder and keeping the whole program scoped. What should I use, or do? --190.158.212.204 (talk) 23:44, 17 June 2012 (UTC) Ps: I know a virtual machine is a great solution when you want full protection against an application, but I just need control of the files the application uses.[reply]

For what it's worth, Sandboxie's license allows you to use it indefinitely without paying. (See the licensing FAQ, first question.) I don't know of any other way to do this that's as easy as Sandboxie. With any other approach you would waste more time getting it to work than you'd waste waiting for Sandboxie's nag screens. -- BenRG (talk) 16:16, 18 June 2012 (UTC)[reply]
Sounds more like the application they have installed in Sandboxie remembered it was installed, and they are trying to use Sandboxie to run a program for longer than its trial period to avoid licensing it. I'm not sure if we're supposed to answer these sorts of questions. 209.131.76.183 (talk) 11:52, 19 June 2012 (UTC)[reply]
You could attempt to create a portable version by using something akin to ThinApp on a VM. Alternatively, you could create a limited user and run the application under that user. These days, however, virtualization is preferred.Smallman12q (talk) 15:36, 20 June 2012 (UTC)[reply]


June 18

List of Ubuntu releases says that Ubuntu releases are named in alphabetical order. How will Ubuntu developers choose codenames when they run out of letters in the alphabet? 117.5.4.42 (talk) 04:36, 18 June 2012 (UTC)[reply]

I doubt they're really concerned about it; my guess is they'll start back at the beginning. ¦ Reisio (talk) 04:48, 18 June 2012 (UTC)[reply]
I see your guess and raise you an "Ärgerliche Ähre"! --Stephan Schulz (talk) 12:25, 18 June 2012 (UTC)[reply]
The official reference is here and as you can see no official decision has been made on that question by Ubuntu's sponsor, Canonical, yet. The leading suggestions seem to be to just start over again at "A" or else to use double letters such as Aalenian Aal or Aalenian Aardvark. - Ahunt (talk) 14:55, 18 June 2012 (UTC)[reply]
It'd be a mercy if they just did away with them. Most Ubuntu users don't even realize they're in alphabetical order, so what you have is two versions (number based and name based) and tons of (official, even) websites that only use one and not the other. ¦ Reisio (talk) 20:56, 18 June 2012 (UTC)[reply]
A well made point there, Precise Pangolin, oxymoronic ocelot, quixotic (insert some random name)... sheesh Virtualpractice (talk) 12:00, 21 June 2012 (UTC)[reply]

Hi. I use Google Chrome under Windows 7 and my router is SpeedTouch. What should I do to switch to IPv6? Does my router support IPv6? Do I need any new software or hardware? Is the switch done automatically? Help me, please. --41.129.120.207 (talk) 20:53, 18 June 2012 (UTC)[reply]

Go to http://test-ipv6.com/ to see if you have ipv6 support. If you don't, there's not much you can do except wait for your ISP to update their systems or change to an ISP that supports it. 2002:5CE9:401A:0:0:0:5CE9:401A (talk) 21:46, 18 June 2012 (UTC)[reply]

It tells me the following:

Your IPv4 address on the public Internet appears to be 41.129.120.207

Your IPv6 address on the public Internet appears to be 2001:0:4137:9e76:18fa:2c5f:d67e:8730

Your IPv6 service appears to be: Teredo

The World IPv6 Launch day is June 6th, 2012. Good news! Your current browser, on this computer and at this location, are expected to keep working after the Launch. [more info]

You appear to be able to browse the IPv4 Internet only. You will not be able to reach IPv6-only sites.

Your IPv6 connection appears to be using Teredo, a type of IPv4/IPv6 gateway; currently it connects only to direct IP's. Your browser will not be able to go to IPv6 sites by name. This means the current configuration is not useful for browsing IPv6 web sites. [more info]

Your DNS server (possibly run by your ISP) appears to have no access to the IPv6 Internet, or is not configured to use it. This may in the future restrict your ability to reach IPv6-only sites. [more info]

Your readiness scores

10/10 for your IPv4 stability and readiness, when publishers offer both IPv4 and IPv6

0/10 for your IPv6 stability and readiness, when publishers are forced to go IPv6 only --41.129.120.207 (talk) 21:55, 18 June 2012 (UTC)[reply]
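
For a quick supplementary check, a few lines of Python can tell you whether your machine can resolve and reach an IPv6 host at all. The host name below is just an example of a site published over IPv6; any IPv6-reachable host would do, and a Teredo-only setup like the one reported above may still fail at the name-resolution step:

    import socket

    # Quick check: can this machine resolve an AAAA record and open a TCP
    # connection over IPv6? The host name is only an example.
    host = "ipv6.google.com"
    try:
        family, socktype, proto, _, sockaddr = socket.getaddrinfo(
            host, 80, socket.AF_INET6, socket.SOCK_STREAM)[0]
        s = socket.socket(family, socktype, proto)
        s.settimeout(5)
        s.connect(sockaddr)
        s.close()
        print("IPv6 connectivity to", host, "looks OK")
    except (socket.gaierror, socket.error) as err:
        print("No usable IPv6 path to", host, "-", err)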

Using Python/Numpy/Scipy and Datetime objects

So I have just started learning and using python and I have a question. So I have a text file, something like

1990 10 11 21 15 0.0000
1990 10 11 21 20 0.0000
1990 10 11 21 25 0.0000
1990 10 11 21 30 0.0000

with a lot more rows. The columns are delimited by space representing year, month, day, hour, mins, and seconds. So what I want to do is end up with something like

dtdates = [datetime.datetime(1990,10,11,21,15,0),
datetime.datetime(1990,10,11,21,20,0),
datetime.datetime(1990,10,11,21,25,0),
datetime.datetime(1990,10,11,21,30,0)]

an entire array of datetime objects distributed over rows, with the columns being the arguments of datetime. First question: all of the arguments of datetime have to be integers, right? I am asking about the seconds because in the ASCII text file they are written as floats. Second, what is the fastest way (meaning runtime) to do this in Python? I could obviously use a naive double for loop, but that doesn't seem like a good idea for Python. I have thousands of rows to process like this, so dtdates at the end will be one giant array consisting of datetime objects. Thanks! - Looking for Wisdom and Insight! (talk) 22:50, 18 June 2012 (UTC)[reply]

Try this:
#!/usr/bin/python
import datetime

dtdates = []

for line in open('data','r'):
    year,month,day,hour,minute,second = line.split()
    second = float(second)
    ms = (second-int(second))*1000000 

    dtdates.append(datetime.datetime(int(year),
                                     int(month),
                                     int(day),
                                     int(hour),
                                     int(minute),
                                     int(second),
                                     int(ms)))
-- Finlay McWalterTalk 23:15, 18 June 2012 (UTC)[reply]
Incidentally, you can do the above with datetime.datetime.strptime too, but I did it this way so there'd be less "magic" for you to have to read about. -- Finlay McWalterTalk 23:19, 18 June 2012 (UTC)[reply]
Which would make the entire loop body read:
    dtdates.append(datetime.datetime.strptime(line.strip(), "%Y %m %d %H %M %S.%f"))
If you have lines that don't perfectly match the format you specified, both versions will probably barf, the latter less intelligibly. -- Finlay McWalterTalk 23:28, 18 June 2012 (UTC)[reply]

Thanks for being prompt but I just realized two things. First the text file has more columns on the right (a total of like 14 but I only need the first six for time) so I suspect the line split thing may not work. Second, I need to read in all columns (because I need them for more processing later) so now instead of reading the time from the text file and datetiming it, I do the following

mydata = numpy.loadtxt('myfile.txt',delimiter=' ')
time = mydata[:,0:6]

I think "time" would be an array of arrays? So now the question is, how to go from this above define "time" variable to "dtdates" defined way above? And lastly, is appending the only way to do it? Seems like there should be a faster more efficient approach? If I am reading the entire file, I can know its length so can't I preallocate an array with enough space to hold all the datetime objects and then fill them in? I just don't know the commands to do all this nicely. Thanks again! - Looking for Wisdom and Insight! (talk) 23:44, 18 June 2012 (UTC)[reply]

split returns a list containing however many values it has found (unless you tell it to split fewer); so give it more names to fill - if there are variable numbers of possible fields, you'll need to do some extra work based on the length of the list. Your program is IO bound, so worrying about arrays and preallocation (in whatever language you were using) is pointless. -- Finlay McWalterTalk 00:04, 19 June 2012 (UTC)[reply]
Is it I/O bound? Assuming 100 bytes per line, a raw read rate of 60 MB/s, and a 3 GHz CPU core, that's 5000 CPU cycles per line. A very quick test suggests that the call to strptime alone takes several times longer than that. Also, the input file might be cached in RAM. -- BenRG (talk) 17:39, 19 June 2012 (UTC)[reply]

Oh, and looking at the second edit, I would actually prefer the condensed version. So can we modify this condensed version to accommodate the changes I described above? It will be much, much faster (I know this from past experience in other languages... the classic debate of compiled versus interpreted languages). - Looking for Wisdom and Insight! (talk) 23:47, 18 June 2012 (UTC)[reply]

Don't optimise the performance of a program you haven't got running yet. Neither version works the way its superficial representation might suggest. Premature optimisation is a pointless waste of time. -- Finlay McWalterTalk 00:04, 19 June 2012 (UTC)[reply]
For the original problem without the extra columns you could write
    with open('data', 'r') as f:
        dtdates = [datetime.datetime.strptime(line.strip(), "%Y %m %d %H %M %S.%f")
                   for line in f]
However, I would be amazed if this ran measurably faster. The bottleneck here is strptime (or maybe I/O, but see above).
For your problem you can try
    dtdates, other_fields = [], []
    with open('data', 'r') as f:
        for line in f:
            year, month, day, hour, minute, second, remainder_of_line = line.split(None, 6)
            other_fields.append(remainder_of_line)
            # make a datetime object and append it to dtdates, as above
The second argument to split should match the number of commas on the left hand side. Or maybe this:
    with open('data', 'r') as f:
        for line in f:
            col1to6, col7, col8, ..., col14 = line.rsplit(None, 8)
            dtdates.append(datetime.datetime.strptime(col1to6.lstrip(), "%Y %m %d %H %M %S.%f"))
            # do whatever with the remaining columns
You might want to benchmark strptime against manual datetime construction. -- BenRG (talk) 17:39, 19 June 2012 (UTC)[reply]
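
For reference, here is a rough sketch of the numpy.loadtxt route mentioned above, assuming a whitespace-delimited file named myfile.txt whose first six columns are year, month, day, hour, minute and second (the filename and column layout are taken from the sample lines above, not from any tested file):

    import datetime
    import numpy as np

    # Assumption: whitespace-delimited file, first six columns are
    # year month day hour minute second, with second possibly a float.
    mydata = np.loadtxt('myfile.txt')   # loadtxt splits on any whitespace by default

    dtdates = []
    for row in mydata:
        second = row[5]
        micros = int(round((second - int(second)) * 1000000))
        dtdates.append(datetime.datetime(int(row[0]), int(row[1]), int(row[2]),
                                         int(row[3]), int(row[4]), int(second),
                                         micros))

    other_fields = mydata[:, 6:]        # whatever extra columns the file has

As noted above, appending to a Python list is cheap (amortised constant time), so preallocating space for dtdates is unlikely to make a measurable difference compared with the cost of parsing each line.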

Actually it isn't as bad as I thought. I got it working. Thanks again! - Looking for Wisdom and Insight! (talk) 18:25, 19 June 2012 (UTC)[reply]


June 19

Load runner

Hi,

I am recording a script for one of the applications. A unique ID was created after filling in one form; now I need to search using the ID which was generated. Can anyone suggest how to do it? Is it possible by correlation?

Regards, Swamy — Preceding unsigned comment added by 125.22.193.145 (talk) 10:33, 19 June 2012 (UTC)[reply]

I don't really understand the question. Apparently some application creates something (a "script" ?) with a unique ID associated with it in some way. Is this "script" a file, and is this unique ID the file name ? If so, you could just sort the files by file name and easily find the one you want. The more rigorous solution is to put them into a relational database, indexed by the unique ID. StuRat (talk) 17:54, 19 June 2012 (UTC)[reply]
If English isn't your native language, you might want to post your question in your own language, then we will translate it to English. StuRat (talk) 00:59, 20 June 2012 (UTC)[reply]

Digital camera resolution

I have a cheap little point-and-shoot digital camera (Nikon Coolpix L26). The specs say it has 16.44 million total (1/2.3-in type CCD) effective pixels in the image sensor. However, when I set it to a resolution with about that many pixels (4608×3456), it looks fuzzy. I don't seem to get any more resolution than if I set it to about 1600×1200. It rather looks like the 4608×3456 pic was upconverted from the 1600×1200 pic. So, what's going on ? Does the 16.44 million pixels include different pixels for different colors, so effectively it provides a lower number of full-color pixels ? StuRat (talk) 19:28, 19 June 2012 (UTC)[reply]

Welcome to the world of Pixel count. Precise optical parts are expensive to make, while Megapixels are cheap and impressive in the adverts, so the manufacturers supply more pixels than the lens can properly focus. I almost always crank down my modestly more expensive Nikon P-6000 from its nominal 13 Megapixels to 8, for smaller files and higher sensitivity. The change in fuzziness, if any, is undetectable. Expensive cameras tend to put the extra money mostly into optics, so they can make better use of those Megapixels. Jim.henderson (talk) 19:45, 19 June 2012 (UTC)[reply]
I see. But what specifically is wrong with the optics ? Is the image always slightly out of focus on the CCD ? Is this due to lens aberrations ? And is there any way to know what the real resolution of a camera is, before you buy it ? StuRat (talk) 20:01, 19 June 2012 (UTC)[reply]
(off-topic?) My camera has a CMOS 15.1 effective megapixel sensor. Yours use CCD sensors. Did they ever sort out which was better?--Canoe1967 (talk) 19:59, 19 June 2012 (UTC)[reply]
It's very similar to when you see amateur telescopes advertised as 800x magnification. It might technically be capable of that, but it will look rubbish! I imagine that the pixels are just "crappy" (technical term); pushing them to their resolution limit exposes that fact. As to how you can tell what the "real" resolution is, the problem is there is no such thing as "real resolution", it's all relative. You can try to work out a relative resolution by reading reviews from reputable sites, like dpreview dot com. But be warned, camera reviews can be almost a black hole; there is no objective standard, so you can literally spend eternity chasing after the "best camera". My advice is pick a brand you like and stick with it, pick the kind of camera you want and just buy the 'middle of the road' model (unless you're a pro, but then you wouldn't be asking for advice). These days you can't go wrong with that advice; most extra features are just gimmicks and most cameras these days will take a photo that is more than good enough. Gone are the days where you invest in ONE camera that will last you a lifetime; I think it's far more economical and practical to upgrade your camera regularly (I do it about every 3-4 years, usually when I go for an overseas trip). You will ALWAYS find a camera that has a slightly better feature or is slightly cheaper; make a decision and don't look back. Vespine (talk) 22:59, 19 June 2012 (UTC)[reply]
I should try a test with 2 lenses I got from eBay for mine: a Canon 80-200mm and a Vivitar 100-300mm. Set the camera to the same setting for each at 100, 150, and 200mm, then take 6 images and compare them for lens quality. What would be the best subject - a phonebook page in bright indirect light?--Canoe1967 (talk) 23:15, 19 June 2012 (UTC)[reply]
I don't think it's the lens, unless the blurring you're talking about is chromatic aberration. There are two things that reduce the effective resolution of all digital cameras that have nothing to do with the lens:
  1. By convention, a single pixel in a digital camera is a single sensor behind a colored filter usually arranged in a Bayer pattern, whereas a single pixel on a computer monitor has red, green, and blue subpixels, so 16.44 million digital camera pixels is only as many samples as 5.48 million monitor pixels. That doesn't degrade the resolution by a factor of 3, but it does degrade it.
  2. The tinier the sensor, the less light it collects and so the noisier the output. All P&S cameras denoise the image as part of the postprocessing, which also degrades the resolution, because high-frequency detail can't be distinguished from high-frequency noise.
-- BenRG (talk) 23:51, 19 June 2012 (UTC)[reply]
1) Why doesn't it degrade it by a factor of 3 ? (In my camera, it seems to be degraded by approximately a factor of 8.)
2) Wouldn't the total light gathering ability of the camera depend on the diameter of the lens, not the sensor ? StuRat (talk) 00:51, 20 June 2012 (UTC)[reply]
1) You still have the full spatial resolution of 16 million separate pixels, and variations in the three color types are highly correlated in practice. Just as an example, you could use the 8 million G pixels as the luminance and then colorize it using the R and B pixels. That would lose only half the resolution, not two thirds. I think actual postprocessing algorithms do better than that. 2) I meant that for a given amount of light hitting the whole sensor, as the pixel count increases, the amount of light hitting each pixel decreases. I was calling each individual pixel a "sensor", which probably isn't the right terminology.
I crossed out the claim that the lens quality doesn't matter much because I don't really know. -- BenRG (talk) 22:07, 20 June 2012 (UTC)[reply]
1) That wouldn't work, as that approach would mean any spot devoid of green would come out as black, when it's really bright red, blue, or purple. StuRat (talk) 22:23, 20 June 2012 (UTC)[reply]
The "red", "green" and "blue" channels in a Bayer array all cover a large range of the visual spectrum. They aren't like the RGB subpixels on a display, which really are those perceptual colors. Deriving luminance only from the "green" channel probably isn't a great idea, but it would work better than you suggest. The human visual system actually uses only the L and M ("red" and "green") cones to calculate luminance. -- BenRG (talk) 23:59, 20 June 2012 (UTC)[reply]

I would think a system of testing cameras to find their actual resolution could be devised. Here are my thoughts:

A) Photograph a series of grids, with finer and finer resolutions, until you get down to the resolution where each line and gap in the grid will be one pixel wide in the digital image. This series of grids could be generated on a computer monitor (but not a type with a fixed pixel count, like LCD; perhaps an old CRT would be best).

B) Take the digital output and feed it into a program that counts the number of lines in the grid. If it's able to correctly count them, then the camera can handle that resolution.

C) Repeat this process with different cameras, settings, and grid sizes, until you have a chart listing the maximum effective resolution of every camera.

Should I contact Consumer Reports to convince them to do such a test ? :-) StuRat (talk) 00:48, 20 June 2012 (UTC)[reply]
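
Step B of that proposal is easy to prototype. Below is a minimal sketch using Pillow and numpy; the file name grid.jpg, and the assumptions that the grid lines are dark, roughly vertical, and all crossed by the middle scanline, are hypothetical:

    from PIL import Image
    import numpy as np

    # Minimal sketch of step B: count the dark vertical lines crossed by the
    # middle scanline of a test shot. Assumes 'grid.jpg' shows dark lines on a
    # light background and the grid is roughly upright in the frame.
    img = np.asarray(Image.open('grid.jpg').convert('L'), dtype=float)
    row = img[img.shape[0] // 2]             # middle horizontal scanline
    dark = row < row.mean()                  # crude threshold
    transitions = np.count_nonzero(dark[1:] & ~dark[:-1])   # light-to-dark edges
    line_count = transitions + (1 if dark[0] else 0)
    print(line_count, "lines detected on the middle scanline")

If the count matches the number of lines actually on the chart at a given pitch, the camera resolves that pitch; the finest pitch that still counts correctly gives the effective resolution.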

It's a combination of sensor and lens. One obvious thing to notice is that a lens is round but a sensor is rectangular, so the lens has to "overshoot" the sensor to some degree. In cheap cameras (as a rule of thumb), the overshoot will be as small as possible; in more expensive cameras, there will be a bit more overshoot. Since the edges of a lens require tighter tolerances, the corners of an image typically suffer most from this effect. At least one good review site includes an image in its reviews which shows fine lines to determine the "effective resolution". Vespine (talk) 02:26, 20 June 2012 (UTC)[reply]
The messed up corners can be fixed by a bit of cropping. StuRat (talk) 02:32, 20 June 2012 (UTC)[reply]
  • In a perfect world there should be a site that has images from all cameras taken of the same test pattern. Someone should make an .svg one and upload it perhaps? I have also seen a line pattern that is angled away from the camera to set the accuracy of the auto-focus. Take a picture of the center of the lines and then see which numbered line is actually in focus (they are numbered by distance). With many high end cameras you can input that number to get perfect focus each time.--Canoe1967 (talk) 15:40, 20 June 2012 (UTC)[reply]
I don't believe that you can quantify optical quality in any simple way, because image quality changes based on aperture, ISO, focus, shutter speed, and many other things. If you give a camera enough light to work with, it doesn't even need a lens (pinhole camera). It might be tempting, therefore, to just try lower and lower light levels and measure how grainy things get, but lots of good photography requires opening up the aperture and using shallow depth-of-field, and for that you suddenly become interested in what aperture settings it has and the nature of its bokeh, and you don't care at all about high-ISO performance. Paul (Stansifer) 18:52, 20 June 2012 (UTC)[reply]
I realize that it's complicated, but the "maximum resolution achievable by a camera under ideal conditions" is something I would certainly be interested in knowing. I believe that's what many consumers think the megapixel count is giving them, but clearly, it is not. StuRat (talk) 18:58, 20 June 2012 (UTC)[reply]
Do you mean something like an 'acceleration rating' for cars? I used to have a 1975 Chev that cruised at 105mph and peaked at 125+. The 0-60 was crap because it was so heavy. Raw horsepower can't be used to judge but horsepower to weight ratio can.--Canoe1967 (talk) 21:46, 20 June 2012 (UTC)[reply]
Something like that, yes. StuRat (talk) 22:19, 20 June 2012 (UTC)[reply]
Everything above is wrong. Jim.henderson came closest, but it's a case of a fundamental physical limit, rather than Nikon cutting costs on the optics. The problem is actually quite simple: you're hitting the diffraction limit. For any given aperture, there's a minimum size of spot that a lens can focus light to, the Airy disk, and if the sensor elements are smaller than the disk, light will spill over into adjacent sensor elements and give a fuzzy appearance. In your case, assuming an aperture of f/8 (common for outdoor photographs), the Airy disk is three pixels wide. --Carnildo (talk) 01:30, 21 June 2012 (UTC)[reply]
That would be about right, because 3×3 pixels would be 9, and I seem to see about an 8-fold degradation of the resolution relative to the total number of pixels (if you consider that a circle with a diameter of 3 has an area of 7, you also get close to that). Can you tell me how you determined the 3 pixel width ? This camera has the f/8 aperture and also an f/3.2 aperture. How many pixels wide would the airy disk be for that setting ? StuRat (talk) 02:09, 21 June 2012 (UTC)[reply]
I ran the numbers through the advanced diffraction limit calculator at [1]. Since the calculator doesn't have a setting for a 1/2.3" sensor, I used the 1/2" sensor setting instead. An aperture setting of f/3.2 will still be diffraction-limited, but less visibly so: the loss of detail from a 1.5-pixel Airy disk is on the same scale as the loss of detail from interpolating the Bayer filter and loss of detail from noise reduction. --Carnildo (talk) 22:26, 22 June 2012 (UTC)[reply]
Thanks, so it sounds like I could get away with half or a third of the total megapixel count without visible graininess, then, at f/3.2 ? If so, that's a lot better than 1/8th, which was all I got before. StuRat (talk) 00:29, 23 June 2012 (UTC)[reply]
Just did another test with the pill bottle. Using the brightest light I have, and no zoom, I was able to get a 1/500th second exposure with f/3.4. The maximum resolution seems to occur around 8 megapixels (2448x3264 pixels), so right about what we expected. I can see the printing dots and make out misalignments in the colors (which I assume are actually on the bottle), that I can't see with the naked eye, so I'm happy with that. So, looks like this camera is only good for close-up pics of inanimate objects. StuRat (talk) 03:42, 23 June 2012 (UTC)[reply]
The usual solution to an image that's too blurry at full size is to scale it down. If you take one of the full-size images from that camera and scale it down to 4 megapixels (so, a scale factor of 2), it should look reasonably sharp. There are other post-processing steps you can try to make the image look sharper as well, such as unsharp masking. --Carnildo (talk) 20:49, 24 June 2012 (UTC)[reply]
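
For anyone who wants to reproduce the arithmetic, a back-of-envelope version of the diffraction calculation is below. The 550 nm wavelength and the roughly 6.2 mm sensor width are assumptions; note also that the pixel figure depends on which measure of the Airy disk you quote - the bright core (roughly the FWHM) is a bit under half the diameter of the first minimum, which is why different calculators report different "widths":

    # Rough diffraction estimate for a 1/2.3" sensor delivering 4608 pixels
    # across. Assumptions: green light (550 nm), sensor about 6.2 mm wide.
    wavelength = 550e-9                  # metres
    pixel_pitch = 6.2e-3 / 4608          # about 1.3 micrometres per pixel

    for f_number in (3.2, 8.0):
        core = 1.03 * wavelength * f_number           # FWHM of the Airy pattern
        first_minimum = 2.44 * wavelength * f_number  # full first-minimum diameter
        print("f/%.1f: core ~%.1f px, first minimum ~%.1f px"
              % (f_number, core / pixel_pitch, first_minimum / pixel_pitch))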

Bad pixels ?

I was wondering, do some of the pixel sensors just send out garbage ? If so, the software might apply some type of averaging algorithm to disguise these bad pixels, which might also account for the blurriness. I believe our eyes do something similar. StuRat (talk) 22:19, 20 June 2012 (UTC)[reply]

Bad pixels should show up as just a crappy pixel. Some cameras can compensate for that if you register the 'dust data', though. They mark dust spots on the sensor and average from the pixels around them, I think. If there isn't a lab standard to test the horsepower/weight ratio of cameras, someone should create one. I usually trust the camera store: I tell them my budget and they recommend a camera. Future Shop, I have found, does know a lot about that; they may get advice from their own head office and not even sell crap cameras with lots of pixels but lenses made from flour and water.--Canoe1967 (talk) 23:08, 20 June 2012 (UTC)[reply]
What does "register the 'dust data'" mean ? StuRat (talk) 23:15, 20 June 2012 (UTC)[reply]
I have a parameter on my camera for it. I think it just sets up a 'balanced' output from the sensor. I am trying to find a link to info on it. If there is dust on my sensor, I think it compensates somehow. I can add dust data and then remove it after a sensor cleaning. This I assume takes pictures without the effects of the dust spots showing as much.--Canoe1967 (talk) 23:26, 20 June 2012 (UTC)[reply]
I'm still not quite getting it. Do you tell it which specific pixels are bad, or does it somehow figure it out (from them producing output that doesn't match the surrounding pixels)? StuRat (talk) 23:30, 20 June 2012 (UTC)[reply]
  • Well, that looks unnecessarily complex. You have to take a pic of a white background, zoom in on any dust spots on the image, then ask it to delete those spots. I'd want it to detect any pixels which don't match the background automatically, tell me, and ask if I want to use the average of the surrounding pixels instead. Of course, a piece of dust might blot out several pixels, while the type of bad pixel I'm talking about should be alone. StuRat (talk) 23:38, 20 June 2012 (UTC)[reply]
You're not encountering bad pixels. Bad pixels are very common in camera sensors (yours probably has a few hundred to a few thousand). They're handled by the camera's software identifying pixels that are inappropriately black or inappropriately full-on, and replacing them with the average of the pixel's neighbors. Technically speaking, this blurs the image, but since it's in such small, isolated areas, you'll never be able to spot it. --Carnildo (talk) 01:37, 21 June 2012 (UTC)[reply]

Light level effect on digipics ?

UPDATE: I found I get much sharper images with more light. The ambient light was normal room lighting before, which I thought would be sufficient. However, when I shined a 500 watt halogen light directly on the subject (a pill bottle, in this test), it came out much better (and not any brighter). I can think of several possible reasons:

1) Increased signal-to-noise ratio.

2) The auto-focus may have had insufficient light to work before, leaving the image slightly out of focus.

3) The shorter exposure time needed under such bright light may have eliminated blurring from camera vibration (either from the electronics or me having the DTs).

So, which of these is the most likely explanation ? StuRat (talk) 05:41, 21 June 2012 (UTC)[reply]

I would say 1) increased signal-to-noise ratio, if I had to choose from that list. I don't know if you can adjust shutter speed or 'film speed' with your camera. If they are set automatically then it may be a higher film speed that increases the 'grain'. My camera can take pictures in very low light, but they are very 'grainy'. See Image sensor and Film_speed#Digital. WP seems so full of information split into so many articles, we should stop adding to it maybe? The auto-focus shouldn't be an issue, and hand-held shutter speeds are usually okay at 1/100 or faster. --Canoe1967 (talk) 12:03, 21 June 2012 (UTC)[reply]
The fastest I've gotten is 1/60th of a second (while it doesn't let me manually set the speed, it does report it). It supposedly can do a 1/2000 second exposure, but enough light for it to choose that would likely set the subject on fire. :-) StuRat (talk) 04:20, 22 June 2012 (UTC)[reply]
If you have the same subject as the original low-quality image, could you try another shot with brighter light? Longer lenses need faster shutter speeds as well. They do the math differently now, but it used to be 1/lens: 50mm at 1/50, 200mm at 1/200 sec, that type of thing. I think they still use the same math and then multiply/divide by the crop factor. I think your camera has image stabilization, which should help as well.--Canoe1967 (talk) 14:30, 22 June 2012 (UTC)[reply]
Unfortunately the original subjects were my family gathered for the last holiday. Even when they are gathered together, shining blinding lights in their eyes to get a less grainy pic probably wouldn't be much appreciated. So, it seems like this camera may only be good for shooting inanimate objects under arc lamps. I may need to get myself a welding mask for this. :-) StuRat (talk) 00:23, 23 June 2012 (UTC)[reply]

Will it take an external flash? Most built-in ones are crap even on high-end cameras and only reach 3-6 feet or so.--Canoe1967 (talk) 18:11, 23 June 2012 (UTC)[reply]

I don't believe it does take an external flash. And, even if it did, it looks like it would have to be so bright as to cause retina damage. (Maybe I can have everyone wear dark sunglasses ?) StuRat (talk) 18:24, 23 June 2012 (UTC)[reply]
With this camera's need for extreme light levels, I wonder if it would take decent pictures of the Sun (too bad I just missed the transit of Venus). Or would some component be damaged (like the light meter) ? StuRat (talk) 18:31, 23 June 2012 (UTC)[reply]
A point-and-shoot camera like you've got only has one sensor: it uses the main imaging sensor for metering, focus, live preview, and taking the image. It's unlikely that you'll damage it by taking pictures of the Sun, but it's unlikely that you'll get anything but a white circle, either: the Sun's just too bright. For the recent eclipse, I used a shutter speed of 1/4000 of a second, an aperture of f/32, a stack of filters equivalent to a five-stop neutral density filter, a 1.4x teleconverter (reducing the light by one stop), and a thin overcast to cut the light down to something my camera could handle. --Carnildo (talk) 20:49, 24 June 2012 (UTC)[reply]

June 20

internet service

does internet service need any change to be made

Well my internet service certainly needs some improvement, but are you asking about the internet in general? Dbfirs 07:42, 20 June 2012 (UTC)[reply]
Yes:
1) Faster
2) Less monitoring of my traffic
3) Less government intervention
Other than that, it's perfect.
Zzubnik (talk) 09:55, 20 June 2012 (UTC)[reply]
1) Perhaps it could be more secure, so hackers can't take down the Internet whenever they please.
2) It is currently largely controlled by the US (which created it as DARPA Net) and everyone else worries about that. (That the US could cut them off as part of a boycott, etc.) StuRat (talk) 16:29, 20 June 2012 (UTC)[reply]
That may be very cute if they tried. I can see the headlines now: "Gamblers and pedophiles join in violent protests against anything USA around the world. For the common good of restoring the internet."--Canoe1967 (talk) 19:45, 20 June 2012 (UTC)[reply]
I doubt if it would be used lightly. Perhaps against China, if they invaded Taiwan, for example. StuRat (talk) 19:51, 20 June 2012 (UTC)[reply]

Bypass VPN for most traffic?

Is it possible to have a VPN connection to my work servers from home which would only enable me to access my files, and not put all the rest of my traffic through the VPN? Otherwise I have to connect to the VPN when I need to use it and then disconnect to play games or enjoy adult material. 129.215.47.59 (talk) 13:41, 20 June 2012 (UTC)[reply]

It depends on the VPN server - some allow it, some don't - Ask your work IT if they allow Split tunneling on the VPN. Avicennasis @ 03:37, 2 Tamuz 5772 / 03:37, 22 June 2012 (UTC)[reply]

When using my digital camera, will I get a better depth of field (have more of the scene in focus), if I position the camera far away, and put it on maximum optical zoom (avoiding digital zoom, as always) ? If so, why isn't this always done ? Does the optical zoom introduce some other distortions ? StuRat (talk) 18:31, 20 June 2012 (UTC)[reply]

You are correct: for many lenses, it is possible to achieve a longer depth of field (in terms of maximum region in focus, measured in metres of distance from the camera) by positioning that depth of field close to infinity and then zooming in to the region of interest. Why is this not always done? Because for many lenses, it is easier to set the aperture size, selectively controlling the depth of field without affecting field of view. When you "zoom with your feet," you affect both depth of field and field of view, meaning that the ultimate perspective of the shot is very different, with artistic consequences. This phenomenon is most widely known in the form of a "dolly zoom". Nimur (talk) 18:57, 20 June 2012 (UTC)[reply]
As a side note, that is why those old Brownie cameras took better pictures than many 35mm SLRs. They only had a single lens, so no focusing and effectively infinite depth of field, combined with 120 film. Someone should try making a single-element lens for a digital camera?--Canoe1967 (talk) 19:09, 20 June 2012 (UTC)[reply]
Why do you say that a greater depth of field is better? Depth of field, which is related to aperture, should be a choice, not a goal. Hayttom (talk) 19:14, 20 June 2012 (UTC)[reply]
I didn't mean to say it was better, just that the pictures were always in focus and a large format negative made really nice prints and therefore later scans.--Canoe1967 (talk) 19:38, 20 June 2012 (UTC)[reply]
Having a single lens doesn't give you infinite depth of field. It gives you the same depth of field as a focusable camera that happens to be focused at the same distance (with the same aperture). -- BenRG (talk) 21:36, 20 June 2012 (UTC)[reply]
I am confused. I remember from optics in physics class that a single lens was always in focus past its minimum focal distance to infinity. Am I wrong about this?--Canoe1967 (talk) 12:56, 21 June 2012 (UTC)[reply]
You mean beyond the hyperfocal distance? Yes, you've got the concept right, but that's not the same as infinite depth of field. You're conveniently forgetting about half of the plane not beyond the hyperfocal distance. A (hypothetical) imaging optic with infinite depth of field would have every object in focus - even objects very close to the camera. Nimur (talk) 01:31, 22 June 2012 (UTC)[reply]
The angular diameter of the circle of confusion at infinity is the arcsine of (the diameter of the aperture / the distance to the focal plane), which is presumably a few arc minutes for the Brownie, so it's not terribly out of focus. But the main thing is that this is independent of the design of the lens system. You should be able to get the same results by focusing your SLR at 4 meters or whatever the Brownie's fixed focus distance was. -- BenRG (talk) 02:06, 22 June 2012 (UTC)[reply]
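
The hyperfocal idea above is easy to put numbers on. A minimal sketch follows, using the standard thin-lens approximation H ≈ f²/(N·c) + f; the example focal length, aperture and circle of confusion are invented for illustration and are not the specs of the Brownie or of any camera in this thread:

    # Hyperfocal distance for an example small-sensor compact: 5 mm lens at
    # f/2.8 with a 0.005 mm acceptable circle of confusion (all assumed values).
    focal_length = 5.0      # mm
    f_number = 2.8
    coc = 0.005             # mm

    hyperfocal_mm = focal_length ** 2 / (f_number * coc) + focal_length
    print("Hyperfocal distance: %.2f m" % (hyperfocal_mm / 1000.0))
    # Focused at that distance, everything from roughly half of it to infinity
    # is acceptably sharp - which is how fixed-focus cameras are set up.
    print("Near limit of sharp focus: %.2f m" % (hyperfocal_mm / 2000.0))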
On a side note II: I don't believe there's a "better depth of field." Sometimes you want the background to be blurred and sometimes you want it to be seen. OsmanRF34 (talk) 19:29, 20 June 2012 (UTC)[reply]
I rarely want the background (or foreground) to be blurred, and, if I did, I could do that to the digital image on my computer. This also gives me the option of blurring things at any depth, after the fact, like say an ex-g/f. :-) StuRat (talk) 19:47, 20 June 2012 (UTC)[reply]
In the long run it'll save time to pick g/f's that are blurry already. :p ¦ Reisio (talk) 21:26, 20 June 2012 (UTC)[reply]
It's very common to use depth of field to call attention to the intended subject of a shot (examples: File:Alhambra_wall_detail.jpg, File:Matryoshka_dolls_in_Budapest.jpg, File:Mating_Grasshoppers-2.jpg). You could blur it afterwards, or apply a stained-glass effect and surround it with a fake sparkly picture frame, but a lot of photographers like the idea that their effects are produced by purely optical means.
If you want to see what shooting everything from far away with a telephoto lens looks like, watch Ran, which was shot almost entirely that way. When characters walk toward or away from the screen they don't seem to change size at all. There's a pretty good use of both dolly zoom and depth of field in The Incredibles in the scene where Edna offers Helen the tracking device and says "Do you want to find out?". -- BenRG (talk) 21:36, 20 June 2012 (UTC)[reply]
Also, of course, it's often impossible to get further away from your subject and still be able to see it, in an enclosed room for example. FiggyBee (talk) 03:36, 21 June 2012 (UTC)[reply]
Specifically, in portrait photography a relatively narrow depth of field is a desirable trait; see Portrait_photography#Lenses. Vespine (talk) 05:32, 21 June 2012 (UTC)[reply]
Shouldn't there be a Wikiphotography forum? Jim.henderson (talk) 12:32, 21 June 2012 (UTC)[reply]
It seems we have created one here. Digital cameras are far more popular now and there are many questions about them, it seems. Wikipedia has many articles, including comparisons to each other and to film cameras, and coverage of features, sensor and lens types, etc. I think there are enough wise editors in this forum who can answer most of them or link to more detailed pages that can.--Canoe1967 (talk) 12:56, 21 June 2012 (UTC)[reply]

What web-framework should I learn?

It seems that each person recommends the web framework he's using or the web framework that uses the same programming language that he knows. So, which is secure by default, scalable, reliable, and so on? I know there's a list here, but I need more feedback to make up my mind. I suppose not all of them are stable or maintained regularly or useful for general purposes. OsmanRF34 (talk) 18:52, 20 June 2012 (UTC)[reply]

Most languages have a clear frontrunner, so choosing the one for your language of choice is pretty straightforward. Do you know more than one language well? Do you want help deciding which language is least awful? :p You could also 1) not use a framework, or 2) make your own which will be perfect for your own needs. ¦ Reisio (talk) 21:27, 20 June 2012 (UTC)[reply]
I want to know, independent of starting point, what frameworks could deliver more. OsmanRF34 (talk) 23:48, 20 June 2012 (UTC)[reply]
I get the impression that a pretty decent web framework exists for every major language. The productivity differences between languages are (1) probably vast, (2) depend on the programmer, (3) depend on the application, and (4) are incredibly subjective. It's not really possible to abstract over that. Paul (Stansifer) 17:07, 22 June 2012 (UTC)[reply]

try. ... catch (final) in Java

I just had a look at the article Java (programming language), and noticed this method in an example:

    public void showDialog() {
        /*
         * "try" makes sure nothing goes wrong. If something does,
         * the interpreter skips to "catch" to see what it should do.
         */
        try {
            /*
             * The code below brings up a JOptionPane, which is a dialog box
             * The String returned by the "showInputDialog()" method is converted into
             * an integer, making the program treat it as a number instead of a word.
             * After that, this method calls a second method, calculate() that will
             * display either "Even" or "Odd."
             */
            this.input = Integer.parseInt(JOptionPane.showInputDialog("Please enter a number."));
            this.calculate();
        } catch (final NumberFormatException e) {
            /*
             * Getting in the catch block means that there was a problem with the format of
             * the number. Probably some letters were typed in instead of a number.
             */
            System.err.println("ERROR: Invalid input. Please type in a numerical value.");
        }
    }

What does the final in catch (final NumberFormatException e) do? How is it different from catch (NumberFormatException e)? JIP | Talk 19:39, 20 June 2012 (UTC)[reply]

This StackOverflow discussion should help. Unfortunately the link to the Sun blog post that showed the multi-catch-throw pattern motivating it died in the Oracle transition. -- Finlay McWalterTalk 20:44, 20 June 2012 (UTC)[reply]
Ah, here. -- Finlay McWalterTalk 20:48, 20 June 2012 (UTC)[reply]
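For illustration, here is a minimal, self-contained sketch (not taken from the article) of what final on a catch parameter actually changes: it only stops the handler from reassigning the caught exception variable, and the runtime behaviour is otherwise identical. In Java 7 and later a final (or effectively final) catch parameter also enables the more precise rethrow analysis discussed in the links above.

    // Minimal sketch: "final" on a catch parameter only forbids reassigning the
    // exception variable inside the handler; otherwise the two blocks behave the same.
    public class CatchFinalDemo {
        public static void main(String[] args) {
            try {
                Integer.parseInt("not a number");
            } catch (final NumberFormatException e) {
                // e = new NumberFormatException("other");  // would not compile: e is final
                System.err.println("Caught (final): " + e.getMessage());
            }
            try {
                Integer.parseInt("still not a number");
            } catch (NumberFormatException e) {
                e = new NumberFormatException("replaced");  // legal here, though rarely a good idea
                System.err.println("Caught (non-final): " + e.getMessage());
            }
        }
    }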

How do I get a big (5000 rows) html table from a webpage into LibreOffice Calc?

Resolved

I use LibreOffice 3.5.4.2, Firefox 13.0.1 and Windows7.
I want to import a complete html table (about 5000 rows and 10 columns) from a web page into a LibreOffice Calc spreadsheet.
Some of the cells are empty.
I need the data to be divided into cells exactly the same way in Calc as they were in the html (preferably keeping the various fonts as well).
Is it possible?   
--Seren-dipper (talk) 21:23, 20 June 2012 (UTC)[reply]

It’d be pretty simple with a little preprocessing, but the simplest tools for that are Unix tools… which are available for Windows, but if you’re unfamiliar with them it might take some hand holding.
What’s your end game, a spreadsheet showing different font faces? (I’m not sure what else would make it worth it to keep the fonts.) How is the HTML file not sufficient? ¦ Reisio (talk) 21:34, 20 June 2012 (UTC)[reply]
Does this do it? I don't use LibreCalc personally (in Excel you can actually just copy and paste them, depending on the browser). --Mr.98 (talk) 21:53, 20 June 2012 (UTC)[reply]
@Mr.98 : Yes! That link gave me exactly what I needed! Thank you!  :-)  :-)
(Choosing: Insert -> Link to External Data
opens the External Data dialog where you enter the URL of the HTML document. After clicking "OK" one has to click the three-dots button to the right of the URL, and voilà!).
@Reisio : My table is kind of a dictionary with the headwords in the first column. The HTML is not sufficient because I need to extract shorter lists (subsets) out of the long one, depending on tags and conditions given in some of its columns. Web browsers do not do this, but Excel, Calc etc. are good at it. Thank you for your reply!  :-)
--Seren-dipper (talk) 23:04, 20 June 2012 (UTC)[reply]
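For anyone who would rather do the extraction outside Calc, here is a rough sketch in Java using the third-party jsoup library; the page URL and the output file name are placeholders, and note that this keeps only the cell text, not the fonts.

    // Rough sketch (jsoup, https://jsoup.org): dump every row of the first HTML table
    // on a page as tab-separated values that a spreadsheet can import.
    // Placeholders: the page URL and the output file name.
    import org.jsoup.Jsoup;
    import org.jsoup.nodes.Document;
    import org.jsoup.nodes.Element;

    import java.io.PrintWriter;

    public class TableToTsv {
        public static void main(String[] args) throws Exception {
            Document doc = Jsoup.connect("http://example.com/dictionary.html").get();
            Element table = doc.select("table").first();   // first table on the page
            try (PrintWriter out = new PrintWriter("table.tsv", "UTF-8")) {
                for (Element row : table.select("tr")) {
                    StringBuilder line = new StringBuilder();
                    for (Element cell : row.select("th, td")) {
                        if (line.length() > 0) line.append('\t');
                        line.append(cell.text());          // empty cells become empty fields
                    }
                    out.println(line);
                }
            }
        }
    }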

June 21

Computer drivers

Have Windows 7 and am receiving balloons about updating my drivers. Did a limited search and believe that, with Norton appearing, I am being directed to sign up for new costs to have them updated. Is this the case? Do drivers need updating, and if so, is there any other way I can do it? Beginning to learn that it is best to ask before signing up for anything! Hamish 84Hamish84 (talk) 03:56, 21 June 2012 (UTC)[reply]

PS. My thanks to those who provided previous help. Have not as yet worked out how to thank them direct!!

Drivers do indeed sometimes require updates, but it's not absolutely necessary to get them; your computer will work just fine with the same drivers it has always worked with. Updates can include bug fixes, fixes for specific problems, optimizations that make things run better, etc., which are generally better to have than not to have. Having said that, I have never seen driver updates that require any "signing up" or payment. What you have sounds more like either a software update or a malware program that is trying to convince you that you need its services. This has become VERY popular recently: programs pop up telling you that you have viruses (which you either don't have at all, or which the program itself has placed there), then they tell you that you need their software to resolve the "issue". The best thing to do in that case is to google the exact name of the program or "window" that you see, and you should be able to find out how to remove it; unfortunately it sometimes isn't trivial. If it is Norton asking you to update it (frequently after a free trial period), just uninstall it and get a free antivirus program, or just use Microsoft Security Essentials, which is free if you have Windows 7. Vespine (talk) 05:21, 21 June 2012 (UTC)[reply]

List of female 3d figures?

Where is a list of female 3d figures for software like Daz? I was able to find a list of 3d software (Thank you Wikipedia) but I can't find a list of base female models I can buy besides the ones made by Daz. Edit: I just need a list of base females made by a company, not models I can buy from indie vendors at various indie vendor websites. — Preceding unsigned comment added by 98.176.250.165 (talk) 08:01, 21 June 2012 (UTC)[reply]

You might try Poser World, a forum that has a section devoted to Daz and Poser resources. Looie496 (talk) 18:23, 21 June 2012 (UTC)[reply]

Erasing old hard drives before disposal...

How many times does the data realistically need to be overwritten to render anything on the drive unrecoverable? I hear/read that the Gutmann method, with 35 passes is considered complete overkill... Note: I very much doubt that any agency with unlimited resources and unlimited time in which to spend on forensic examination will have an interest in my drives. I'm just talking about sensible precautions here... Thanks. --Kurt Shaped Box (talk) 20:34, 21 June 2012 (UTC)[reply]

Data remanence cites very reliable sources that say "once". There's no evidence of anyone ever recovering data that was simply zeroed once. Truly national security users are sufficiently paranoid that they have cause to worry about unknown attacks (by their counterparts), and so might require degaussing and/or mechanical shredding. But you, who surely do not have the plans for hydrogen bombs on your disks, don't. None of this holds for solid state (flash) drives, which require different methods, as that article notes. 87.115.114.119 (talk) 20:45, 21 June 2012 (UTC)[reply]
Thanks. As a matter of interest, what is the correct/best method for permanently overwriting the data on USB flash drives? I suppose that you could just smash the thing or throw it in a fire, but that rules out being able to use it again afterwards... --Kurt Shaped Box (talk) 22:06, 21 June 2012 (UTC)[reply]
The paper by Wei et al., cited in the data remanence article, talks about various methods. Ideally they'd all implement the ATA ERASE-UNIT commands, but that paper says many don't. Without that, it doesn't seem that there's a sufficiently reliable software-only way. -- Finlay McWalterTalk 22:28, 21 June 2012 (UTC)[reply]
Sweet. That pretty much covers it, I think... :) --Kurt Shaped Box (talk) 23:15, 21 June 2012 (UTC)[reply]
Very old hard drives, removable disc-packs and "floppy discs" (if anyone remembers them) had much larger gaps between tracks, and it was possible to write data between tracks (in some cases and with some drives), with this data being difficult to erase on some other drives. The paranoia about data remanence probably dates from those days. Nevertheless, I've used a sledge-hammer on more than one occasion! Dbfirs 08:57, 22 June 2012 (UTC)[reply]
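For completeness, the single zeroing pass discussed above is easy to script. The following is only a sketch: the device path is a deliberate placeholder, it needs administrator/root rights, it irreversibly destroys everything on the device named, and (as noted above) it is not an appropriate method for SSDs or USB flash drives.

    // Sketch only: one-pass zero overwrite of a whole block device, per the
    // "overwriting once is enough" point above. /dev/sdX is a placeholder --
    // pointing this at a real device destroys all data on it. Not suitable
    // for flash media, where the controller-level erase discussed above applies.
    import java.io.FileOutputStream;
    import java.io.IOException;
    import java.io.OutputStream;

    public class ZeroWipe {
        public static void main(String[] args) throws Exception {
            String device = "/dev/sdX";           // placeholder: the disk to wipe
            byte[] zeros = new byte[1 << 20];     // 1 MiB buffer of zero bytes
            try (OutputStream out = new FileOutputStream(device)) {
                while (true) {
                    out.write(zeros);             // an IOException signals the end of the device
                }
            } catch (IOException endOfDevice) {
                System.out.println("Stopped writing: " + endOfDevice.getMessage());
            }
        }
    }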

June 22

Why does Apple Inc. permit use of their Mac OS X only on their hardware?

Why does Apple Inc. permit use of their Mac OS X only on their hardware? 117.5.15.219 (talk) 07:20, 22 June 2012 (UTC)[reply]

Apple has always been that way. Even if Jobs or someone else ever spoke on the subject, I wouldn't expect what was said to be either consistent or perfectly logical. I have trouble imagining it being anything other than Jobs’ predisposition.
Now if you’re asking in general how a company might justify acting in this fashion, for that there are answers to be had. ¦ Reisio (talk) 08:30, 22 June 2012 (UTC)[reply]
Yes, Steve Jobs thought the operating system and hardware should conform to a single, high-quality design, and that third-party equipment could be inferior and reflect badly on the Apple brand. Such licensing had been permitted during the period he was away from Apple, and he felt it contributed to the decline of the firm over that time. He ended the practice on his return; see MacOS#Macintosh_clones. I have just finished reading Steve Jobs (book), which is very good for learning about Apple design and marketing philosophy. Thincat (talk) 08:45, 22 June 2012 (UTC)[reply]
In the mid-90s, Apple did license their operating system (then Mac OS) to clone manufacturers. They stopped the arrangement in 1997. See Macintosh clone#Official Macintosh clone program.-gadfium 08:41, 22 June 2012 (UTC)[reply]
In addition to the pertinent answers above, there are also good economic reasons for pursuing such a strategy, most notably that it creates a form of vendor lock-in. - Jarry1250 [Deliberation needed] 12:15, 22 June 2012 (UTC)[reply]

North American area codes with identical prefix assigned

How many North American area codes have a prefix matching the area code assigned (i.e. in the xxx-yyy-zzzz format, xxx = yyy)? Off the top of my head, I can think of area code 787 (787-787-xxxx is assigned in Bayamón Norte) and 847 (847-847-xxxx is assigned in Lake Zurich). Are there others? 98.116.65.50 (talk) 07:44, 22 June 2012 (UTC)[reply]

One option for calculating this would be to use this database (which appears to be an Excel file) as a starting list, and then use formulas in Excel to count only the matching allocations. However, the cost of the database is somewhat prohibitive. I am not sure if there is a freely available list of allocations; if there were, it would be simple enough to analyze. Sazea (talk) 17:38, 22 June 2012 (UTC)[reply]
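If a free list of NPA-NXX assignments does turn up, the counting itself is trivial to script. A sketch, assuming a hypothetical plain-text file with one "area code,prefix" pair per line (e.g. "787,787"):

    // Sketch: count assignments whose prefix (NXX) equals the area code (NPA).
    // Assumes a hypothetical CSV file named npa_nxx.csv with lines like "787,787".
    import java.nio.file.Files;
    import java.nio.file.Paths;
    import java.util.List;

    public class MatchingPrefixes {
        public static void main(String[] args) throws Exception {
            List<String> lines = Files.readAllLines(Paths.get("npa_nxx.csv"));
            long matches = lines.stream()
                    .map(line -> line.split(","))
                    .filter(parts -> parts.length >= 2 && parts[0].trim().equals(parts[1].trim()))
                    .count();
            System.out.println("Assignments where prefix equals area code: " + matches);
        }
    }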

What is the difference between a web server and a web framework?

The web framework seems to cover the same functionality as the web server (+ something more). OsmanRF34 (talk) 12:57, 22 June 2012 (UTC)[reply]

Like many IT terms, "web application framework" is terribly vague. The article we have on it seems to imply that it is the server-side software that can run various scripts or programs — e.g. the Zend Framework, which, when installed on a server, can be used to run server-side PHP scripts. As that explanation implies, it is not the same thing as the web server itself. --Mr.98 (talk) 14:34, 22 June 2012 (UTC)[reply]
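One way to see the split: the web server's job is to accept connections and speak HTTP, while a framework supplies what your application code plugs into (routing, templates, sessions, form parsing, security defaults, database access and so on). As a rough illustration, here is a bare server using the JDK's built-in com.sun.net.httpserver package, with none of those framework conveniences:

    // Rough illustration of a web server with no framework on top: it accepts HTTP
    // requests and returns bytes, and everything a framework would normally provide
    // (routing tables, templating, sessions, parameter parsing) would be hand-written.
    import com.sun.net.httpserver.HttpServer;
    import java.io.OutputStream;
    import java.net.InetSocketAddress;
    import java.nio.charset.StandardCharsets;

    public class BareServer {
        public static void main(String[] args) throws Exception {
            HttpServer server = HttpServer.create(new InetSocketAddress(8000), 0);
            server.createContext("/", exchange -> {
                byte[] body = "Hello from a bare web server".getBytes(StandardCharsets.UTF_8);
                exchange.sendResponseHeaders(200, body.length);
                try (OutputStream out = exchange.getResponseBody()) {
                    out.write(body);
                }
            });
            server.start();   // serves http://localhost:8000/ until the process is killed
        }
    }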

How do you jump to 0x10000 in x86 assembly using intel syntax and assembling with NASM?

I know I succeed in writing my code to that address using int 13h because I can see it at that memory location. What I can't do is jump there. I put 0x1000 (three zeros there) into es and 0x0000 into bx and I know that [es:bx] means the address calculated by (es * 0x10) + bx which does equal 0x10000 (four zeros there). But eip, the instruction pointer, never does go there. I've tried "jmp [es:bx]", "jmp 0x1000:0x0000", and a bunch of other permutations that NASM doesn't even accept. I don't know. Do you, oh geniuses? 20.137.18.53 (talk) 18:31, 22 June 2012 (UTC)[reply]

This posting has some simple 16-bit real mode boot code which loads some code from disk to 1000:0000 and then JMPs there. -- Finlay McWalterTalk 19:06, 22 June 2012 (UTC)[reply]
Still failure. My boot loader here. I booted it up in QEMU and did a memsave on the first 50 bytes at 0x10000, opened it up with tweak, and saw my simple "kernel" code there, but EIP still refuses to be 0x10000. Full images of the situation here. Sorry for the sarcastic tone; I've been trying to get this to work for too long. Thanks to whoever takes time to try to help me! 20.137.18.53 (talk) 19:40, 22 June 2012 (UTC)[reply]

Process of elimination style quiz maker

Hello there,

Does anyone know of a quiz maker on the internet that allows you to make quizzes in a process of elimination style - that would allow you to ask yes or no questions that narrow down a list of possible fits for the person being questioned, until there is only one option available?

E.g. Imagine it was a quiz to determine what author the person taking the quiz might like. One of the questions could be 'are you interested in books written before the 20th Century?' If they responded yes, then the pre-20thC authors would remain on the option list until further questions, if not, they'd all be removed from the potential quiz outcomes.

I've googled for a while looking for elimination-style quizzes to no avail, so would appreciate if someone could help out on this one.

All the best,

--Celinmairir (talk) 18:39, 22 June 2012 (UTC)[reply]
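If no ready-made quiz site turns up, the elimination logic itself is straightforward to implement: give each possible outcome a set of tags, and let each yes/no answer remove the outcomes that don't match. A sketch, with invented author names and tags:

    // Sketch of the elimination idea: each candidate outcome carries a set of tags,
    // and each yes/no answer keeps only the candidates whose tags match the answer.
    // All names and tags below are invented examples.
    import java.util.Arrays;
    import java.util.HashSet;
    import java.util.LinkedHashMap;
    import java.util.Map;
    import java.util.Set;

    public class EliminationQuiz {
        public static void main(String[] args) {
            Map<String, Set<String>> candidates = new LinkedHashMap<>();
            candidates.put("Jane Austen", new HashSet<>(Arrays.asList("pre-20th-century", "romance")));
            candidates.put("Isaac Asimov", new HashSet<>(Arrays.asList("science-fiction")));
            candidates.put("Agatha Christie", new HashSet<>(Arrays.asList("mystery")));

            // Question: "Are you interested in books written before the 20th century?"
            boolean answeredYes = true;
            String tag = "pre-20th-century";
            candidates.entrySet().removeIf(e -> e.getValue().contains(tag) != answeredYes);

            System.out.println("Remaining options: " + candidates.keySet());  // [Jane Austen]
        }
    }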

Getting files from a bricked Droid

I have an HTC Droid Incredible that was in my pocket when I went in the pool last weekend. Despite letting it sit in rice for a few days, it is completely dead and won't boot up. Is there any way I could get any files off of the hard drive? —Akrabbimtalk 18:43, 22 June 2012 (UTC)[reply]

If the phone CPU itself is inoperable, the trouble would be getting at the data stored in the flash memory. The phone doesn't contain a readily removable flash card, which you could extract and plug into another phone or into an adapter. Looking at disassembly videos, it seems the phone has a Hynix H26M44001CAR part, which seems to be a ball grid array package implementing eMMC on a single chip (I can't be sure, as Hynix don't make that part any more, and are very unforthcoming about info or datasheets). That's surface-mounted to the phone's circuit board. Prising it off would destroy it, and because it's BGA this means all the connectors are buried underneath it in a very inaccessible way. I'm sure a specialist data-recovery company could either un-mount the device or could figure out the manufacturing test points on the circuit board and access the device that way (assuming the pool didn't break it too). But that's an exceptionally specialised, technical, and time-consuming task, which would surely cost an eye-watering sum. -- Finlay McWalterTalk 19:41, 22 June 2012 (UTC)[reply]

Monitor jitter

I'm using an ATI Radeon Xpress 200 card and a Dell E193FP monitor. I've tried various screen resolutions/refresh rates but still get noticeable horizontal jitter. Any suggestions as to what the cause is?--92.25.110.216 (talk) 19:45, 22 June 2012 (UTC)[reply]