Wikipedia:Reference desk/Computing


The Wikipedia Reference Desk covering the topic of computing.

Welcome to the computing reference desk.
Want a faster answer?

Main page: Help searching Wikipedia

How can I get my question answered?

  • Provide a short header that gives the general topic of the question.
  • Type ~~~~ (i.e. four tilde characters) at the end – this signs and dates your contribution so we know who wrote what and when.
  • Post your question to only one desk.
  • Don't post personal contact information – it will be removed. We'll answer here within a few days.
  • Note:
    • We don't answer (and may remove) questions that require medical diagnosis or legal advice.
    • We don't answer requests for opinions, predictions or debate.
    • We don't do your homework for you, though we’ll help you past the stuck point.


How do I answer a question?

Main page: Wikipedia:Reference desk/Guidelines

  • The best answers address the question directly, and back up facts with wikilinks and links to sources. Do not edit others' comments and do not give any medical or legal advice.
 
See also:
Help desk
Village pump
Help manual


February 7

Is there any computational method that's neither a numerical method, nor a symbolic method?

Is there any computational method that's neither a numerical method, nor a symbolic method, nor a combination of both? I cannot imagine another possibility, but my lack of imagination is definitely not a proof.--Llaanngg (talk) 00:42, 7 February 2016 (UTC)

What do I get when I divide one by three?
  • Numerically, I get 0.33333333....
  • Symbolically, I get 1/3 (read: "one divided by three").
  • <s>Verbally</s> Conceptually, I simply get: a third.
HOTmag (talk) 01:03, 7 February 2016 (UTC)
Verbally = symbolically. --Llaanngg (talk) 01:52, 7 February 2016 (UTC)
@Llaanngg: 1/3 is "one divided by three" (just as 1/x is "one divided by x"): it's symbolic, i.e. it contains some symbols, e.g. "divided by" and the like. It's not the same as "a third", which is the conceptual computation.
Please note that not every computation can be made conceptually, just as not every computation can be made symbolically: For example:
  • The solution of the equation 3x=1 can be reached, both symbolically - as 1/3 (read "one divided by three"), and conceptually - as "a third".
  • The solution of the equation x² = 2 can be reached symbolically - as √2 (read: "square root of two") - but cannot be reached conceptually.
  • The solution of the equation x⁵ + x = 1 cannot be reached conceptually or symbolically.
Btw, there is also "geometric computation". For example: the solution of the equation x² = 2 can be computed - not only symbolically as √2, i.e. as "the square root of two" (and also numerically of course) - but also geometrically, as the length of the diagonal of a square with sides of one unit of length.
HOTmag (talk) 07:41, 7 February 2016 (UTC)
Fuzzy logic ? StuRat (talk) 01:54, 7 February 2016 (UTC)
On one hand "numerical" is a kind of symbolic reasoning. On the third hand, if you can think nonsymbolically, then you can compute nonsymbolically. With yet another hand, graphical calculations are possible, such as Euclidean constructions using compass and straight edge. GangofOne (talk) 02:21, 7 February 2016 (UTC)
Computable real arithmetic is arguably not numerical (since I think "numerical methods" are approximate by definition) and arguably not symbolic (since it works with computable real numbers "directly", not formulas). -- BenRG (talk) 02:44, 7 February 2016 (UTC)
Neural networks could be counted as neither. Fuzzy logic might also fit there too, but you could argue that all of these are symbolic, as the computation has to represent something in the problem. Graeme Bartlett (talk) 10:19, 7 February 2016 (UTC)
A neural network uses numerical methods: the error in the output converges to a minimum, so the output approaches a numerical value. Fuzzy logic uses symbolic methods, as you've indicated. HOTmag (talk) 10:32, 7 February 2016 (UTC)
The terms are kinda vague - but I'd definitely want to add "geometrical" to "symbolical" and "numerical". There are some wonderful things that can most easily be visualized geometrically... the dissection proofs of Pythagoras' theorem come to mind here, but there are many good examples out there. SteveBaker (talk) 16:12, 7 February 2016 (UTC)
Analog computers were once used to solve differential equations. Also, even now, people use scale models for architecture, hydrology or wind tunnel simulations. Graeme Bartlett (talk) 00:59, 8 February 2016 (UTC)
Standard digital computers can be understood as doing everything by symbolic methods, including numerical computation; and the way I see the word "computation", that's really the only kind there is. However, you may consider what an analog computer does to qualify as computation (rather than as an alternative method used instead of computation). In that case it would qualify as an answer. --76.69.45.64 (talk) 23:13, 7 February 2016 (UTC) (by edit request) ―Mandruss  06:45, 8 February 2016 (UTC)

Scraping of .asp?

How can I scrape a page accessed with www.address.org/somescript.asp? It has two fields (name of artist, works) and two buttons (search, reset). How could I tell a program to go to the name-of-artist field, pick a name from a list that I have stored, press search, then retrieve and store the page? --Scicurious (talk) 16:36, 7 February 2016 (UTC)

wget has parameters to fill in forms. Also, if all the names are linked or are findable via a query, you may be able to do a recursive retrieval to get all the pages. Otherwise you could make a list of URLs and pass that to wget. Graeme Bartlett (talk) 00:54, 8 February 2016 (UTC)
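For illustration, the same form-submission idea as a Python sketch using the third-party requests library; the URL is the one from the question, but the field names are guesses - read the page's <form> HTML to find the real input names and the form's action URL:

  # Post the search form once per stored artist name and save each result page.
  # Requires: pip install requests
  import requests

  artists = ["First Artist", "Second Artist"]  # your stored list of names

  for artist in artists:
      response = requests.post(
          "http://www.address.org/somescript.asp",
          data={"artist": artist, "works": ""},  # hypothetical field names
      )
      response.raise_for_status()
      with open(artist + ".html", "w", encoding="utf-8") as f:
          f.write(response.text)  # store the retrieved page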
Go through the whole process once or twice manually. Is there something similar each time, e.g. the button to be clicked is always in the same place, or the text you need is always formatted the same way? If so, you could perhaps use Macro Express to automate the process; it has the ability to control mouse placement (so you could automatically move the cursor to a certain space, for example) as well as merely clicking and pressing keys. Since you have the list of names, you could have it copy/paste from the list. Code for that operation follows my signature. Nyttend (talk) 01:12, 8 February 2016 (UTC)

With a macro program like Macro Express, the macro just simulates the keystrokes you'd be using anyway, so write down the keys you'd press and have the program press those keys in that order. Be careful about timing: the computer often takes a moment to load windows, and while this isn't significant when you're working manually, it matters for the macro, which does everything essentially instantaneously. As a result, you'll need to insert short pauses (very rarely will you need more than a couple hundred milliseconds) after commands that bring up new windows, to ensure the window has appeared before the macro starts working in it.

Also, you should keep your list in something like Notepad, because it won't insert additional characters, and every character matters in this kind of setting. Plain characters like C are instructions to type those characters, while things within <> characters are instructions to press specific keys instead of writing those letters: <CTRLD> is push the control key down, <CTRLU> is let it up, and the same for <SHIFTD>/<SHIFTU>.

Since you have a list of names in Notepad, with each name on a separate line, you'll find it helpful to mark which ones you've done. I've told it to place a ` character at the start of each line with an already-saved title (after it saves the page, it adds the character before the name, then goes to the next line, where it's ready to start the next page), because that's an easy way of marking which lines you've already done. The ` character, being quite rare in normal text, isn't likely to be found elsewhere in the document, so when you're done with the list you can simply do a find/replace in Notepad to delete it, and you won't worry about deleting significant characters. Nyttend (talk) 03:59, 8 February 2016 (UTC)

Time Machine's persistence

My external HD has suddenly become unreliable. (Nothing vital is on it.) It could be some time before I can replace it. I currently have about six months of Time Machine backups. If a year goes by before I replace the flaky drive, will Time Machine throw away what was on it, or keep the last known versions of those volumes? —Tamfang (talk) 21:58, 7 February 2016 (UTC)

The question is unclear. Time Machine will keep adding backups as long as there is space on the drive; once the drive is full, it will delete the oldest backups to make space. How much room it needs depends on how many changes you have made since the last backup. Does that answer your question? Vespine (talk) 05:27, 9 February 2016 (UTC)

February 8

The Hunting of the Snark

As a young child in the early 1990s, I enjoyed playing a range of little computer games on Grandmother's computer whenever we visited my grandparents; I'm looking for one of them now. It had a title similar to, or identical to, The Hunting of the Snark; you had to find little snark characters in a gridded board (most spaces were empty, a few had snarks, and one had a boojum that ended the game if you found it), presumably findable through some method, but I was young enough that I couldn't find them except by clicking spaces randomly. Can anyone point me to any information about such a game? Google searches produce results mostly related to the namesake original poem, and the game-related things I found were talking about a simple program that you could write in BASIC twenty years earlier, not something that would be sold commercially on par with programs such as Chip's Challenge. Nyttend (talk) 00:42, 8 February 2016 (UTC)

My memory of that game is from much earlier than the 1990s. It would be more around the early 1980s. The source code was in a magazine or on a floppy included with a magazine. Likely, it was Byte magazine. However, all my memories from the 80s are merged together into a heaping pile of big hair, bright colors, and piles of floppy disks. 209.149.115.90 (talk) 19:49, 8 February 2016 (UTC)
To clarify, Nyttend, are you describing a graphic game? Given the amount of shovelware that came with PCs in the 90s, it may be that somebody took the basic (as well as BASIC) Snark game and put a rudimentary graphical front end on it. As you mentioned, Google searches are difficult, not least because of the more modern, colloquial meaning of snark. --LarryMac | Talk 20:27, 8 February 2016 (UTC)
Maybe some variant of Hunt the Wumpus? Some versions had tile graphics [1]. 21:34, 8 February 2016 (UTC)
Yes, a graphic game; at least part of the game, if not all, was controlled by clicking spots with the mouse. Nyttend (talk) 14:49, 12 February 2016 (UTC)

Google DNS Server

What could be some caveats or cautions about using Google DNS Server (IP address 8.8.8.8) as my DNS server? Privacy issues, maybe? ←Baseball Bugs What's up, Doc? carrots→ 05:20, 8 February 2016 (UTC)

There are two issues: performance and privacy.
Privacy: as InfoWorld pointed out a while back:[2][3]
"The reality is that Google's business is and has always been about mining as much data as possible to be able to present information to users. After all, it can't display what it doesn't know. Google Search has always been an ad-supported service, so it needs a way to sell those users to advertisers -- that's how the industry works. Its Google Now voice-based service is simply a form of Google Search, so it too serves advertisers' needs. In the digital world, advertisers want to know more than the 100,000 people who might be interested in buying a new car. They now want to know who those people are, so they can reach out to them with custom messages that are more likely to be effective. They may not know you personally, but they know your digital persona -- basically, you. Google needs to know about you to satisfy its advertisers' demands. Once you understand that, you understand why Google does what it does. That's simply its business. Nothing is free, so if you won't pay cash, you'll have to pay with personal information. That business model has been around for decades; Google didn't invent that business model, but Google did figure out how to make it work globally, pervasively, appealingly, and nearly instantaneously."
The question is whether your ISP's DNS servers are worse. Are they selling your information as well? (I am looking at you, AT&T).
Performance: Most major websites use Content Delivery Networks (Amazon, Akamai, etc.) to serve content. A Content Delivery Network geolocates you by the IP address of the DNS resolver that queries it and directs you to the nearest server. With a public DNS server, the CDN might serve you content from a distant server, and your download speeds will thus be slower than if you use your ISP's DNS server. Google's DNS server information page says:
"Note, however, that because nameservers geolocate according to the resolver's IP address rather than the user's, Google Public DNS has the same limitations as other open DNS services: that is, the server to which a user is referred might be farther away than one to which a local DNS provider would have referred. This could cause a slower browsing experience for certain sites"
If you are in Australia, using a US-based Google DNS server would mean the "closest" Akamai cache is chosen as if you were in the US, and you'll see very slow download speeds as your file downloads over the international link. It's not as bad in the continental US, but it is still slower.
BTW, WikiLeaks keeps a list of alternative DNS servers.[4] --Guy Macon (talk) 08:30, 8 February 2016 (UTC)
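If you want to see the resolver-geolocation effect yourself, here is a small Python sketch using the third-party dnspython package (pip install dnspython). It asks your system's default resolver and Google's 8.8.8.8 for the same CDN-hosted name (the hostname is just an example); differing answers mean the CDN is steering the two resolvers to different servers:

  import dns.resolver

  hostname = "www.akamai.com"  # any CDN-hosted name will do

  default = dns.resolver.Resolver()  # your ISP's configured DNS servers
  google = dns.resolver.Resolver(configure=False)
  google.nameservers = ["8.8.8.8"]

  for label, resolver in (("default", default), ("google", google)):
      answers = resolver.resolve(hostname, "A")  # .query() on dnspython < 2.0
      print(label, sorted(record.address for record in answers))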
That information is somewhat outdated: Google supports an extension (EDNS Client Subnet) which can provide your subnet to the CDN's DNS server so that it can provide more accurate resolution,[5] and it's been enabled at least for Akamai.

Also, while the quoted part may be from Google, I'm not certain your interpretation is correct even ignoring the extension. Talking about a US-based Google DNS server from Australia is confusing, since both 8.8.8.8 and 8.8.4.4 are anycast addresses. In NZ the servers responding are generally in Australia (you can tell by the latency). I didn't test the IPv6 servers, but I'm pretty sure they're the same. I suspect this is normally the case in Australia too, since Google will definitely want their Australian servers to be used for Australians, and I doubt many Australian ISPs care enough to fight Google; in fact I strongly suspect Google has the clout to resolve any routing/peering disputes which may cause problems. As a home end user, there's not much you can generally do about routing, so most likely you're going to be sent to the Australian DNS servers in Australia. And I strongly suspect the Australian DNS servers will do lookups with the CDNs' name servers specific to the Australian servers. That seems to be what this page is saying [6].

In other words, I strongly suspect that if you're in Australia it's fairly unlikely you'll be connecting to Google's US DNS, and it's also fairly unlikely you'll get US CDNs (unless they're the closest). You may still not get the best CDN servers, particularly if they don't support the extension. For example, some ISPs work with CDNs to provide specific servers for their customers. Likewise, I have no idea where Google has DNS servers in Australia; do they have them in both Melbourne and Sydney, for example? I wouldn't be surprised if some CDNs do, which means that if Google doesn't, you may not get the best geographically located servers even in Australia. Obviously in my case, without the extension I'll be getting CDNs in Australia and not NZ even if they exist, and there will be countries where the responding name server may be an even worse choice. (It can be complicated, but your assumption should be that if your ISP is remotely competent, their name servers should provide CDNs that give the best routing.)

One final comment: I'm in NZ, not Australia, but one of our only major internet cables also connects to Australia anyway, and I can say things are not nearly as bad as they were 5-10 years ago. I'm using VDSL2, although the cable to my house is a bit crap or far, so I only get about 50 Mbit/s. I can max this out even when connecting to the US, sometimes even at peak times. (In fact, if you're not connecting to a CDN, it's easily possible the US server will be faster than the local one.)

It obviously depends significantly on the ISP and how much international bandwidth they have, and it's possible NZ ISPs tend to have more because there are fewer CDNs (and I'm not sure whether trans-Tasman bandwidth is much cheaper than Californian bandwidth). The SCC is not even close to capacity (and I'm presuming a number of those connected only to Australia are similar), so it is only a cost issue. And it can get confusing what you're actually connecting to because of the transparent caching/proxying that many ISPs use. Still, the takeaway message is that you shouldn't assume connecting to the US is going to be slower (in terms of bandwidth; latency is obviously going to be higher). Of course, where it does happen, your ISP won't particularly like you wasting their international bandwidth that way. That's actually another reason why it's likely they will work with Google to ensure their customers who choose to use Google Public DNS end up connecting to the right server.

P.S. This assumes that the CDN and your ISP rely only on name server lookups to ensure you end up on the right server. If they have a more complicated system, it may be that you will still end up connected to the right server even if your DNS server does its resolutions against the CDN's name servers from the wrong location.

Nil Einne (talk) 13:45, 8 February 2016 (UTC)

Generally, queries to a DNS server can be logged. When you use Google Chrome, it makes little difference for navigating web pages: the Google Chrome license already gives Google all input you enter into the browser's URL field. Other programs can be tracked by monitoring their DNS queries. When using a DNS server, you need to trust it; I think you can trust Google. Modifying the DNS entry is also a modification to your computer: imagine the consequences of a hacked DNS server while you are doing online banking or giving passwords to the page your browser displays. DNS servers can also be used as a quick way to block (web)servers hosting malware. The DNS entries in your computer and router determine which "phonebook" to use, and the computer will connect to the returned IP address. --Hans Haase (有问题吗) 11:16, 8 February 2016 (UTC)

Thank you, all, for your insights. :) ←Baseball Bugs What's up, Doc? carrots→ 01:09, 10 February 2016 (UTC)

External hard drive on Windows 10

I've backed up my files from another computer onto an external hard drive. I've connected the hard drive to a Windows 10 machine, but there is no obvious way to access it. How do I extract the files? Theskinnytypist (talk) 19:42, 8 February 2016 (UTC)

If you just copied the files over, it should be a drag-and-drop copy, with the caveat that you may need to take full control and ownership of the folder first, as explained here (the instructions are for Windows 7, but are valid for Windows 10). If you used a backup/restore application, then you might have to use that same application to restore your backup. If you used Windows 7's backup, Windows 10 has a specific option for restoring from it. FrameDrag (talk) 20:45, 8 February 2016 (UTC)

Battery dying issue

Peeps, I'm having a bit of a problem with the laptop battery that I bought recently.

1) I bought it before/after Christmas. I read the guideline, where it stated (in a sentence): "Charge to 100% when it goes to 2% for the first time. For maximum battery life keep the charge up to 70%".

a) I've charged it to 100% as stated, by taking it to 2% first.

b) I don't really get the time to keep the battery at 70% and then turn it off, because I turn on the laptop and work until it goes to 2%, then recharge to 100% while the laptop stays on, then turn it off for about 15-20 mins, then turn it on again. I do take the occasional break, e.g. when I'm watching TV, eating, sleeping, showering, or when I go out...

c) The battery is dying like an "idiot"!

2) I've not followed any rules whatsoever with my other battery that came with the computer, and it lasted four to four and a half years.

Now, I'm confused and worried about how the current battery is dying; it's already on 17%. What do you guys suggest I should do? Note: I have a warranty for 6 months too...

Apostle (talk) 22:39, 8 February 2016 (UTC)

There's an option in Win7 and later to only charge the battery to about 80%. Look in the Power Management settings. Repeated partial charges/discharges will wear out the battery quicker than leaving it at 100% charge. LongHairedFop (talk) 19:34, 9 February 2016 (UTC)
[citation needed] for the claim that repeated partial discharges are worse than leaving the battery at 100%. The device almost definitely has lithium ion of some sort, and these chemistries tend to work best if you don't store the battery at 100% and don't fully discharge. (Although full discharge cycles tend to help the device give better life estimations.) See http://batteryuniversity.com/learn/article/how_to_prolong_lithium_based_batteries and [7] [8] for example. Nil Einne (talk) 06:03, 10 February 2016 (UTC)
It sounds as if you have a faulty battery. Laptop batteries should last at least three hours from full charge, and some last much longer, though the time will vary according to usage. Try timing how long the battery lasts with continuous usage, then take it back to the store where you bought it. You will have a stronger case to present if you were given some indication of the battery life you could expect at the time of sale. By the way, if you are able to leave the charger attached as you work most of the time, then this will save on long-term battery life by not repeatedly charging and discharging. Dbfirs 20:25, 9 February 2016 (UTC)
Well, it's 5200 mAh. At first it was giving 5h 13m. Now it shows just about 4h 20 or 30m after a full charge (I have to shut it down and then turn it back on, because the battery dies even quicker if you don't turn it off...). From what I recall, it displays 1h 25m if I'm only using an MS Word document of more than 120 pages... Is that normal?
I found the Power Options settings in the Control Panel, but no option is available for how much to charge up to. I have Windows 7 Ultimate. Unless my English is not functioning again! - could you guide me please?
Apostle (talk) 22:10, 9 February 2016 (UTC)
Most laptops have control circuitry next to the battery that prevents overcharge, so I can see no reason why you shouldn't leave it on charge well past the 100% reading. Some people claim that you shouldn't leave power connected long-term, but I've always ignored that advice, and the laptop on which I'm typing this has been connected to the power supply almost continuously for over eight years and is still working (though the battery now lasts only a few minutes on its own without external power). The number of pages in MS Word makes negligible difference, but the time editing in Word should be at least four hours before it turns itself off. If you are watching a DVD or running external devices then the time might be shorter. You can turn the screen brightness down a bit to save battery power, and there should be other power options available, but these control how much power is used, not how much to put in. You will find that for every charge and discharge, the time you get from a full charge reduces by up to 0.1%, and this is normal. The calculation of time left is unreliable because it estimates this from current usage and past experience. If your usage varies, the time will go up and down as it recalculates. Dbfirs 22:42, 9 February 2016 (UTC)
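To put that up-to-0.1%-per-cycle figure in perspective, here is a rough compounding calculation in Python (a crude model that assumes the worst case on every cycle):

  # Rough capacity-fade estimate: up to 0.1% loss per full charge/discharge cycle.
  per_cycle_loss = 0.001

  for cycles in (100, 300, 500, 1000):
      remaining = (1 - per_cycle_loss) ** cycles
      print(f"after {cycles:4d} cycles: ~{remaining:.0%} of original capacity")

  # Prints roughly 90%, 74%, 61% and 37% respectively.

So even at the quoted worst case, a battery cycled fully once a day would keep roughly 70% of its capacity after a year.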
The reason you shouldn't leave the power connected long term with the battery in is that it reduces battery life. Lithium ion batteries have a significantly shorter life (in terms of how much charge they can hold over time) if held at 100% state of charge (or 4.2V or higher, for the types of lithium ion chemistries most commonly used) long term. Also, some devices don't disconnect the battery and run only on mains power when fully charged. Instead they use the battery and then top up the charge when it gets below a certain level. (Most devices will also top up the battery anyway, although self-discharge of lithium ion isn't that high, so I admit I'm not sure how much of a difference it makes, but it probably makes some.)

Unless the device or battery is seriously defective, it's unlikely the battery will be dangerous if you do keep it at 100%, but if you have a battery, the assumption would be that you want to use it, so it would be better to use the device in such a way that you don't shorten the battery's life.

With a laptop, if you plan to use the device on mains power for a long time, it would be a good idea to remove the battery and run on power only, if you can (although this means you could get data loss if the power fails and you don't have a secondary UPS), preferably with the battery at around 70% charge. Alternatively, fancier laptops may let you limit the charge to ~70% (or 3.9-4V for the types of lithium ion chemistries most commonly used). (If it's an old laptop now used like a desktop, perhaps with the battery as a sort of UPS, this doesn't matter much.) Note however that it can be worse to discharge down to 0%, so if storing it at 70% means you often discharge down to 0%, it may be better to store it at 100%. (Again, I admit I'm not completely sure how much of a difference this makes, as in both cases the amount of discharge would, I presume, be the same, but my understanding is that discharging down to 0% is usually a bit worse for battery life than charging up to 100%.)

Nil Einne (talk) 06:03, 10 February 2016 (UTC)

P.S. Most of what I said above is supported by the refs listed above. [9] has some info on capacity variation after storage, albeit storage at 55 degrees C. Interestingly enough, they found 0% is best, whereas the most common recommendation is 30-70%. But I think the reason for that may be that if you store the cell at very low SoC, you run the risk it will discharge to a level where it can't safely be used anymore.

Nil Einne (talk) 06:22, 10 February 2016 (UTC)

I don't disagree with anything Nil Einne writes above, but it's all too much bother for me to keep removing the battery. I just assume, perhaps wrongly in view of the linked documents, that the battery control circuitry is intelligent and optimises battery life. The worst thing for Li-ion batteries seems to be high temperature. Perhaps that's why my batteries last longer in cool Cumbria. Dbfirs 19:31, 10 February 2016 (UTC)
The thing is, there's an inherent contradiction in consumer demands. While it's true most people want their battery to last a long time, most people also want their device to last a long time during use. So while the manufacturer could likely extend battery lifespan by charging to 4V instead of 4.2V (or whatever), they don't, because they would prefer to have the battery hold more charge. Further, it's much easier for someone to do a test and find the device only lasts for 5 hours instead of 6 on a single charge than it is for someone to know that after 300 cycles the battery will still have 90% of capacity instead of 50% (these are made-up numbers). Even worse from a consumer POV would be plugging the device into the charger and using it, only to find when they disconnect that it's only 60% charged, because the device stopped charging and didn't top up the charge after use. I admit I'm not always certain why some devices, when plugged in and fully charged, continue to use the battery rather than just mains power, although particularly with phones I think a significant reason is that the charger may not be able to fully power the phone during high power usage, and that handling this increases the complexity of the phone's powering circuit for something which may often not be used much. Also, as mentioned, I'm not certain this really makes a big difference. What I strongly suspect would be better would be to allow the battery to discharge a bit, because while discharging the battery also affects the battery's ability to hold capacity, it's probably better than continuing to hold it at 100%. But as mentioned, few people want to take their device off power only to find it's only at 60% charge because the device used the battery and didn't top it up. Or to put it a different way: the charging circuitry is smart, but it's not magic, and from the manufacturer's POV the first big concern is safety and the second big concern is getting maximum life from a fresh battery and ensuring the device behaves the way the customer expects. Maximum life after 2 years is below all that (and maybe other concerns too) on the list of priorities. Nil Einne (talk) 17:04, 11 February 2016 (UTC)
Yes, that analysis sounds exactly right. Dbfirs 18:18, 11 February 2016 (UTC)
Okay guys, this is my experience with this current battery (this doesn't mean the same will apply with all of you):
Say, for example, you let the battery die out completely - this is very bad. If you fully charge the battery and still use it after unplugging - very bad for the battery. Also, heavy usage while not on charge - very bad for the battery.
Now, I'll try the theory of 70%. See if it works... Thank you all for trying to help. Regards.
Apostle (talk) 18:30, 10 February 2016 (UTC)
I'm not sure why "fully charge the battery and still use it after unplugging" would be very bad, except in so far as each charge and discharge reduces battery capacity by up to 0.1%. Let us know how your experiment goes. Dbfirs 18:18, 11 February 2016 (UTC)
I'm using Everest software to review the mWh. Yesterday/today I found out that (1) you have to switch off the laptop twice (not sure at what percentage) after starting at fully charged; you have to keep it off for less than an hour or so, though. Point (1) did not work twice before, so I'm guessing it depends on the percentage at which you shut your laptop off... (2) I also tried using the laptop to its last point after a full charge, then recharging it to full after seeing the last warning (after the notification) - two or three times it worked, then it flopped out (reduced the mWh). What you stated earlier about the battery trying to replicate the timing of last time was correct. Also, do not keep it on charge even after it's fully charged. Do not put it on sleep or hibernate mode - they are for desktops only, not for PCs running on battery. Also, it doesn't go by 0.1%; it goes down by roughly similar amounts, starting from 647 mWh - again depending on the usage... And yes, charge and discharge reduce the mWh too if the computer is on. I'll update you guys if it doesn't go down... keep it between all us Wikipedians! -- Apostle (talk) 19:39, 11 February 2016 (UTC)

February 9

faster cube root calculation?

Is there a way to calculate the real cube root of a real number that is faster than the log and exponential method? Bubba73 You talkin' to me? 04:48, 9 February 2016 (UTC)

Sure, there are loads of options... what are your problem constraints? How accurate do you need to be? Can you use look-up tables for some or all calculations? Do you know that the input is centered around a particular value (suitable for a truncated Maclaurin series or other approximate method)? May we assume you have conventional floating-point computer hardware, or do we need to work with some other type of machine? Are we allowed to parallelize calculation work?
My first instinct was to formulate the cube root of k as a zero of the polynomial x^3 - k, and then to apply (essentially) Newton's method to find the zero. You have the advantage of knowing, analytically, that the function is monotonic and that there is a single real zero crossing, so you can use that fact to your advantage. (Existence of exactly one real zero follows from monotonicity and the intermediate value theorem.)
Next I referred to my numerical analysis book, Numerical Analysis by Burden and Faires, which suggests applying Horner's method to speed up the polynomial evaluations inside Newton's method. The book actually provides code examples (in Maple) and works the numerical method for a few examples. In this specific case, I'm not sure it will make any difference, as most of the polynomial's coefficients are zero. There are a lot of similar dumb tricks named for smart mathematicians; each one can shave off a couple of adds and multiplies. This probably won't actually change the execution time in any significant way on modern computer hardware.
These are appropriate accelerations if you are solving numerically using an ordinary type of computer; but if you're working with weird computational equipment - like, say, using constructive geometry to analytically solve for the root - there may be faster ways of finding the answer.
Nimur (talk) 05:22, 9 February 2016 (UTC)
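To make the Newton iteration described above concrete, here is a minimal Python sketch; the starting guess and stopping tolerance are arbitrary choices, and negative inputs are handled by symmetry:

  def cbrt(k, tol=1e-15, max_iter=100):
      """Real cube root of real k via Newton's method on f(x) = x**3 - k."""
      if k == 0:
          return 0.0
      sign, a = (-1.0, -k) if k < 0 else (1.0, k)
      x = a if a >= 1.0 else 1.0  # crude positive starting guess
      for _ in range(max_iter):
          x_next = x - (x**3 - a) / (3.0 * x**2)  # Newton step
          done = abs(x_next - x) <= tol * abs(x_next)
          x = x_next
          if done:
              break
      return sign * x

  print(cbrt(27.0), cbrt(-8.0))  # approximately 3.0 and -2.0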
Here is a machine architecture enhancement to enable hardware-accelerated Taylor series expansion of the square root, for an IEEE-754 floating point multiply/divide unit: Floating-Point Division and Square Root using a Taylor-Series Expansion Algorithm, (Kwon et al, 2007). If you can follow their work, you can see how, by extension, one could build the same hardware for the cube-root polynomial expansion.
Is that kind of hardware worth the cost? Well, only if you really need to compute a lot of cube roots, and even then, only if you can convince the team who builds your floating-point multiplier into silicon. Most mere mortals never get to provide such feedback to their silicon hardware architect. But, once this type of enhancement is built and done, you get to compute cube roots in "one machine cycle," for the arbitrarily-defined time interval that is "one machine cycle." Nimur (talk) 16:57, 9 February 2016 (UTC)
I think it's unlikely you'll get better results than the standard math library in your preferred language...UNLESS you know something that it doesn't. So if, for example, you know the range of the numbers you're giving it, or the precision of the results you're prepared to tolerate...or that you're doing a series of consecutive calculations...or that you're going to use the output in some subsequent calculation. But if you don't have a more specific thing than "I want the cube root of absolutely any number with a full-precision result at any time" - then I doubt you'll beat the built-in library.
For example, you might consider a lookup table with linear interpolation between points in the table. That won't work unless:
  1. You can constrain the inputs to a small range of numbers...and...
  2. You will run the code in a tight enough loop that the lookup table gets into cache and stays there...and...
  3. You can tolerate the errors in the result for a reasonable size of lookup table that'll fit into cache.
If any of those things aren't true - then the memory access time for the lookup could easily exceed the time for the FPU to do the log/exp thing. SteveBaker (talk) 18:55, 10 February 2016 (UTC)
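For illustration, a toy Python version of that lookup-table-with-interpolation idea; the range and table size are arbitrary choices here, and a serious version would be written in C and measured against the cache effects described above:

  # Cube roots on a fixed range via a precomputed table plus linear interpolation.
  LO, HI, N = 1.0, 8.0, 1024
  STEP = (HI - LO) / (N - 1)
  TABLE = [(LO + i * STEP) ** (1.0 / 3.0) for i in range(N)]  # built once

  def cbrt_lut(x):
      """Approximate cube root for LO <= x <= HI."""
      pos = (x - LO) / STEP
      i = min(int(pos), N - 2)  # clamp so i + 1 stays in range
      frac = pos - i
      return TABLE[i] + frac * (TABLE[i + 1] - TABLE[i])

  print(cbrt_lut(2.0), 2.0 ** (1.0 / 3.0))  # table answer vs. direct answer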

Smart device flasher boxes/dongles

I know this sounds illicit or illegal, but upon seeing cracks, loaders and dongle emulators for certain software used with service boxes for mobile phones, it had me wondering whether the dongles or boxes in question are any different from the ones used with high-end software like Pro Tools or Autotune for licensing enforcement, or whether they do indeed contain actual circuitry to carry out operations like removing SIM locks on phones and the like. Blake Gripling (talk) 05:31, 9 February 2016 (UTC)

Wikipedia is not censored, so feel free to ask about illicit/illegal subjects. Wikipedia is also public, so in some cases you might want to create a new username just for asking the question (See WP:SOCK for things you should not do with the second username). In the case of dongles, circumventing them for purposes of backing up your software or for having a spare in case the dongle fails is generally considered to be ethical. I won't comment on the legality, and neither should anyone else -- Wikipedia does not give legal or medical advice.
Different dongles have different internals. In general, if the company is sending out thousands and thousands of them, you can usually assume that they are cheap to make and thus pretty simple inside. If they only send out a few, the dongle may be more sophisticated and may even use a Secure cryptoprocessor. --Guy Macon (talk) 19:14, 9 February 2016 (UTC)
Well, it's true that Wikipedia is not "censored", but it is against the ref desk policy to provide legal or medical advice, and it's generally frowned upon to give advice about illegal activities, such as harming people or explicitly breaking the law by committing software piracy, etc... Vespine (talk) 00:55, 10 February 2016 (UTC)
Well, basically this isn't necessarily about the cracks themselves, nor would I provide any advice or directly encourage them anyway. What I'm wondering is: since some of the boxes are necessary for SIM unlocks to be done, do they contain actual circuitry for communicating with the device (which I'm sure they do, especially for certain protocols), or are they reduced to just software-protection dongles as with newer smartphones, since most of the functions provided by the likes of Sigmakey can be done using freely available tools anyway? Are they bespoke ASICs or just programmable FPGA chips? Blake Gripling (talk) 05:37, 10 February 2016 (UTC)

Buying digital cameras compatible with legacy analog lenses

My father has hundreds of dollars' worth (over $1000) of cameras with special lenses bought in the '70s and '80s. He keeps asking me why they don't sell digital backs that are compatible with the fronts he has. My answer is prohibitive economics. (I can explain the economics, I just don't know the mechanics.) But I would like to confirm that there isn't such a thing as what he is asking for: a way to take digital photos with his old lenses. Does such a thing exist? Thanks. μηδείς (talk) 18:50, 9 February 2016 (UTC)

The only technical reason I can think of is that older lenses would be manually adjustable, which interferes with a digicam's ability to do things like autofocus. StuRat (talk) 19:04, 9 February 2016 (UTC)
In the Nikon world, many lenses with the Nikon F-mount (which was introduced in 1959) can be used on even their most modern digital SLR cameras, although there are limitations and some incompatibilities. I don't know what the situation is for other camera or lens manufacturers, however the first sentence in the History section of the F-mount article gives a clue - "The Nikon F-mount is one of only two SLR lens mounts (the other being the Pentax K-mount) which were not abandoned by their associated manufacturer upon the introduction of autofocus, but rather extended to meet new requirements related to metering, autofocus, and aperture control." Both cameras and lenses have had more and more functionality added over the years. An older Nikkor lens on a Nikon D90 likely would not support autofocus or aperture setting. Taking another approach, there have been various attempts to create a digital back for film SLRs, but none seem to have really taken off - search "Digipod" on your favorite search engine for one of the most recent attempts. --LarryMac | Talk 19:07, 9 February 2016 (UTC)
(edit conflict) Yes, that's perfectly possible. You will just need a lens adapter to be able to physically mount the lens onto the camera body. Note that, as StuRat says, you'll miss out on most of the focussing tricks that modern DSLRs offer, but it will certainly work and you'll be able to take pictures. There's a guide here that applies specifically to Canon EOS bodies, but the principles are the same for any manufacturer. - Cucumber Mike (talk) 19:13, 9 February 2016 (UTC)
  • Excellent. Now that I know this is possible, I will make him send me a detailed list, since I am not a camera buff, and have no idea what he owns. Thanks everybody! μηδείς (talk) 19:17, 9 February 2016 (UTC)
There IS even such a thing as a digital camera back, which you can get for cameras whose lenses don't have adapters for an equivalent DSLR, but they are quite expensive. NOW, having said that, I actually had the very same issue: I had over $1000 worth of lenses, bought over the years, which fit my old Canon film camera. I finally decided to take the plunge to a DSLR 2 years ago. I did a LOT of research and consulted with friends; a neighbour of mine had a Canon 6D with the gorgeous EF 24-105mm f/4L IS USM L-series lens. I confirmed my other lenses would fit on the body, waited until the Christmas sales, and decided to get the kit with the lens for about $2000 instead of the body only, which would have been about $1400. Thing is, apart from some "playing around" early on, I've never used my old lenses anymore. The ONE lens that came with the kit, the L series, just blows all my old lenses out of the water. Now I know there's lenses and then there's "LENSES", but if you're saying that each lens is hundreds of dollars rather than thousands of dollars, I suspect you "might" be in the same boat as me. Lenses and cameras have come a LONG way over the last 20-30 years, and unless you are a real "artist" and have antique Zeiss glass :) I suspect that if you spend $1000 on one lens these days, it will perform better and be more convenient than any of your old glass. Unless of course you mean he has fisheye, super-macro or super-wide lenses or something like that. Vespine (talk) 22:21, 9 February 2016 (UTC)
  • I'll sum it up: it's certainly possible using a modern camera - though possibly a bit cumbersome. For any manufacturer, it's possible via an adapter to a mirrorless camera like this Sony, where the mount is quite close to the sensor - I can probably give you more detailed advice when you know what he has, but here's an article on using Leica lenses with it, and here with Olympus kit. Adapters tend to be made by Chinese manufacturers and sold third-party via eBay (there are some exceptions), and quality apparently varies a lot, so you'll want to look at guides and reviews. But do remember that modern lenses have modern features like image stabilisation, autofocus, automatic diaphragm control and modern computer-aided design and refinements in manufacturing aspherical lenses - things photographers have got to take for granted for the last twenty years - plus they will work more precisely than a lens bolted to an adapter, since the extra connection increases deviation from the ideal position. So modern lenses may give better results unless your dad's old lenses are truly excellent and your adapter good, and this is particularly true for any kind of 'action' photography, where autofocus is a great aid. In addition, it may be cumbersome, since you'll need to focus and set the aperture manually every time and then meter before taking a shot. I should stress that I just know about this out of interest; I've never done it myself.
    On Nikon, things get much better: it's possible to use some of the modern Nikon cameras (though often the more expensive pro ones) with older equipment. See Ken Rockwell's website; I've seen him say that Nikon actually offers surprisingly good phone support in the US on users trying to make odd combinations of equipment work.
    An additional problem is that cheaper digital cameras use a sensor smaller than 35mm film, so unless your dad gets a more expensive prosumerish camera (like the aforementioned Sony) he will have to deal with his images looking cropped compared to the same lens on film. But again, if you tell us what your dad has, it should be possible to give you more advice.
    Finally: this only works with 35mm film cameras, the normal kind. If your dad has equipment for medium format film, which was getting less popular in the 70s but is still often used by serious landscape and fashion photographers, or a huge view camera, he's probably stuck with film - you can't get digital sensors that size for any sensible amount of money. Blythwood (talk) 06:04, 10 February 2016 (UTC)

follow-up: Do they make a digital conversion back so film cameras can have the film mount back replaced?

My thanks, and my dad's for all the answers above. Again, I am acting as a proxy for my dad in asking this; I know nothing about the technology myself. μηδείς (talk) 19:42, 10 February 2016 (UTC)

In response to the answers up to Feb 9th, he asked me to repose the question as:

"Do camera manufacturers make a digital conversion back so that film type cameras can have the film mount back removed and an electronic receiver installed?"

See the Digital camera back article linked by Vespine - short answer, not at any affordable price point, and mostly for medium and large format cameras (i.e. not your typical 35mm SLR). The "Digipod" that I mentioned was to sell for around $370; it was an Indiegogo project that failed to reach any significant interest. Understanding that you are a proxy, but knowing your father's camera brand and type would be helpful in providing more detail. --LarryMac | Talk 21:20, 10 February 2016 (UTC)
Exactly. The thing is, refitting a film camera to interface with a sensor, getting the alignment of the sensor right and so on...all very precise machining jobs. A whole new camera is actually easier (and cheaper, since it can be mass-produced as one module). Digital camera backs are almost without exception for medium format cameras, systems that cost five figures for which it might genuinely be worth upgrading the sensor but keeping the camera module every now and then. Blythwood (talk) 21:42, 10 February 2016 (UTC)
As a separate point, modern film scanners are very good and film development companies can often now do development from film and scanning in one go then send you images on a USB stick. If your dad doesn't want to take that many pictures, and doesn't fancy some kind of completely new camera setup, that might be a good option. Here's one blogger on it. But obviously you're stuck with the limitations of film (not great indoors, no instant replay, 36 photos at a time) and it would get expensive for many photos. Blythwood (talk) 21:50, 10 February 2016 (UTC)
Also, the availability of film and film developing could be an issue. Some types of film have already been discontinued, and you can expect more to follow. The eventual fate of film is an interesting question. Will it disappear completely, or will some remain indefinitely, but in small quantities, like vinyl records? StuRat (talk) 22:01, 10 February 2016 (UTC)

February 10

Do tracking features offer any benefits to users?

Criticism_of_Microsoft_Windows#Data_collection describes the reasons why users would want to prevent the Advertising ID from being transmitted. Similarly, Do Not Track lists the advantages for opting out from that. Is there any reason for a normal user to keep those set to the default? Microsoft User 2016 (talk) 14:55, 10 February 2016 (UTC)

It would benefit me and benefit the advertisers if they could stop serving me ads for things I will never buy. Alas, it often works the other way: someone searches for car insurance or a new PC, makes their decision and buys the product, and then gets bombarded with ads for the thing they already bought for the next year or two. --Guy Macon (talk) 15:15, 10 February 2016 (UTC)
It could be worse: Our neighborhood beautification group wanted to buy some dog-poop cleanup stations for the neighborhood - I did a search to see what they cost (I wasn't going to be buying them) - and for a month afterwards got nothing but adverts relating to dog poo. Just wonderful! SteveBaker (talk) 18:45, 10 February 2016 (UTC)
Thank you, that makes sense for the Advertising ID. What about Do Not Track? --Microsoft User 2016 (talk) 15:27, 10 February 2016 (UTC)
The really annoying advertisers ignore Do Not Track, but it doesn't hurt to turn it on. I suggest that you use Privacy Badger, which makes it really hard for anyone to track you. --Guy Macon (talk) 16:02, 10 February 2016 (UTC)
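On the mechanics: Do Not Track is nothing more than an extra header on each HTTP request, which is why sites can ignore it so easily. A sketch of literally all that gets sent, using the third-party Python requests package and a placeholder URL:

  # "DNT: 1" in the request headers is the whole Do Not Track mechanism;
  # whether the server honours it is entirely up to the server.
  import requests

  response = requests.get("https://example.com/", headers={"DNT": "1"})
  print(response.status_code)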
Thank you. Privacy Badger doesn't work for Edge or IE, so it seems I'd have to change the browser, too. --Microsoft User 2016 (talk) 16:55, 10 February 2016 (UTC)

Are my AVG, Firefox, and Windows Defender issues related?

I am currently using a 2009 Gateway NV78 laptop running Windows 7. I have resorted twice since last November to reinstalling from the installation disks. I have basically run into three repeating issues:

I cannot install AVG antivirus past 92%. I have used it for years, but since my last reset this January, I have not been able to reinstall the 2016 or 2015 version.
Firefox freezes up and takes forever to run, if it ever does run. Using it has resulted in two bluescreen errors in January and a reinstall from the restore disks. I currently have Firefox uninstalled.
Even after uninstalling every antivirus program I can with ComputerCleaner, I cannot get Windows Defender to work or stay activated. (I get an error message saying it is turned off, and if I try to turn it on, I am lucky if the computer doesn't freeze.)

I am wondering if these three issues might be related? I was able to make and run an updated offline disk for Windows Defender. It works and finds no malware. But I can't restore AVG or a working version of Firefox at all. I find Chrome tedious and refuse to go back to it. I would like Firefox, but have no problems with IE. But I am very concerned that I have no working antivirus, and would really like to get AVG or an equivalent free program back.

Can anyone at least suggest a good downloadable free antivirus program, or suggest what might be wrong with my AVG downloads?

I have occasionally gotten an invalid-certificate error mentioning the timestamp, but my computer shows the correct time and date for my EST timezone. Thanks for any help. μηδείς (talk) 20:40, 10 February 2016 (UTC)

You could try Avast or Avira; they seem to get decent writeups on the tech sites. I personally stick to MSSE. Six years for a Windows laptop is a pretty decent run; I'd consider upgrading if I were you. Vespine (talk) 00:24, 11 February 2016 (UTC)
You could also download free Malwarebytes and Spybot – Search & Destroy for extra checks that you don't have some sort of malware. This answer typed on an eight-year-old laptop. Dbfirs 11:15, 11 February 2016 (UTC)
Thanks, I was already recommended Avast and I'll try the rest. 20:30, 11 February 2016 (UTC)
I have several installed, but keep just one running for protection. I run the others as a one-off check if I think something is wrong or if I've accidentally clicked on something risky. Dbfirs 00:20, 12 February 2016 (UTC)

February 11

OSX Yosemite not opening / launching files / dmg / updates etc.

I don't know if people still answer questions like this here (last time I was *here* was like 2009 or something), but I am having a huge bug with my MacBook Pro Retina, whose AppleCare recently expired. Now, whenever I double-click on a picture on the desktop, try to install a new program (i.e. open the .dmg file to allow for installation), or even just update Adobe Flash, I instead just get a "verifying" pop-up with a security icon that never loads or finishes. So basically I cannot do anything. I can still open files from within programs (like opening a picture from the Open drop-down menu in Preview), but I cannot install new programs or updates.

Here is a picture of the messages

Things I have tried: it's not the "right click to give permission to open apps not from the App Store" thing. I checked my security settings. I restarted the computer in safe mode and tried again. I tried as a different user. I repaired disk permissions. I let someone smart look at what was showing up in the Console (they didn't see anything that looked suspicious).

At the very least, could someone tell me what words I should actually be using to describe this phenomenon?

Finally, if the answer is to wipe the whole thing and reinstall the OS, can I still just install Yosemite (I don't want El Capitan yet, since I have legacy software)? And how would one then put everything back from Time Machine without putting back the problem? I used to be really good at Macs until they got sort of automated. I do not want to screw things up more by typing Unix commands into the Terminal based solely on stuff I read in online threads, since I don't even know which problem I actually have.

Thanks Saudade7 11:02, 11 February 2016 (UTC)

Unifying devices and e-mail

This may just be a question about the right terminology, as I don't know the right words to google for. But the situation can't be uncommon: I have three devices that each have their own OS and data, which I would like to combine:

  1. desktop: my old machine, still running Win XP. E-mail through my main account (not Gmail or MS mail, but a domain hosted through a traditional web host), managed with Outlook;
  2. laptop: Windows 10, just bought. This apparently requires a Microsoft e-mail account for some basic functionality;
  3. phone: Android; e-mail via Gmail.

I forward e-mail from my main account (on the desktop) to the Gmail account through an Outlook rule. This is cumbersome (I need to edit the rule constantly, because I selectively forward only some mail - I get too much mail through my main account), so I'm looking for a way to simplify my life. Also, any data transfer between the devices currently goes through USB sticks or a USB cable, which isn't the easiest way. How do others juggle three different devices? --Microsoft User 2016 (talk) 17:18, 11 February 2016 (UTC)

It sounds like you need to exclusively use a server-based email system, like Gmail. This will allow you to access it from any device with an internet connection. Outlook, I believe, works the other way, and downloads email to your device. This is problematic with multiple devices. For now, I'd just forward everything from Outlook to Gmail, and sort it out there. If you are getting lots of spam, then maybe you need to set up some filters, whitelists, or blacklists to stop those. (Note that Gmail still allows you to download things, when you choose to do so.) StuRat (talk) 17:31, 11 February 2016 (UTC)
Thank you for your reply. The bulk of the mail is not spam, but just things like notifications from web sites I signed up for, or organizations or initiatives I support or follow. Currently I move them to their respective folders and look at them when I feel like it. Over the years, I have built up quite an intricate system of folders and rules for that; I'm not even sure if I could replicate that in Gmail. Also, that would still not help me with the need for a Microsoft account, or would it? It seems the MS account is needed to start such simple applications as OneNote, and I thought it would help me unify non-e-mail data across devices. Or can that all be done with Gmail, too? --Microsoft User 2016 (talk) 18:04, 11 February 2016 (UTC)
Regarding "This will allow you to access it from any device with an internet connection." I can do that through Outlook, too, via IMAP, but that comes with its own set of issues. --Microsoft User 2016 (talk) 18:07, 11 February 2016 (UTC)
I don't have any suggestion on the email front, but for data transfer, there are numerous cloud storage options, such as Dropbox and Google Drive. Bluetooth file transfer might also be an option, but possibly not on an XP-vintage PC. --LarryMac | Talk 18:14, 11 February 2016 (UTC)
Actually, that could conceivably be a solution for the email front, too, if I kept a synchronized copy of the Outlook data file on that location, too. (I prefer synchronized because I don't want to be dependent on always having an internet connection. My Outlook file contains information such as appointments which I may need to access on the road.) --Microsoft User 2016 (talk) 18:20, 11 February 2016 (UTC)
I can tell you what I do... I have many email accounts, because every university I teach at gives me a new one and every organization I work for gives me a new one. I forward all of them to Gmail. I use IMAP clients to read Gmail (like Thunderbird). I built rules through the Gmail web client to put emails in folders based on subject lines, sources of the emails, and other things. Then, if I'm on my own computer, I run Thunderbird. On my phone, I use the built-in IMAP client. If I'm using someone else's computer, I use the web and go to Gmail from there. I have had only one downside: when I send email, it always comes from my Gmail account. I don't care, but I've had some people ask why I'm using my Gmail account when they gave me an email address. I tell them that I always send email from one account so they will know it is me. They accept it. 209.149.115.90 (talk) 18:42, 11 February 2016 (UTC)
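For the curious, the IMAP side of that setup can be scripted with Python's standard imaplib alone. A minimal sketch, assuming IMAP access is enabled on the Gmail account and an app-specific password is used (the credentials are placeholders):

  import imaplib

  with imaplib.IMAP4_SSL("imap.gmail.com") as mailbox:
      mailbox.login("you@gmail.com", "app-password-here")  # placeholders
      mailbox.select("INBOX", readonly=True)  # read-only: don't mark as seen
      status, data = mailbox.search(None, "UNSEEN")
      print(len(data[0].split()), "unread messages")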
Thank you for relating your experience. With your gmail rules, can you arrange it so that when an unimportant mail arrives, it does not show up as a new arrival, and you don't get the notification sound? Regarding the sender address, Outlook allows you to change the sender, but it still says "sent on behalf of ...". Maybe gmail has a similar feature. --Microsoft User 2016 (talk) 19:04, 11 February 2016 (UTC)
When new mail arrives, it gets sorted and then the email program I'm currently using can alert me however I've told it to. Thunderbird has rules per folder - so I can tell one folder to play a deafening siren and another to do nothing. On the web interface, I have no sounds. On my phone, I don't let it alert me at all. When I feel like looking, I see if any folders are highlighted as having new email. This requires me to keep up with the email. I'm not someone who lets emails sit for months and then have over 1,000 "new" emails in every folder. As for the sender email address, you can change it in Thunderbird easily, but that can be overwritten by the SMTP server. GMail's SMTP server ignores your request and uses your gmail email address. 209.149.115.90 (talk) 19:10, 11 February 2016 (UTC)
So I understand the following; please correct me if I'm wrong: The functionality you're describing seems to work for any IMAP account. So it would just as well work with my current (main) account, or with a Microsoft account. The reason why you're using gmail is because the phone expects it. Right? Do you have a Microsoft device that expects a Microsoft account, too? If so, do you use that account at all, or just as a temporary thing to get to the features? --Microsoft User 2016 (talk) 19:28, 11 February 2016 (UTC)
Correct. What I'm doing works for any IMAP account. I use gmail because it is my personal email account. I assume that I could lose one of the other email accounts at some point in the future (and I wish I could lose some that I don't use and have lost my password for, but whose mail still gets forwarded to me). 209.149.115.90 (talk) 19:56, 11 February 2016 (UTC)
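To illustrate the point that any IMAP account can be handled the same way, here is a minimal Python sketch using only the standard library's imaplib (the server name, address and password are placeholders - substitute your own provider's details):

    import imaplib

    # Placeholder host and credentials - any IMAP provider works the same way.
    conn = imaplib.IMAP4_SSL("imap.gmail.com")
    conn.login("you@example.com", "your-password")
    conn.select("INBOX", readonly=True)

    # Ask the server for the IDs of unread messages.
    status, data = conn.search(None, "UNSEEN")
    ids = data[0].split()
    print("Unread messages:", len(ids))

    # Peek at the headers of the first unread message, if any.
    if ids:
        status, msg = conn.fetch(ids[0], "(BODY.PEEK[HEADER])")
        print(msg[0][1].decode("utf-8", "replace"))
    conn.logout()

This is the same protocol that Thunderbird and a phone's built-in client speak, which is why the setup described above is portable across devices.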

Why speakers on bottom of device?[edit]

Just out of curiosity: Both my Samsung Galaxy Core Prime phone and my new Lenovo ideapad 100S have the speakers at the bottom of the device. Searching for "why speakers at bottom of devices" gave me http://androidforums.com/threads/why-speakers-on-rear-of-phone.392616/, which, in unnamedny's reply, at least gave me one reason that would apply to my phone. Their main argument there, that it's for space limitations, doesn't apply to my phone, and certainly not to the laptop, which has plenty of space next to the monitor and keyboard. --Microsoft User 2016 (talk) 18:55, 11 February 2016 (UTC) PS: What's a "congabible"? That's the CAPTCHA word I had to enter because of the link. :-) (No need to answer, I know it's random; I just found the word funny.) --Microsoft User 2016 (talk) 18:55, 11 February 2016 (UTC)

Congabible is two words "conga" and "bible" crammed together. The CAPTCHA here makes the assumption that nobody would figure out how to use a dictionary of 4 and 5 letter words to do a best two-word match and write a script to automatically fill in the CAPTCHA. 209.149.115.90 (talk) 19:04, 11 February 2016 (UTC)
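For the curious, the two-word match described above takes only a few lines; here is a minimal Python sketch (the word set is a tiny stand-in for a real list of 4- and 5-letter words):

    # Try every split point of the CAPTCHA token and check whether
    # both halves are dictionary words.
    WORDS = {"conga", "bible", "motor", "cycle"}   # stand-in dictionary

    def split_captcha(token):
        for i in range(1, len(token)):
            head, tail = token[:i], token[i:]
            if head in WORDS and tail in WORDS:
                return head, tail
        return None

    print(split_captcha("congabible"))   # ('conga', 'bible')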
Yes, I know, I shouldn't have posted this on the Computer reference desk. I just couldn't resist. --Microsoft User 2016 (talk) 19:06, 11 February 2016 (UTC)
Heh! That suggests a challenge. Record the next 5 CAPTCHA "words" one encounters, and write a story that uses them as neologisms. The Congabible could be a definitive dance instructional manual. {The poster formerly known as 87.81.230.195} 185.74.232.130 (talk) 14:49, 12 February 2016 (UTC)
Hmmm...I assumed it was the regular King James' bible, encoded as a series of complex interpretive dance moves played out to the conga rhythm. This is why it's inadvisable to ask these things on the Computing desk! :-) SteveBaker (talk) 18:30, 13 February 2016 (UTC)
The Lenovo Ideapad 100S uses rather poor speakers (very tinny) to begin with. I assume they figured nobody would care if the sound was also muffled. 209.149.115.90 (talk) 19:29, 11 February 2016 (UTC)
I've seen sub-bass speakers pointed downward. I had assumed that was to transfer the vibrations to the floor or furniture to make that into a larger resonator. StuRat (talk) 20:06, 11 February 2016 (UTC)
That is true. Before answering, I checked to see if these were sub speakers. The Lenovo Ideapad 100S only has two cheap rectangular general purpose speakers. No sub. Found many people complaining that the sound is terrible and others saying that you get what you pay for. 209.149.115.90 (talk) 20:08, 11 February 2016 (UTC)
Yes, sound is tinny, and I'm not complaining; I understand that it saves money. I'm just wondering why in modern devices speakers are placed at the bottom. --Microsoft User 2016 (talk) 23:31, 11 February 2016 (UTC)
One theory: Cheap-ass speakers produce way too much treble and too little bass. Treble tends to be blocked more by solid objects. Thus, one way to fix that problem is to cover the speakers and crank up the volume. Most of the treble gets blocked and the ratio is more acceptable. Of course, if they have treble/bass adjustments, that should fix the problem, too, except those may be absent or of minimal effectiveness on a bad speaker set. So, they just pointed the speakers down so most of that excess treble will be absorbed by whatever's underneath the speakers. You might try putting something soft under them, as that may be more effective at absorbing the treble. StuRat (talk) 04:58, 12 February 2016 (UTC)
Alternative theory: The gap between the laptop and the table improves the acoustic impedance matching by acting as a horn loudspeaker. --Guy Macon (talk) 05:16, 12 February 2016 (UTC)
I'd go with that one. Very *very* many years ago, I worked in the field of telephony in the UK. When the (then very innovative) Trimphone came out, we wondered how the electronics that transformed the electromechanical bell stimulus into a cute warbling tone actually worked. It turns out that the downward-facing speaker did indeed use the air gap beneath as a resonant chamber... which is why the phone was notorious for sounding crappy when the little rubber feet came unglued. SteveBaker (talk) 18:30, 13 February 2016 (UTC)
(Come to think of it - having read the Trimphone article and stirred a few memories - I think the volume control worked by mechanically raising and lowering the speaker to disrupt that resonance effect!) SteveBaker (talk) 18:34, 13 February 2016 (UTC)
Wouldn't the missing rubber feet cause vibration, and ruin the sound that way ? StuRat (talk) 18:39, 13 February 2016 (UTC)

echo characters onto new lines[edit]

Resolved

At the command line, how might one echo each individual character of a string onto a new line? For example

echo test

would become

t
e
s
t

Thank you 82.44.55.214 (talk) 21:30, 11 February 2016 (UTC)

You didn't specify what OS or what shell you're using, but if you have sed and pipes, you could do
echo test | sed 's/./&\n/g'

Mnudelman (talk) 21:46, 11 February 2016 (UTC)
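One caveat: \n in the replacement text is a GNU sed extension, so the one-liner above may fail on, e.g., BSD sed. If Python is available, an equivalent that reads stdin and prints one character per line:

    import sys

    # Print each character of standard input on its own line,
    # ignoring the trailing newline that echo adds.
    for ch in sys.stdin.read().rstrip("\n"):
        print(ch)

Saved as, say, split_chars.py (the name is arbitrary), it is used as: echo test | python split_chars.py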

Thank you 82.44.55.214 (talk) 22:15, 11 February 2016 (UTC)

Battery Warranty[edit]

If I have a warranty for my laptop battery and it dies before the warranty ends, will I be able to replace it and get a new one? If so, what percentage does it have to be at? It was bought on 21-12-2015. It has been completely drained three times, and so on. It's on 21% now. -- Apostle (talk) 21:37, 11 February 2016 (UTC)

There's no percentage batteries have to be at. It's by design that percentage changes as a battery gets charged and discharged. Or are you saying you can never go above 21%, no matter how long you charge it? That would definitely be defective. --Microsoft User 2016 (talk) 23:25, 11 February 2016 (UTC)
The percentage shown in the display just indicates the current state of discharge and is unreliable. If the capacity is only 21% of what it was initially (and as in the specifications), as measured by milliampere-hours, then you have a faulty battery and should claim under warranty. Dbfirs 00:26, 12 February 2016 (UTC)
It says: Wear Level: 21%.
Full Charged Capacity: is the row where the mWh decreases - I'm measuring it against the Designed Capacity: 48400 mWh that the software states.
Current Capacity: is the row that displays how much charge it currently holds... A few times during experimentation the mWh did not decrease in the Current Capacity: row.
Note: The last (original) battery had lasted since 2012. Its Designed Capacity was also 48400 mWh. I've done all sorts of things with and without charging.
Now, this current battery confuses me.
The battery had something like 1540444 mWh (or something like it) in the Full Charged Capacity: row, which gave just about 5 hr 13 min. On the battery itself it says 5200 mAh. It's reduced to 3 hours something (37807 mWh) now. However, the Designed Capacity: is 48400 mWh for the 5200 mAh, so I guess it's a modified version of the 4200 mAh.
I discharged it many times after it was fully charged, while the laptop was on - that reduced the mWh drastically.
It also switched itself off automatically (into power reserve mode) two or three times - that reduced the mWh drastically too. The third time, I quickly put it into recharge mode because I knew what it would do to the battery. Yes, I lost mWh, but not as much as the previous two times. Also note that while it was charging and trying to go into reserve mode, I manually switched the laptop off - I just did not trust anything...
I'm just wondering whether I could exchange this battery for a new one, since it's been 7 weeks... if so, then I can proceed with more experiments...
Apostle (talk) 19:09, 12 February 2016 (UTC)
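For what it's worth, the 21% wear figure can be cross-checked from the two capacity numbers quoted above; the arithmetic, as a minimal Python sketch:

    # Wear level = 1 - (full-charge capacity / design capacity),
    # using the figures quoted in the post above.
    design_mwh = 48400
    full_charge_mwh = 37807

    wear = 1 - full_charge_mwh / design_mwh
    print("Wear level: %.1f%%" % (wear * 100))   # ~21.9%, close to the reported 21%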
A lot of modern laptop batteries are more than just a pile of batteries in a plastic box. They often contain a small microcontroller to manage this stuff. Before you try experimenting with your battery - you might want to read the warranty terms more closely because it's very possible that the battery microcontroller is logging your charging and discharging behavior, and if the manufacturer decides that you somehow abused it, they may not replace it for free on grounds that this wasn't "fair wear and tear" usage. That was the case with a reconditioned HP laptop I bought for my mother in law - it had a 1 year warranty on the 3rd party battery they'd used during the refurbishment. But when it failed completely after 10 months they refused to replace it for free because the battery reported that it had been driven down to zero charge over 100 times in 10 months. Needless to say, I was not a happy camper - and eventually haggled a half-price replacement by applying the "If a little old lady can't use it..." defense, loudly, in the store and with a line of people waiting behind me. It is likely that they'll check how you've been using it - and if you've been ignoring the instructions on how it should be charged, they may well not replace yours for free either. Go read the fine print! SteveBaker (talk) 18:19, 13 February 2016 (UTC)
After some analysis, together with your past experience, I believe I'm alright, because:
1) The shopkeeper told me to charge it for about four to five hours the first time. He did not say whether to charge it with the laptop on or off. The manual, however, did not say what the shopkeeper said, and I followed the manual's one/two-line instruction, i.e. "Charge it to 100% when it goes to 2% for the first time." - I guess that meant with the laptop on. There was no other definition/description, apart from another section of the statement, i.e. "For maximum power ____________(something)___________ to 70%." Now, before I followed the first section of the manual's instruction (every time thereafter the first time), I sought advice from you guys, and you redirected me to the manual - only Stu stated what the shopkeeper had. I knew overcharging creates problems, so I took a risk - if this could create a problem, then the manual should have addressed it clearly, i.e. "Charge for four to five hours non-stop the first time", rather than leaving it to the shopkeeper.
2) It's been 7 weeks, and it has gone into auto reserve mode three times in a month - not consecutively. It has been charged and discharged a few times while it was on - that should be a minor problem upon which a finger shouldn't be pointed. I called the guy after messaging you guys, because it went up to 21%. It was Friday, so I couldn't explain the whole story to him. He respected me for not disturbing him on his day off, and the next day, without any discussion, he said he'd change it. Now, before I called him on Saturday, the battery stopped functioning - it wasn't working without the power plug. I thought that, since he respected me so much, I would take my laptop down and show it to him - I was supposed to do that the first time, as he requested, but I didn't wish to, and he trusted me; that's why I took it the second time.
Now, if you think the guy might point his finger at the first point, then I'm confused. -- Apostle (talk) 09:46, 14 February 2016 (UTC)
I'm just concerned that you said "..if so than I can proceed with more experiments..." - and if your experiments go beyond normal wear and tear, you might have a hard time getting it replaced under warranty. Just because the battery failed within the warranty period doesn't guarantee they'll replace it for free if there is evidence that there was more than 'fair wear and tear' involved. SteveBaker (talk) 17:15, 14 February 2016 (UTC)
I understand. I would've proceeded - I intended to, because Dbfirs wished for me to tell you all about my experience(s). I just wanted a clarification. Thank you for your posts and analysis... I never knew that battery controllers existed... -- Apostle (talk) 20:43, 14 February 2016 (UTC)

February 12[edit]

Can we include regular expressions on transitions in finite automata?[edit]

It's clear that we can include regular expressions on transitions in GNFAs when converting an NFA into a regular expression. I would like to know if we can include regular expressions on transitions in DFAs and NFAs to check whether a string is accepted. Is this possible, or could anyone tell me why it isn't? JUSTIN JOHNS (talk) 07:20, 12 February 2016 (UTC)

Proving that the languages recognized by finite automata are exactly the regular languages is a common homework problem in discrete mathematics. At least I remember doing that. Because finite automata recognize regular languages, you can use a regular expression to define a specific automaton. You can also use an automaton to represent a regular expression. A quick google brings up these lecture notes on the topic. 209.149.115.90 (talk) 15:20, 12 February 2016 (UTC)
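To make the question concrete: yes, a machine whose transitions carry regular expressions can be checked against a string directly, by trying every way of splitting the input among the edges. A minimal Python sketch (the states and patterns are made up for illustration):

    import re

    # Transitions are (source state, regex, destination state); an edge may
    # consume any chunk of the input that its regex matches in full.
    TRANSITIONS = [
        ("q0", r"ab*", "q1"),   # an 'a' followed by any number of 'b's
        ("q1", r"c|d", "q2"),   # then a single 'c' or 'd'
    ]
    START, ACCEPT = "q0", "q2"

    def accepts(s):
        def search(state, rest):
            if state == ACCEPT and not rest:
                return True
            for src, pattern, dst in TRANSITIONS:
                if src == state:
                    # Try every prefix of the remaining input on this edge.
                    for i in range(len(rest) + 1):
                        if re.fullmatch(pattern, rest[:i]) and search(dst, rest[i:]):
                            return True
            return False
        return search(START, s)

    print(accepts("abbc"))   # True
    print(accepts("abe"))    # False

Since each edge still matches only a regular language, such a machine accepts exactly a regular language - which is why the GNFA construction works. (Beware of an edge whose regex matches the empty string on a cycle: a naive search like this one could loop forever.)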

OFFLINE DICTIONARY FOR A GALAXY TAB3V[edit]

Could you recommend a good offline English-English dictionary for a Galaxy Tab 3V? There are so many. Which do you recommend? Thank you. 175.157.48.158 (talk) 09:15, 12 February 2016 (UTC)

www.[edit]

Some URLs work fine whether you add www. or not; http://google.com and http://www.google.com take you to the same place. However, this isn't always the case; http://www.co.athensoh.org, formerly the official website of Athens County, Ohio, is a 404 error, so I've had to replace it with the current URL, http://co.athensoh.org, and I've previously seen websites (can't think of a URL now) where www. is required, and omitting it produces an error. Why do some URLs care about the www. and others don't? Is it a setting specified by the webmaster and/or some other technical person running the website (and if so, what benefit is there to requiring one and disallowing the other, instead of accepting both), or is there some other reason? I didn't see anything relevant in URL or hostname, and I don't know what else to call this topic. Nyttend (talk) 14:53, 12 February 2016 (UTC)

First, you need to separate the domain name from the host name. In your examples, google.com is a domain name. athensoh.org is a domain name. wikipedia.org is another domain name. Notice that the domain name is a string of letters followed by .com, .org, .net, .mil, .edu, etc. (ignoring the country-specific codes, because they work the same and there's no point in confusing things). Everything before the domain name is a hostname. For www.google.com, www is a hostname. For www.co.athensoh.org, www.co is a hostname. The domain name is managed by the owner of the website. The hostname allows the owner to have more than one server. You can point a.mydomain.com to one server and b.mydomain.com to another server. You can point www.mydomain.com to one server and mydomain.com (the empty hostname) to another server. For web-based organizations, it is common to make the empty hostname point to the webserver. So, www.google.com points to the webserver and google.com also points to the webserver. But it doesn't have to be that way. I can point mydomain.com to a server that doesn't respond at all and www.mydomain.com to my webserver. So, as for why it works, it is because the domain managers set it up to work that way. When it doesn't work, it is because the domain managers set it up to not work that way. 209.149.115.90 (talk) 15:16, 12 February 2016 (UTC)
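You can watch this happen from any machine with Python on it: each name resolves (or fails) independently, entirely at the discretion of whoever manages the domain's records. A minimal sketch using the standard library:

    import socket

    # The owner of each domain decides which hostnames get DNS records;
    # the names below are just the examples from the question.
    for name in ("google.com", "www.google.com",
                 "co.athensoh.org", "www.co.athensoh.org"):
        try:
            print(name, "->", socket.gethostbyname(name))
        except socket.gaierror as err:
            print(name, "-> no such host:", err)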
You may notice on television commercials, print ads, etc., that retailers tend to be moving away from the "www" and just going with the domain name. I believe it's just for sake of simplicity. Justin15w (talk) 15:33, 12 February 2016 (UTC)
Recall that the Internet and the World Wide Web are not the same thing. Nowadays, all people use is a Web browser and email—and the email is frequently through a browser too. (Other protocols are certainly relevant (SSH, BGP, DHCP, and innumerable game protocols), but not in the minds of the public.) When it was introduced, "www.mit.edu" meant "the machine at MIT that handles HTTP requests", since many people would be more interested in news.mit.edu or ftp.mit.edu or so. Now the convention is that that machine is "the default" mit.edu, and anything else (e.g., ftp.gnu.org) must be qualified. --Tardis (talk) 16:35, 12 February 2016 (UTC)
When I set up a new website with my internet service provider, I use a handy tool they provide rather than doing it manually. That tool gives me a whole bunch of options to deal with the www thing:
  • Redirect www.xxx to xxx
  • Redirect xxx to www.xxx
  • Allow only xxx
  • Allow only www.xxx
  • Have www.xxx point to a different page than xxx
Personally, my view is that www is kinda obsolete for most members of the public, so I elect to redirect www.xxx to xxx on grounds that this saves most people from having to go through a redirection step. But it's up to the individual site owner to decide what they want to do with it.
In modern server setups, the original idea that (for example) images.google.com is an identifiable, individual computer that's different from the one that handles (say) www.google.com is far, FAR from the truth of what's going on! Firstly, there is no way that one server could handle all of the www.google.com traffic - so there are multiple physical computers handling those requests. And from a load balancing perspective, I doubt that images.google.com is a different set of physical servers than the ones that handle www.google.com. So the prefix might as well be a suffix - and indeed images.google.com gets you to the same exact page as www.google.com/images.
The URL prefix is fast becoming something of an anachronism for most websites out there. That's not to say that some behind-the-times server setups aren't still around. Athens County Ohio only has 64,000 residents - and I doubt that with such a tiny tax base they have much of an IT department running their website, or much funding to keep it up to date with modern trends. So this is exactly the kind of website that might be stuck in the 1990s way of doing things.
SteveBaker (talk) 18:06, 13 February 2016 (UTC)
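As a toy illustration of the "Redirect www.xxx to xxx" option in the list above, here is a minimal Python sketch (the port is arbitrary, and a real site would do this with one line of web-server configuration rather than hand-rolled code) of a server that 301-redirects any www. request to the bare domain:

    from http.server import BaseHTTPRequestHandler, HTTPServer

    class RedirectWWW(BaseHTTPRequestHandler):
        def do_GET(self):
            host = self.headers.get("Host", "")
            if host.startswith("www."):
                # Send the browser to the same path on the bare domain.
                self.send_response(301)
                self.send_header("Location", "http://" + host[4:] + self.path)
                self.end_headers()
            else:
                self.send_response(200)
                self.send_header("Content-Type", "text/plain")
                self.end_headers()
                self.wfile.write(b"Hello from the bare domain.\n")

    HTTPServer(("", 8080), RedirectWWW).serve_forever()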

Wordpress plugins[edit]

I have 2 Wordpress plugins, Read More Right Here and WP EasyScroll Posts. Both work fine, but they aren't compatible with each other. In order to make them compatible I need to call Read More Right Here's JavaScript function after WP EasyScroll Posts has finished loading the new posts. Normally this should be easy, but for some reason I can't get it to work. The Quixotic Potato (talk) 15:44, 12 February 2016 (UTC)

A problem with spam emails[edit]

I'm a BT customer and am wondering if anyone else has recently been having problems with spam emails from an address purporting to be from BT. Over the last twenty-four hours or so I have received around thirty emails claiming to be from organisations as diverse as ASDA, Sky Vegas Casino, Argos and even Bathing Solutions, but all originating from the same address, bt.comteam@bt.com. They consist of apparently genuine advertisements, and I have forwarded a selection on to BT, as to me this is obviously the work of hackers. It's really starting to irritate me and I'd like to block the address, but I don't know whether it is an address used for genuine BT correspondence. Can anyone tell me if the email address is one used by BT at all, or whether I would be safe to block it? Thanks, 109.154.219.120 (talk) 15:49, 12 February 2016 (UTC)

It took me a bit of digging, but "BT" is British Telecommunications. Nyttend (talk) 16:12, 12 February 2016 (UTC)
I've been having a similar problem and just came to this discussion through Google, although until it was pointed out here I hadn't noticed they were all coming from the same source. You can report stuff like this to abuse@bt.com for them to investigate. I just attached a handful of examples to an email and sent it to them. As for the address, I don't know whether bt.comteam is one of their official addresses, but perhaps someone else can shed light on that. This is Paul (talk) 13:16, 13 February 2016 (UTC)

Can we generate an infinite number of commands in the usual programming languages?[edit]

In the same way that we can generate an infinite number of sentences in a natural language (restricted only by our memory), is there a limit on the number of commands (or expressions, or whatever) that can be expressed in a mainstream programming language? Or can we expand a recursive expression as long as we wish? --Llaanngg (talk) 17:31, 12 February 2016 (UTC)

You are entering into a semantic argument. I could infinitely place a function as a parameter for another function as a(a(a(a(a(a(a(a(a(a)))))))))). However, you have to semantically define the language. Is the language anything that a human can dream up or is the language defined as what can be parsed? If it is what can be parsed, the parser will have limitations as to the number of recursive function calls and length of text it can parse as a single statement. 209.149.115.90 (talk) 17:49, 12 February 2016 (UTC)
I mean a real language, not just one that could exist. Is there any limitation for C, Java and the like, besides the obvious limitations of memory and time (in the same way as humans are limited)? --Llaanngg (talk) 17:51, 12 February 2016 (UTC)
Neither C nor Java specifies a maximum program length: refer to the standardization documents. For example, the Java 7 (SE) language specification and The C Programming Language book outline which limitations exist on valid source code, and they do not specify a maximum input length. A handful of other, perhaps less universally-known, formally-specified programming languages do specify an explicit maximum input length for source code. I can think of a few vendor-specific variants of BASIC or tcl (VAL3 immediately springs to mind) where the user manual explicitly expresses some limitation on source code length. We could quibble about whether these are "language" or "implementation" limitations, but you probably get the idea. Nimur (talk) 17:59, 12 February 2016 (UTC)
I'm not sure I follow what you're asking, but it's pretty trivial to generate an infinite number of different statements in any programming language. In fact it's easier than in a natural language, since there are only a finite number of words in a natural language, but there are an infinite number of tokens that can be used in a programming language. As a trivial example, in C, there are an infinite number of statements of the form "a=a+1;", "a=a+2;", "a=a+3;", etc. In real life you can of course only have a finite number of these in any one program due to memory constraints. Mnudelman (talk) 21:09, 12 February 2016 (UTC)
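The same point in executable form: a minimal Python sketch of a generator that emits as many distinct, syntactically valid statements of that shape as you care to ask for.

    from itertools import count

    def c_statements():
        # Yields "a=a+1;", "a=a+2;", "a=a+3;", ... without end.
        for n in count(1):
            yield "a=a+%d;" % n

    gen = c_statements()
    for _ in range(3):
        print(next(gen))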
The example you mention is also valid for natural languages: "sum one plus one", "sum one plus two", ... and so on. I'd say that any language that can mention infinite numbers is necessarily infinite. --Scicurious (talk) 21:17, 12 February 2016 (UTC)
Well, it's arguable if there is really a word for every integer. Is there a name for Graham's Number (other than "Graham's Number")? I suppose you can just call it "a million million million million ..." and keep it up until the heat death of the universe. Of course the same is true if you tried to express it as a decimal number in a program. Mnudelman (talk) 21:40, 12 February 2016 (UTC)
I am pretty sure there is no name for every integer, but there is always a form of referring to each verbally. I don't know whether using formalized mathematical terminology would imply that you are not using natural language anymore. Scicurious (talk) 23:08, 12 February 2016 (UTC)
You don't need a name for every integer: you can also write "a=1+1+1+1"; etc. with as many repetitions of +1 as you like, and the same in natural language. --76.69.45.64 (talk) 23:49, 12 February 2016 (UTC)
That's the point. If you can expand something like 1+1+1+1+... or one plus one plus one ... you can generate a sentence as long as you want. Scicurious (talk) 13:45, 13 February 2016 (UTC)

February 13[edit]

mATX motherboards bottom connectors are blocked when 2 dual-slot graphics cards are installed[edit]

The title is self-explanatory. There is no problem with this Asus Maximus VIII Gene, because its 2 PCIe slots are located at expansion slots 1 and 3. But the majority of micro-ATX sized motherboards have the 2nd PCIe slot located at slot 4, like this MSI B150M Night Elf.

If 2 graphics cards are installed, with the one in the 2nd PCIe slot being dual-width, the connectors at the bottom of an mATX motherboard are effectively blocked, except on those boards which follow the Asus Gene's slot layout. What is the purpose of such a motherboard design?

P.S.: It is somewhat irrelevant, but I have noticed that the Asus Gene runs 2 graphics cards at x8/x8, while boards such as the MSI Night Elf run them at x16/x4. I thought they were not designed to run 2 graphics cards at first, but after digging through their specifications, they all support 2-way CrossFireX and/or SLI. Livy (talk) 10:40, 13 February 2016 (UTC)


February 14[edit]

Wireless webcam for Skype?[edit]

I've tried to find a true wireless webcam on www.amazon.com and could not see one. I would like to walk around the house having this webcam on my forehead like a headlamp and talk into a built-in microphone hoping that the images and the sound will be input into the Skype application in my desktop. Is it possible? Thanks, --AboutFace 22 (talk) 02:06, 14 February 2016 (UTC)

Pretty much all smartphones could do this directly without needing a desktop, if you can improvise a head mounting. If you want a separate camera, I think most GoPro-style cameras could do this as well. There's probably a cheaper non-branded alternative to those. Fgf10 (talk) 02:48, 14 February 2016 (UTC)

What does this mean?[edit]

I am not sure where to post this, so I settled on the Computer Help Desk, here. I was reading a Wikipedia article (Auction sniping). In the article, way at the bottom under the "Buy It Now" section, it states: Many of these buyers use custom software to search eBay frequently via eBay's API and RSS feeds in order to see newly listed BIN items before "regular" users have a chance on the standard eBay.com website. These users are actively waiting for new items to be posted and make quick purchasing decisions as these deals usually sell within the first minutes or even seconds. What the heck does that mean, exactly? I am not that familiar with computers and technology, which is why I probably don't understand what it's saying. I also have no idea what all those acronyms mean (API, RSS, etc.). Please advise. In other words, if I want to do this (i.e., find these items before the "regular" users have a chance to), what exactly is it that I need to do? Thanks. Joseph A. Spadaro (talk) 06:13, 14 February 2016 (UTC)

@Joseph A. Spadaro: Become a computer nerd. Nerds rule the world. Basically these people are using software to check for recently added ads.
eBay's API allows someone who writes a computer program to make the program interact with eBay (see Application programming interface).
An RSS feed is a list of stuff, in this case a list of items on eBay.
There is loads of software like this, both paid and free (if you Google "ebay sniper" you will find stuff like jbidwatcher), but it is better to write it yourself.
It is, for example, possible to write software that asks eBay every five seconds: "Do you have an advertisement that mentions the word uranium?".
When the eBay API answers that there is an advertisement that contains the word uranium, you can make the software warn you (e.g. by sending an email), or you can even make the software buy the item without requiring any human input.
Of course this is a silly example, but you get the idea.
If you really want to do this then the first step is to learn a programming language (or to convince a nerd to help you). Most of the people who are doing this kind of stuff professionally have their own custom-made software, and they aren't sharing it for obvious reasons. The Quixotic Potato (talk) 06:40, 14 February 2016 (UTC)
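A minimal Python sketch of the polling idea described above, using only the standard library (the feed URL is a placeholder - substitute the RSS feed of a real eBay search):

    import time
    import urllib.request
    import xml.etree.ElementTree as ET

    FEED = "http://example.com/ebay-search.rss"   # placeholder feed URL
    KEYWORD = "uranium"
    seen = set()

    while True:
        xml = urllib.request.urlopen(FEED).read()
        for item in ET.fromstring(xml).iter("item"):
            title = item.findtext("title", "")
            link = item.findtext("link", "")
            if link and link not in seen and KEYWORD in title.lower():
                seen.add(link)
                # Replace print with an email, a text message, or a purchase call.
                print("New match:", title, link)
        time.sleep(5)   # poll every five seconds, as described above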
Thanks. I am still unclear, but it's starting to make a little bit of sense. So, assuming that I do not use the "automatic purchasing method", doesn't that mean that I would have to be on constant vigil, watching the computer (or my email or whatever), waiting for these "warning" messages? I would have to be staring at my email inbox 24/7, no? Joseph A. Spadaro (talk) 06:58, 14 February 2016 (UTC)
@Joseph A. Spadaro: Nope. Most smartphones have the ability to regularly check your inbox, and warn you whenever you have a new message. You can use a browser plugin (like Checker Plus for Gmail) that warns you whenever you receive an email. It can check your inbox every 30 seconds and you can configure it to shout: "You've got mail!". And of course using email isn't required, it would also be possible to write software that sends a text message every time an eBay ad for uranium appears (but it is 2016 and no one uses text messages anymore). The Quixotic Potato (talk) 08:03, 14 February 2016 (UTC)
Screenshot. The Quixotic Potato (talk) 08:16, 14 February 2016 (UTC)
That's my point. I have to be on constant vigil with my email or text messages or whatever warning system I have. Whether I get a "you've got mail" prompt or not, I have to always be keeping vigil with my emails and texts. Right? Joseph A. Spadaro (talk) 08:19, 14 February 2016 (UTC)
@Joseph A. Spadaro: Well, to me keeping vigil sounds like you actually have to pay a lot of attention (non-native speaker here, please correct me if I am wrong). Many people have a smartphone on them 24/7, and you can configure the smartphone to start buzzing or ringing when you receive an incoming text message. And many people spend most of their time within earshot of a computer with speakers attached, and you can configure the computer to start making sounds when you receive an email. So you don't have to sit somewhere staring at your inbox 24/7; you can just live your life normally, and when the phone rings/buzzes or the computer starts making noise then you'll know that you can buy more uranium (purely for non-nefarious reasons of course). The Quixotic Potato (talk) 08:25, 14 February 2016 (UTC)
Yes, I see. But, your assumptions are inaccurate. I have a computer, but I don't sit at my PC 24/7, obviously. I have a cell phone, but I don't carry it with me 24/7. It might sit on my desk, while I go on about my day. I am not "attached at the hip" with my cell phone and/or computer. So, if I wanted to make sure that I hear these beeps and buzzes and messages, etc., I'd have to keep a vigil on my computer and/or cell phone. Which I would otherwise not do. In other words, me and my cell phone (or me and my PC) could be "separated" for hours at a time (or even days at a time). I know this is not the norm with the younger generation. But, it's quite the norm for us "older" folks. And I am not even that "old". Joseph A. Spadaro (talk) 09:07, 14 February 2016 (UTC)
@Joseph A. Spadaro: If you do not sit at a PC 24/7 then you must be very very old. The question "how to relay a message to someone who is walking around outside and doesn't have a mobile phone/laptop with him/her?" is difficult to answer. Walkie-talkies are basically mobile phones, that's too easy. You can use messenger pigeons for fixed locations or maybe a robot (or well-trained dog, or child) or something like that for when you are on the move. Personally I would recommend using something similar to the Bat-Signal, but I don't think that'll work in bright daylight. Maybe you can use a smoke signal during the day. But if you don't bring a smartphone/tablet or computer with you when you go outside then it is probably pointless to relay a message to you that requires the use of one of these devices to act on it. The Quixotic Potato (talk) 09:33, 14 February 2016 (UTC)
OK, thanks. Joseph A. Spadaro (talk) 10:17, 14 February 2016 (UTC)
Not sure if this would work, but if you are interested in specific items that there isn't a huge demand for, you might still get them before other buyers do even if you only check your e-mail once or twice a day. It would depend on how many other people are using software to search for those items, and how long the delay is between the item being posted and your software finding it. Another option would be to have the software make the purchase for you (similar to algorithmic trading, with all the attendant methods and pitfalls). OldTimeNESter (talk) 13:05, 14 February 2016 (UTC)
Of course none of the people who use this kind of software are able to use a computer/smartphone 100% of the time, which means that they may miss out on certain deals while they are sleeping (for example), but in some cases auction sniping can be lucrative, even if you do it for only a couple of hours per day. The Quixotic Potato (talk) 16:49, 14 February 2016 (UTC)
Note that the option to have the computer automatically buy things for you that match certain criteria is a very dangerous one:
1) You have to limit the numbers, or you could find you bought a thousand items overnight.
2) You would have to pay attention to total cost, as sometimes an item is listed for $0.01, with all the profit coming from the high shipping costs.
3) It would be difficult to buy the precise item you want. For example, you tell it to buy any "TV sized 42 inches or greater, for sale for under $100", and you may find it selects old analog TVs, as well as remote controls, stands, mounts, and instruction manuals for such TVs. So, you could end up spending lots of money and not actually getting a TV you can use.
4) Another program might figure out what your program is doing, and tailor the descriptions of whatever they are selling to match the description you are looking for: "Pet rock for sale that's great to play with while watching your 42 inch TV". :-) StuRat (talk) 17:20, 14 February 2016 (UTC)
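A minimal Python sketch of safeguards along the lines of points 1 and 2 above (the limits and the item fields are made up):

    # Cap the number of automatic purchases and check the *total* cost,
    # shipping included, before letting the bot buy anything.
    MAX_BUYS_PER_DAY = 3
    MAX_TOTAL_COST = 100.00

    def should_auto_buy(item, buys_today):
        if buys_today >= MAX_BUYS_PER_DAY:
            return False                      # point 1: limit the numbers
        total = item["price"] + item["shipping"]
        return total <= MAX_TOTAL_COST        # point 2: include shipping

    print(should_auto_buy({"price": 0.01, "shipping": 250.00}, 0))   # False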

Google apps and passwords[edit]

The Google system, from my point of view, consists of platforms (my phone and computer) and the applications I use on them (gmail, Google+, Chrome, etc.). Google requires passwords for some of these (gmail) but not for those in the public domain, like Chrome.

I have a password on my PC which gives me access to everything Google there. However, I find that on my phone and on any other devices I need a different password.

So I follow the process to get one. I am at the screen on my phone where they want a username and password. I have no password, but I notice that "Need Help" is displayed. I click through the screens to the end. There, they say: "[If you] are trying to sign in to your Google account through a device, mobile app or desktop app, you'll need to enter an app password."

I follow the link which explains in detail what passwords are all about and so on, but it doesn't tell me what I need to know: How do I get a password for this device?

The page helpfully tells me I should go to a certain web site, enter my username and password and type the letters on the screen. Then I am to go back to the application and enter my app password.

There is no contact information given. I would try to contact Google, but I believe that information does not exist anywhere.

I"m serious. This is what happened.

Can you help?

Sorry, I meant to let you know who I am. --Halcatalyst (talk) 20:14, 14 February 2016 (UTC)

Battery behaviour/usage Log software[edit]

Any idea where I could find good software that logs a battery's actual usage and behaviour over time? --Apostle (talk) 21:06, 14 February 2016 (UTC)
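If nothing off the shelf suits, a minimal logger is easy to write. A Python sketch, assuming the third-party psutil package is installed (pip install psutil), that appends a timestamped battery reading to a CSV file once a minute:

    import csv
    import time

    import psutil   # third-party: pip install psutil

    with open("battery_log.csv", "a", newline="") as f:
        writer = csv.writer(f)
        while True:
            batt = psutil.sensors_battery()
            if batt is not None:   # None means no battery was detected
                writer.writerow([time.strftime("%Y-%m-%d %H:%M:%S"),
                                 batt.percent, batt.power_plugged])
                f.flush()
            time.sleep(60)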

How to assess the CAPEX of U-TUBE heat exchanger?[edit]

Dear sirs, my name is Saade Haddad, Manager of the Zouk power plant in Lebanon. We intend to purchase 3 U-tube heat exchangers. I should be very grateful if you could provide me with a practical procedure that would allow me to assess the CAPEX of each heat exchanger, or point me to references that can help us for this purpose.

Regards

Saade Haddad — Preceding unsigned comment added by 178.135.82.98 (talk) 21:11, 14 February 2016 (UTC)