Wikipedia:Reference desk/Computing

Welcome to the computing section of the Wikipedia reference desk.
Want a faster answer?

Main page: Help searching Wikipedia

   

How can I get my question answered?

  • Select the section of the desk that best fits the general topic of your question (see the navigation column to the right).
  • Post your question to only one section, providing a short header that gives the topic of your question.
  • Type '~~~~' (that is, four tilde characters) at the end – this signs and dates your contribution so we know who wrote what and when.
  • Don't post personal contact information – it will be removed. Any answers will be provided here.
  • Please be as specific as possible, and include all relevant context – the usefulness of answers may depend on the context.
  • Note:
    • We don't answer (and may remove) questions that require medical diagnosis or legal advice.
    • We don't answer requests for opinions, predictions or debate.
    • We don't do your homework for you, though we'll help you past the stuck point.
    • We don't conduct original research or provide a free source of ideas, but we'll help you find information you need.



How do I answer a question?

Main page: Wikipedia:Reference desk/Guidelines

  • The best answers address the question directly, and back up facts with wikilinks and links to sources. Do not edit others' comments and do not give any medical or legal advice.


January 23

Certifying that picture is not older than a certain time

How can someone certify that a picture is not older than the timestamp it has in it? Is using tamperproof specialized hardware the only option? Notice that this is different from certifying that the picture already existed at time t. --Scicurious (talk) 15:38, 23 January 2016 (UTC)[reply]

You could incorporate unpredictable information in one of the free-form EXIF fields and then submit it to a timestamping service. The unpredictable information might be the numbers drawn in a famous lottery. Jc3s5h (talk) 16:18, 23 January 2016 (UTC)[reply]
I am afraid that this won't work, and the task at hand is impossible. You could still include the unpredictable information in the EXIF years after the picture was taken, and submit it anyway. The problem remains: there is no difference between an old bit and a new bit of information. And every bit in my machine can be changed by me at will. You could obviously take a picture of a current newspaper in what is called authentication by newspaper or newspaper accreditation. You would have to perform some digital forensic analysis on the picture to exclude a possibly photoshopped image.--Llaanngg (talk) 16:32, 23 January 2016 (UTC)[reply]
The simple reason why the task is impossible is that anyone could always take a new picture of the old picture. --76.69.45.64 (talk) 23:18, 23 January 2016 (UTC)[reply]
  • If this is just a general poser, and you aren't looking only for a digital timestamp, you can date certain historical events such as a picture of Obama being inaugurated to no earlier than his inauguration, but that's not very useful when you are dealing with generic items. μηδείς (talk) 02:53, 24 January 2016 (UTC)[reply]
Yeah, maybe - but also, see: Photo_manipulation. If I know that some event is likely to occur ("One of the N Republican candidates will formally accept the party's presidential nomination on a specific date"...that's something I know with pretty good certainty lies in the future) - then I could certainly fake a set of pictures, one for each candidate, a solid month beforehand and present the appropriate photo as real some days after the actual event. Perhaps not in that exact case - but in many others, it should be possible to defeat the "historical events" approach. Much depends on the context and how much effort would be spent on debunking my fake. SteveBaker (talk) 15:26, 25 January 2016 (UTC)[reply]
See Trusted timestamping, if it is a photo that you've taken recently. This will allow others to verify that the photograph existed at the time you had it notarised, but they can't verify how long it had existed before you notarised it. Including a newspaper, etc., as Llaanngg mentions, will give an earliest possible date. If the document is very sensitive, you could submit another document that outlines the original, and includes a Cryptographic hash of it. LongHairedFop (talk) 12:23, 24 January 2016 (UTC)[reply]
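To make the hashing step concrete: what you would submit to a trusted timestamping service (e.g. an RFC 3161 timestamp authority) is a digest of the file, not the file itself. A minimal Python sketch, using a hypothetical file name:

    import hashlib

    # Compute a digest of the photo; this digest (not the photo) is what you
    # would submit to a trusted timestamping service. "photo.jpg" is a
    # hypothetical file name used for illustration.
    with open("photo.jpg", "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    print(digest)

The service then signs the digest together with the current time, proving the file existed no later than that time - but, as discussed above, nothing about how old it already was.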
It doesn't matter how clever your timestamping system is because you still don't know how old the photo already was at the moment it was timestamped. For that you'd need to embed the timestamping algorithm (and secure access to the Time Stamping Authority) into the camera itself. If timestamping were a separate activity (take the photo, transfer it to some other system, timestamp it), then you have no way to know that I didn't take a 100-year-old photo and timestamp it today. Even then, you can't guarantee that I didn't use the "trusted" camera to take a photo of an older photo and thereby imbue it with a more recent timestamp.
Timestamping can guarantee that something is no more recent than... some date - but our OP wants no older than... - and for that, you need something embedded in the camera that is at least as secure as Trusted Timestamping...and which somehow captures information that could not be inserted into the camera by other means (e.g., if the camera captured the body temperatures of the subject or the distance of each pixel from the camera). That amounts to "tamperproof specialized hardware" - which our OP wishes to avoid.
So I think the answer is "No".
SteveBaker (talk) 15:26, 25 January 2016 (UTC)[reply]

Pseudocode: good enough for teaching, not good enough for programming

Could a compiler for some form of pseudocode be created? Otherwise, why would it only be precise enough for teaching, but not good enough to be compiled into a program? --Llaanngg (talk) 16:34, 23 January 2016 (UTC)[reply]

It's an artificial intelligence problem. We don't know how to write a compiler that's as good as humans at filling in "obvious" gaps. -- BenRG (talk) 18:03, 23 January 2016 (UTC)[reply]
Could you cite a concrete example of pseudocode that would be an "obvious" gap for a human, but a stumbling block for a compiler? --Llaanngg (talk) 18:25, 23 January 2016 (UTC)[reply]
Here's a random example from a problem I was just thinking about: given n "red" points and n "blue" points in general position in the plane, find a pairing of them such that the line segments between paired points don't intersect. An algorithm that works for this is "Pick an arbitrary pairing. While there are still intersecting line segments { pick a pair of intersecting segments and uncross them }." (This always terminates because uncrossing reduces the total length of the segments (triangle inequality), but that isn't part of the algorithm.) Turning this into a program requires a pretty good understanding of the problem statement and plane geometry. For example you have to figure out what "uncross" means and that there's only one way to do it that preserves the red-blue pairing. You also need to choose an input and output encoding (how you provide the points to the program and how it reports the answer). Still the pseudocode is useful because it contains the nonobvious core idea, and everything else is straightforward. -- BenRG (talk) 19:08, 23 January 2016 (UTC)[reply]
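To show how much of that "straightforward" part a human still has to supply, here is one possible Python rendering of the pseudocode above; the point representation, the intersection test and the partner swap that implements "uncross" are all choices the pseudocode leaves open:

    import itertools

    def cross(o, a, b):
        # z-component of (a - o) x (b - o)
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    def segments_intersect(p1, p2, q1, q2):
        # Proper crossing test; "general position" lets us ignore collinear cases.
        return (cross(p1, p2, q1) * cross(p1, p2, q2) < 0 and
                cross(q1, q2, p1) * cross(q1, q2, p2) < 0)

    def non_crossing_pairing(red, blue):
        # Points are (x, y) tuples; pairing[i] is the index of the blue point
        # matched with red[i]. Start with an arbitrary pairing.
        pairing = list(range(len(red)))
        while True:
            for i, j in itertools.combinations(range(len(red)), 2):
                if segments_intersect(red[i], blue[pairing[i]],
                                      red[j], blue[pairing[j]]):
                    # "Uncross" by swapping partners; this reduces total length.
                    pairing[i], pairing[j] = pairing[j], pairing[i]
                    break
            else:
                return [(red[i], blue[pairing[i]]) for i in range(len(red))]

Termination (the triangle-inequality argument) is not visible anywhere in the code, which is exactly the kind of background knowledge the pseudocode quietly relies on.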
But I think many pseudocode algorithms are close enough to executable functions that you might as well just use Python as your "pseudocode". -- BenRG (talk) 19:10, 23 January 2016 (UTC)[reply]
The main issue is that pseudocode is generally not a formally defined language, hence the name. "Real" programming languages have formally defined grammar and syntax. Read something like the C standard to get an idea of how much goes into doing this. This is so, ideally, every statement that can possibly be written in the language has an unambiguous meaning that can be interpreted by computer programs (here I mean "interpreted" in the general sense; I'm not specifically referring to interpreted languages). A program written in C, for instance, will, ideally, always mean the exact same thing to any standard-compliant C compiler or interpreter. Contrast this with the state of machine translation; natural languages aren't well-defined, so there's tons of ambiguity, and consequently the programs we have at present often get things completely wrong. You could consider pseudocode a kind of "natural language for programming"; it's intended to convey general ideas to other humans. If you formally define the language you're using, it's no longer pseudocode; it's a programming language. --71.119.131.184 (talk) 06:38, 24 January 2016 (UTC)[reply]
I think there are two main problems: context and background knowledge.
  1. The meaning of pseudocode often tends to depend on the context in which it is used. And unlike real code this context is not limited to the program itself. This puts the problem into the area of natural language understanding as BenRG and 71.119 point out.
  2. Pseudocode tends to be highly declarative and domain-specific. It makes statements like "now solve problem X", where it is not clear how this should be turned into a computational process. The reader is assumed to have background knowledge which allows them to do so.
See Buchberger's algorithm and Quickhull#Algorithm for some good examples.
Ruud 14:08, 25 January 2016 (UTC)[reply]

Someone has probably said this, but pseudocode is for humans. If a computer could compile it, then it wouldn't be pseudocode. Bubba73 You talkin' to me? 04:40, 26 January 2016 (UTC)[reply]

Pseudocode always has the proper level of abstraction, and always the right library functionality to compactly represent the problem I'm talking about. That said, as BenRG suggests: Nowadays I often use Python as "executable pseudocode" for first year algorithms. --Stephan Schulz (talk) 13:45, 27 January 2016 (UTC)[reply]

January 24

we couldn't save your file to pdf/docx this time

I am posting the below question for another user, I do not have a smart phone, and am hence clueless. μηδείς (talk) 02:40, 24 January 2016 (UTC)[reply]

"I use an iPhone 4S running iOS 8.3, and I have recently created a Google account. I downloaded the Google Docs App and I uploaded a word document to my account. It's all fine until I try to share that document with someone else. The "share" icon appears in light grey while the rest options remain black. So I cannot share the document. When I go to the convert to pdf/docx it says "we couldn't save your file to pdf/docx this time". Does anyone have any idea of what might this problem be? Is it possible that I also might need to download the Google Drive app for me to be able to share files through Google Docs?"

Yeah, sounds like you need the Google Drive app. The Quixotic Potato (talk) 05:27, 24 January 2016 (UTC)[reply]
AFAIK, a new out-of-the-box iPhone doesn't have shared storage that all apps can read and write to. See this: [1] Each app contains its own data storage, and will only be able to share it with other apps that are designed to know it exists. Google Drive must be performing that shared-storage function for other Google apps. If you plug an un-jailbroken iPhone into your computer, the only "drive" that shows up is the camera roll. 94.12.81.251 (talk) 18:39, 24 January 2016 (UTC)[reply]
I hate it when error messages don't tell you WHY something happened or didn't happen. StuRat (talk) 19:05, 24 January 2016 (UTC)[reply]

Thanks, I have passed on the updated information; oddly enough, it is all Greek to me. μηδείς (talk) 05:00, 27 January 2016 (UTC)[reply]

January 25

4G Over 3G

What is the advantage of 4G mobile phones over 3G ones? KägeTorä - () (もしもし!) 00:52, 25 January 2016 (UTC)[reply]

see Comparison of mobile phone standards#Comparison of wireless Internet standards. But you will also have to consider whether the phone can do it, whether the band is on the phone, whether the carrier offers it, and whether it is on a cell tower where you want to use it. Also consider that much of what is marketed as "4G" is not genuine 4G (see 4G#Technical understanding). The phones may actually support Long Term Evolution or LTE Advanced, so look for that in your comparison. Graeme Bartlett (talk) 01:32, 25 January 2016 (UTC)[reply]

Unix command question

When you type something like

./mysql -u root -p

What do the u and p signify? 68.142.63.182 (talk) 02:15, 25 January 2016 (UTC)[reply]

Command line options like that tell the program what to do. From [2]: The -u root means to connect as user root and the -p means which port number to connect to. In your example, you are missing the number that would follow the -p. RudolfRed (talk) 02:50, 25 January 2016 (UTC)[reply]
CLOSE but you only got the 1st half right. -P (case sensitive) is port. -p is PASSWORD. As in, specify user account to run the command and authenticate using the account's password. The -u -p convention is very common among command line interpreters. Vespine (talk) 03:51, 25 January 2016 (UTC)[reply]
More generally, they mean whatever the program (in this case, mysql) interprets them to mean. They're just arguments that get passed to the program. For the meaning, look at the program's documentation. There are conventions for a few common "switches": -v usually means "verbose", making the program print more output. But this isn't enforced by anything other than programmers choosing to follow those conventions. --71.119.131.184 (talk) 03:59, 25 January 2016 (UTC)[reply]
The relevant reference being Command-line interface#Arguments Vespine (talk) 05:16, 25 January 2016 (UTC)[reply]
And to clarify for the OP, the -u, -p, etc. are most commonly referred to as flags or options. This is a bit different than say, an input argument. Knowing this makes similar questions easy to google. What does -e do for grep? Just google /grep e flag/, and the first hit [3] gives the answer. Sometimes the flags have similar meanings between programs (-u is often used to set user), but not always. SemanticMantis (talk) 14:54, 25 January 2016 (UTC)[reply]
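To illustrate that the flags mean whatever the program author decides, here is a minimal Python sketch of a toy program defining its own -u and -p options; the option names and behaviour here are illustrative, not mysql's actual implementation:

    import argparse

    parser = argparse.ArgumentParser(description="toy client")
    # The program itself decides what -u and -p mean; nothing in the OS enforces it.
    parser.add_argument("-u", "--user", help="user name to connect as")
    parser.add_argument("-p", "--password", action="store_true",
                        help="prompt for a password")
    args = parser.parse_args()
    print("user:", args.user, "prompt for password:", args.password)

Run as "python toy.py -u root -p" (toy.py being a hypothetical script name), this would set args.user to "root" and args.password to True; another program is free to give the same letters completely different meanings.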

Thank you. 68.142.63.182 (talk) 00:48, 26 January 2016 (UTC)[reply]

Is GPU built into CPU wasted if graphics card is present?

Some CPUs have built-in GPUs which eliminate the need for a graphics card for some applications. If the processor is used in a motherboard with a graphics card inserted though, does this mean that the GPU built into the processor is a waste or does it still contribute? If not, is it a sensible strategy to look for a processor without GPU to save money? --78.148.108.55 (talk) 13:22, 25 January 2016 (UTC)[reply]

Some laptops use switchable graphics solutions where the standalone GPU is only supposed to be used when needed (particularly for games). But you mentioned a card so I guess you're not thinking of laptops. While the same thing was tried with desktops (and similar) way back before the GPU was integrated into the CPU, it was largely abandoned for numerous reasons including changes in the Windows driver model with Vista; but probably most of all because as graphics card idle power improved, the power savings are fairly small compared to the compatibility problems.

If you have an AMD GPU and the right AMD APU, these can work in a Crossfire config. But because AMD hasn't updated their APU GPUs for a while, you're limited to fairly old cards. And the benefit is small anyway if you have a mid range card. And cross-vendor GPU-CPU pairing (i.e. NVIDIA - Intel, AMD/RTG - Intel, NVIDIA - AMD) hasn't really been possible for a while. Okay, LucidLogix Virtu MVP tried something like that but IIRC it was worse (and worse supported) than AMD's Crossfire setup so never really took off and seems to have been largely abandoned.

Theoretically and especially with GPGPU it's possible for both to be used. Practically this rarely happens for home users. (I guess some miners and others who go out of their way may use both.) It's possible that DX12 will change things, but it's hard to say whether this is really going to happen. [4]

As for your suggestion of a sensible strategy, the answer is mostly no. For starters, since we're so far into the APU/SoC era, very few CPUs don't have GPUs, particularly those targeted at home users. More significantly, particularly once you get past the low end, the connection between CPU cost and production cost is very tenuous. It's really all about coming up with different products for the different market segments, disabling features as needed (often not because they are broken or this saves costs but because you know people will be willing to pay more for them). And considering the poor situation AMD is in, it's really mostly Intel we're talking about here. But Intel has no real interest in encouraging the standalone GPU market.

The best you can do is, if you're planning to get a standalone GPU, don't worry about the iGPU. But even this is of limited utility since the best CPUs tend to have the best GPUs. (There are exceptions, particularly for Intel's top end iGPUs.)

Nil Einne (talk) 14:18, 25 January 2016 (UTC)[reply]

Here is an article from 2010: Inside Apple’s automatic graphics switching. "The main goal of Apple's automatic graphics switching is to balance graphics performance with long battery life..."
What this means to the end-user of a Mac with automatic graphics switching is that the system takes advantage of both the discrete GPU and the Intel HD GPU.
If you are using some other hardware or system software, the onus is on the system-designer to make intelligent use of all the hardware they've got. Nimur (talk) 15:09, 25 January 2016 (UTC)[reply]
You mean it's the responsibility of Apple? At the time, it sounds like it was only working on Apple laptops with discrete GPUs (well, laptops that also had CPUs with iGPUs, but laptop CPUs without iGPUs are so rare now, it's not worth worrying about), probably for the same reason everyone else ended up limiting it to laptops that I mentioned above. From what I can tell it's still only on Apple laptops (MacBook Pros) so I guess Apple hasn't found this any more useful than other vendors on desktops. (The other vendors who pretty much all had the technology in some form long before there even was an x86 CPU with a GPU. The technology was first used with IGPs, which was when it was briefly tried with desktops.)

Perhaps Apple's implementation was better at the time, but it definitely wasn't the first. (Perhaps it still is better, particularly in terms of compatibility and/or driver updates.) In fact pretty much everyone had it before that Apple implementation, as your article sort of says. (To be fair, my understanding is Apple also had it in some form before the article you linked to.) And as I hinted at above, other vendors mostly still have it now.

Anyway, since the OP appears to be interested in desktops or similar (as I said above, they mentioned graphics cards), it remains unexplained how Apple is making "intelligent use of all the hardware they've got" for the use case the OP appears interested in.

Nil Einne (talk) 17:37, 26 January 2016 (UTC)[reply]

Let me clarify: if the OP is using some other device, including a system that the OP has assembled themselves, then the OP is the system designer. The onus is on them - the system designer - to intelligently use the GPU or GPUs that are built into their computer. Nimur (talk) 17:38, 26 January 2016 (UTC)[reply]
But the main point is very few system designers are actually using the GPU on the CPU if it comes with a discrete GPU on desktop-like systems. Apple isn't as far as I can tell, nor are Dell, HP etc. Where this does happen on the desktop, it mostly only happens at the software level and didn't have much involvement of the system designer. If you're saying when Apple sells you a MacBook Pro (or iMacs if any of them have discrete graphics cards) the onus is on Apple as the system designer; but when they sell you a Mac Pro, the onus is on you since you're not a system designer, frankly that makes no sense.

With laptops, you haven't really been able to design them yourself for a long time and pretty much all system designers have been using both GPUs in some fashion for a long time, before 2010 or GPUs integrated on to the CPU. So beyond it not being what the OP seems interested in, it doesn't seem to help much. Laptops with Linux are to some extent the only real exception since switchable dual graphics support has often been limited, or if you were installing Windows yourself you do have to be careful with drivers. (Likewise if you really were designing the system yourself you do need to take a bit of care to ensure switchable dual graphics works.)

Getting back to my earlier point, it's actually been possible to use both GPUs in some fashion for a long time, especially after GPGPU began to become a thing (which was before iGPUs existed). This has been supported at some level by the OS and the systems as designed. Even if you were assembling your own system, you didn't really need to do much a lot of the time. But while it's been supported, as mentioned in my first post, it hasn't AFAIK actually been used much. This is for a variety of reasons including that the support wasn't that good and that software designers just didn't feel it was useful, particularly considering the compatibility problems that can result (which to some extent relates to the support issue). For the earlier part you can I guess blame it on the system designer. For the latter part, it doesn't make much sense to blame it on the system designer. Unless you use the odd definition of system designer where when I buy a Mac Pro or iMac or Alienware desktop or HP desktop or whatever from my local store and take it home to play GTA5 and Fallout 4, I'm a system designer. (But maybe not if I bought a Dell laptop or MacBook Pro?)

Ultimately whoever you want to blame it on and whatever you want to call them, the point is as an end user you have limited choice. If your software doesn't use both GPUs and there's no software which will fulfill the same purpose in every way but will use both GPUs and be better for it, then there's not much you can do. Except code your own software, which makes little sense for most users. It gets even worse if you're talking about games. If I want to play GTA5, I'm not that likely to choose some other game just because it uses both GPUs, and coding your own GTA5 or even hacking it to use both GPUs is most likely untenable even for an excellent coder.

And unless you actually have a need for the software which will use both GPUs, it doesn't make sense to run it just because the GPU is otherwise going unused. Given idle power improvements, using the GPU on the CPU will generally mean more energy consumption and heat generated, which even in winter and if you use electrical heating is IMO not necessarily useful. More significantly, if the CPU supports some sort of turbo mode, using the GPU may mean the CPU which you may be using for stuff you actually want isn't clocking as high or as long. And that's not even considering possible slowdowns due to the memory subsystem or CPU or other elements being used by this program you don't actually have any use for but are just running because your GPU on CPU would otherwise go unused.

What this all means, and to get back to the OP's original point is that you may have to accept that your GPU on CPU is simply going to be unused. From your POV, it may be better if the CPU doesn't have a GPU since it's going to waste and may increase power consumption a little even when unused. But since you aren't Intel and can't control their marketing decisions, the most intelligent thing to do is to choose the best CPU based on price-performance that fits your purposes and budget. Which may sometimes mean a lower end GPU, but often isn't going to mean no GPU. To be fair, this isn't unique to Intel, all companies add features to their products for a variety of reasons and some of these features are going to be unused by a fair few end users. And just as with these cases, it may seem a bit stupid to have this feature you aren't going to use, but if it isn't causing problems you should ignore it and concentrate on the features you do want and the price.

If you really want to look into it, LucidLogix Virtu MVP that I mentioned before is actually an interesting case study IMO. As I understand it, it was initially at least dependent on the system or motherboard. (I'm not sure if this changed. I didn't follow or research that closely when writing this except to check that it still exists. Most results seem to be old, probably for the reason mentioned: it didn't have much success so no one cares anymore.) But I think this was a licencing or compatibility thing; it was otherwise purely software and just required the 2 GPUs. So theoretically the system designers did provide something to use both GPUs (just as they did in the early pre-iGPU days when they supported switchable graphics with IGPs and discretes).

But as I mentioned, this seemed to largely fail. Whether it was because the software wasn't that good (compatibility problems etc), or it didn't help enough to be worth it, or it did help a fair amount but people didn't realise, the technology mostly failed. So who you want to blame it on is complicated. FWIW it was still supported up to Windows 8/8.1 at least, not sure about 10, but I guess you could still try it if you think people were wrong to reject it. One thing which I perhaps didn't make clear enough until now: perhaps the reason why these all failed is because the actual advantage you get from using the often very slow iGPU when you have a much faster discrete GPU is very limited. (Which is another factor not in favour of LucidLogix etc. These technologies add cost so they are added to expensive systems, which are also the systems which tend to have very fast discretes.)

To be fair, with the iGPU improving, combined with certain non-graphical tasks which aren't particularly demanding compared to the graphics being performed on the GPU with games (like the physics or sometimes even parts of the AI) and where even the weak iGPU is a lot better than the CPU, it does seem like it would make sense to use the iGPU. And with the GPU capable of sharing resources with the CPU, it has particular advantages despite the low performance compared to the discrete. AMD definitely believed in HSA for a long time (I think they still do) and there's also interest in it on mobiles (albeit these don't have discretes). So perhaps with DX12 and similar, combined with other areas of progress, this really will finally take off. However since we can't be sure this will happen, I don't believe it makes sense to choose a higher end iGPU (or even an iGPU if you find you do have the choice) because you may one day use it.

P.S. It's possible even now you're one of the few that does have a use for the iGPU. So I guess the OP should explore their software and check. If they find they are, then I guess you could say they're one of the lucky ones. It still doesn't change the fact that for most people, it seems intelligent use is no use, and to answer the OP's question: most of the time it does effectively go to waste, but the suggested strategy is probably not sensible.

Nil Einne (talk) 12:23, 27 January 2016 (UTC)[reply]

Handbrake not detecting video episode length correctly

I'd like to encode MKV files from a folder containing the contents of a DVD (VOB files etc). The DVD itself is a thousand miles away and I won't have a chance to collect it for three months. The VOBs appear to play correctly, though I haven't watched them all the way through, and Handbrake detects them as valid sources, but instead of 3 episodes of roughly 1 hour each, it shows 2, the second of which is roughly 2 hours. It must be missing the point at which the second episode ends. Without access to the original DVD, is there some way I can get Handbrake to see this? 94.12.81.251 (talk) 14:13, 25 January 2016 (UTC)[reply]

This is a common problem. The HandBrake user guide points you to this forum post, a guide to troubleshooting chapter scan problems. There is a UI preference to "scan all chapters" which forces a thorough scan of the input; that takes a long time, but will probably be needed in your case.
If that doesn't work, try following the full troubleshooting guide step-by-step. Nimur (talk) 14:40, 25 January 2016 (UTC)[reply]

Understanding sandboxes

Why isn't everything, and not only the browser, sandboxed? At least, the common targets of virus and malware could be sandboxed.--Scicurious (talk) 15:53, 25 January 2016 (UTC)[reply]

Sandboxing imposes very severe limits on what software can do. For example, JavaScript can't just decide to upload a file to the server...it has to ask you first...and it has to ask you every time. That restriction sharply limits what JavaScript is able to do. SteveBaker (talk) 16:05, 25 January 2016 (UTC)[reply]
Sounds like a reasonable limitation. Why should I allow MS Word, Adobe, Excel, Power Point, connect to a server and upload information to it?--Scicurious (talk) 16:40, 25 January 2016 (UTC)[reply]
Many programs work with other programs. Operating systems are built on inter-process communications. If you sandbox everything, you are basically saying that no program can talk to any other program without explicit permission on both sides for every single communication attempt. So, as a very simple anecdote, I run Gimp. I want to scan an image. XSane is a different program. Gimp cannot talk to XSane because Gimp is sandboxed. I have to specifically allow Gimp to talk to XSane. Then, XSane goes to talk to my scanner. Oops. The scanner driver is a different program. XSane can't talk to the scanner. I have to specifically allow XSane to talk to the scanner driver. A simple task of scanning an image creates at least two "Are you sure" prompts. As you probably know, humans don't read those. They just click "yes", "agree", "proceed", "just do what I told you to do" without reading. So, sandboxing with human intervention for inter-process communications doesn't solve the problem in any way. It makes it worse because it trains users to ignore the communication warnings for common communications. So, you should be asking why things are sandboxed in the first place. It is because users tend to install all kinds of plugins and addons without any concern for security. Sandboxing is a poor fix for human ignorance/laziness/stubbornness/stupidity... A better fix would be to remove the ability for users to cram their web browser full of plugins that they don't need and cause security issues. 199.15.144.250 (talk) 20:30, 25 January 2016 (UTC)[reply]
The design goal of Qubes OS is for applications to run in their own lightweight, quickly spun-up virtualisation environment. Communication with other applications, and access to the filesystem and devices, is mediated by the virtualisation envelope (which is a lot more granular than a simple yes/no sandbox). It's also possible (I'm remembering from some presentation I saw, hopefully correctly) to spin up a disposable application instance - so e.g. if you wanted to do some online banking, it could produce a browser instance clone, and once you're done and you close the instance, the whole VM and all its storage is destroyed, taking with it any credentials, cookies, session-keys and the like. -- Finlay McWalterTalk 20:41, 25 January 2016 (UTC)[reply]
Consider that naive sandboxing (unlike the fancy stuff Finlay mentions) would create serious obstacles to simple things like Clipboard_(computing) functionality. The Unix permissions are not the same as sandboxing, but they've been shoring up obvious security problems for a very long time now, using an analogous paradigm of only letting programs do certain things in certain places. Linux variants and OSX are all Unix-like (or *nix, etc.), so both have nice permissions systems; I don't know if the Win systems ever got around to doing user/group/file permissions better. SemanticMantis (talk) 21:22, 26 January 2016 (UTC)[reply]
It is for the same reason that security sucks in general: it's difficult and few people understand how to do it. And desktop OSes don't provide good support for it.
I strongly disagree with the people above who think that with sandboxing you can never get anything done. Sandboxing just means that you have isolated software components with clearly specified channels of communication between them. That's good software design, regardless. The current desktop OS model only supports the equivalent of spaghetti code: everyone has unrestricted access to everyone else's data on the same desktop, even across users. It's that way because these systems were designed before the current malware crisis, and people tend to adapt to the status quo and not realize how much better things could be. -- BenRG (talk) 22:29, 26 January 2016 (UTC)[reply]

January 26

What are these function keys on my keyboard for?

What are these function keys on my keyboard for? One is "F11" and one is "F12". The "F11" key has a little icon of Microsoft Word on it. The "F12" key has a little icon of Microsoft Excel on it. So, what are those function keys supposed to do? I use Microsoft Word and Excel all the time. So, I'd like to know if these function keys will be helpful or useful to me. If it matters, I have a Logitech keyboard. The Model Number is MK 700 / MK 710. Here is a photograph of it: [5]. If you click on the photo, the photo will become enlarged. And you can see exactly what I am referring to. Thanks. Joseph A. Spadaro (talk) 05:14, 26 January 2016 (UTC)[reply]

Here is the manual for that model: Getting started with Logitech Wireless Desktop MK710. According to the manual, if you have installed the "Logitech SetPoint" software, then pressing the FN and F11 buttons together will launch a "document application" and pressing FN and F12 together will launch a "spreadsheet application". Later in the manual the customizing screens of the software are shown, and it looks like you can change the FN+F# combos to open any file or program. --Bavi H (talk) 06:04, 26 January 2016 (UTC)[reply]
See also Function key for an overview.--Shantavira|feed me 08:52, 26 January 2016 (UTC)[reply]
The general idea is that those keys can be used to key in keyboard shortcuts. While the GUI is seen as easy to learn and use, many heavy computer users (and all before the mouse) find that they can do things much more quickly if they can keep their hands on the keyboard instead of reaching for a mouse every few seconds. See our Table_of_keyboard_shortcuts, which says that F11, pressed by itself, should toggle the current window between fullscreen and "normal" sized. There are operating system-level shortcuts (for things like window manipulation, quitting programs, etc.) as well as application-level shortcuts. So the same shortcut can and will do different things in different applications, and the same maneuver can require different shortcuts in different applications. If you want to know more about how to use shortcuts in e.g. Word, here's a handy reference "cheat sheet" [6], and you can find many, many more by searching things like /[application name] keyboard shortcut [sheet/list]/. It may seem like a lot of effort to memorize these, but for any program that you use a lot, the shortcuts will save you hours per month, and if you truly use a command a lot, you'll quickly memorize it within a few days of taping up your cheat sheet. Here's a few research articles on the topic of keyboard shortcuts that you might like to skim [7] [8]. SemanticMantis (talk) 21:15, 26 January 2016 (UTC)[reply]
I dunno. Seems like an awful lot of trouble. For very-little-to-no benefit. When I want to open Word or Excel, I do one click of the mouse on my desktop. Actually, on my task bar. So, that's hardly burdensome. Not sure why I (or anyone) would ever need a "short cut" to something that is simply one step (merely one click) and takes literally less than one second. Odd. Joseph A. Spadaro (talk) 04:14, 27 January 2016 (UTC)[reply]
Opening Word or Excel may not be the most useful shortcut, but if you don't have a mouse in your hand, it will definitely take longer than one second unless you move your hand so fast as to risk causing injury. Particularly if it isn't on the taskbar. If you use Word or Excel often enough to use the shortcut, perhaps it's worth having on the taskbar, but perhaps not; people manage their computers differently and have different tolerance for clutter. Note also that the comment from Bavi H suggests up to 12 programs, which is quite a lot to add to your taskbar. (Of course some people just don't know about pinning stuff to the taskbar or use an OS where this isn't possible.)

Not sure what you mean by an awful lot of trouble. Opening a program using such a shortcut will take less than a second for all Fn+F# if both hands are on the keyboard. If only one hand is (and most people probably have at least one hand on the keyboard when actively using the computer), then depending on where the Fn is (AFAIK most keyboards have a single key on the left) perhaps about half. You could optimise this by choosing programs you're likely to be touch typing with that require both hands. Of course you can use other keys as your shortcut keys.

Unless you have memory issues, learning simple shortcuts like which key opens which program should take less than a day if you use them regularly, less than a week if somewhat irregularly. (If you're using them so irregularly that you don't learn them in a week, the shortcut key probably isn't worth it.)

Nil Einne (talk) 12:40, 27 January 2016 (UTC)[reply]

Also worth noting that shortcuts are a bit of a subjective taste, and their rewards are much greater for a so-called power user who wants/needs to make Excel dance and sing, compared to a user who just wants to do simple things occasionally. I agree with Joseph that an application-opening button is not so great, in isolation. Especially since if you use Excel frequently, you only open it rarely (it's likely already running). I suspect the symbol on the keyboard was an attempt to idiot-proof the thing, but it fails because it requires the Fn key, and not just a simple press, so the functionality is opaque to the proverbial idiot. Personally, I think shortcuts are great for everybody, but then again, I roll my eyes when people tell me I should start writing more shell scripts to automate various tasks I do... SemanticMantis (talk) 14:14, 27 January 2016 (UTC)[reply]
A few of the function keys have more or less standard meanings. F1 is almost always Help. It should bring up Help for whatever application you are focused on. F3 is usually Exit or Escape. I don't know without looking up whether any of the other function keys have standard meanings. Robert McClenon (talk) 00:16, 28 January 2016 (UTC)[reply]

Confused About Diodes

I'm trying to build the circuit described here in order to add a Pause button to my SEGA Master System controller: http://www.smspower.org/Development/JoypadPauseButton. It ties the output from a logic gate to the processor's non-maskable interrupt pin (NMI), which is how the system already implements the pause function. The idea is to be able to pause the game from the controller, instead of having to press the button on the console (which is several feet away). The web page says this:

"Important: The diode (not shown) has to be placed between pin 4 (cathode) and the wire to pin 22 of the VDP, to prevent the real pause button to fry the gate's output. Another solution can be simply cutting the ground pin of the pause button, making it ineffective."

Ok, fair enough. I wired a diode between the gate's output and the NMI pin. It is biased so that current will only flow from the gate output to the NMI pin. Now, when I send the pause signal, the output gate is pulled low (yay!), but the NMI pin isn't, because the logic level doesn't "pass through" the diode to the pin (i.e. my logic probe doesn't give a reading). I actually tried this first without the diode, but then the gate output wasn't defined.

So, it sounds like the diode is there to make sure the current can only flow from the gate output to the pin, but that's not what's happening. I guess I could have the diode backwards, but my multimeter's diode check function says that's not the case. Any thoughts on what I'm doing wrong? OldTimeNESter (talk) 18:11, 26 January 2016 (UTC)[reply]

What we have here is a wired-OR circuit using negative logic, so the diodes want to be the opposite way round to the diagram in the wired-OR article. The diode should be arranged so that current can flow from the NMI input to the gate output, so that when the gate output goes low, the NMI pin is pulled low. If the NMI pin is being pulled low by the existing circuit, no current will flow through the diode, so the gate won't be trying to drive a short-circuit. The description in the source instructions is correct - the cathode of the diode wants to go to the gate output. It sounds as though you have the anode connected to the gate output, instead. Tevildo (talk) 18:49, 26 January 2016 (UTC)[reply]
You're right, that did it. I also had to wire it to the video chip (like the instructions say) instead of directly to the CPU. I have zero idea why you can't wire it directly, or why the pause circuit involves the video chip at all (unlike the NES, the SMS doesn't use the NMI to trigger vertical blanking) but it kept reading an indeterminate logic level until I switched it. Thanks for the quick response, and the helpful links! OldTimeNESter (talk) 02:56, 27 January 2016 (UTC)[reply]
Without the diode, when the original pause button is pressed, the output of the new gate is shorted to ground. Although this won't damage it immediately, it won't do it any good - if your chip is a real 74LS02, it's only recommended to short the outputs for 1 second. Tevildo (talk) 06:59, 27 January 2016 (UTC)[reply]

Saving a worksheet with filters applied

This may seem to be a naive question to many (or maybe not). Anyway, I am not an advanced user of spreadsheets, so please explain in detail.

Suppose I create a worksheet in MS Excel with filters applied to a column. Now I want to save the file only with the filtered data which is appearing presently, such that the rest of the data is deleted. In other words, if I send it to someone, say via email, the recipient should not be able to remove the filter and see the entire content. Copy-paste, I believe, doesn't work because when you select a range, the intermediate rows (not showing on screen) are also selected and then copied. Is there any way to do that? Jazzy Prinker (talk) 18:44, 26 January 2016 (UTC)[reply]

Here are instructions to copy visible cells only [9], and not the hidden stuff. I think the best thing to do is to paste the filtered data into a new file, and send that to others, rather than deleting your original data that you might later want. SemanticMantis (talk) 21:51, 26 January 2016 (UTC)[reply]
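If this turns into a recurring chore, it can also be scripted outside Excel. A hedged Python sketch using openpyxl, assuming the workbook was saved with the filter applied (so the filtered-out rows are stored as hidden) and using hypothetical file names:

    from openpyxl import load_workbook, Workbook

    src = load_workbook("report.xlsx")        # hypothetical input file
    ws = src.active

    out = Workbook()
    out_ws = out.active

    for row in ws.iter_rows():
        r = row[0].row
        if not ws.row_dimensions[r].hidden:   # keep only rows the filter left visible
            out_ws.append([cell.value for cell in row])

    out.save("report_filtered.xlsx")          # hypothetical output file

The copy contains only the visible values, so the recipient has nothing to "unfilter".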
Thank you so much. That's exactly what I needed. Jazzy Prinker (talk) 04:57, 27 January 2016 (UTC)[reply]

Tooltips for editing wikipedia

I'm just learning how to edit Wikipedia and was hoping there was a way to get more info from the tooltips. Anything I hover over just tells me to click it to insert the markup without explaining what it does.

The reason for my question is because I made a mistake making references on the science ref desk, which somebody else had to correct for me. I know there is the sandbox for experimenting, but if the tooltips could, for example, tell me that what I'm about to click will make a reference at the bottom of the page, or an external link, etc., that would be very useful to newcomers like myself.

I've tried the VisualEditor, but it seems quite limited, maybe only useful for making minor changes to pages? But my question is about more help from tooltips, I can't imagine it would take much work to update them to help out more, but I'm not a programmer or web guru so I may be totally wrong. Thanks Mike Dhu (talk) 22:37, 26 January 2016 (UTC)[reply]

Hi! For questions about editing, you want the Help desk or Teahouse. The Reference Desk is for general knowledge questions. Other resources for help: Help:Contents, Wikipedia:New contributors' help page. --71.119.131.184 (talk) 00:49, 27 January 2016 (UTC)[reply]
Another place to post this would be Wikipedia:Village_pump_(idea_lab) or Wikipedia:Village_pump_(technical) - help desk is not so relevant IMO; because you're not asking for help, you're suggesting a technical change. Once you get some feedback there, you can make a formal proposal at Wikipedia:Village_pump_(proposals) or a similar place (I know, we have a lot of similar places...). As for your proposal: I think adding slightly longer descriptions might work ok, but a complete description is hard to do with just words and very long. This is why we have worked examples in several WP:TUTORIALs. You might also enjoy the slightly silly but informative Wikipedia:The_Wikipedia_Adventure. I like to learn by just looking at page sources to see how certain effects are achieved. You can work lots of fancy formatting and markup into articles, but you don't really need to if you just want to add refs or correct grammar, etc. For the desks, all you need to know is to put URLS in single square brackets like so [https://www.google.com/]-> [10], and put wikilinks in double square brackets like so: [[google]]-> google. Lastly, as you learned, the standard ref tags don't work well on talk pages, but that's just one of those idiosyncrasies that you have to pick up through experience :) SemanticMantis (talk) 14:03, 27 January 2016 (UTC)[reply]

January 27

SHA1 checksum for Firefox 44.0

I downloaded and installed the US English version of Firefox from firefox.com but it came with all sorts of non-English and suspicious looking addons. I double-checked the HTTPS connection to firefox.com and that its certificate is genuine, as far as I can tell. https://download.mozilla.org/?product=firefox-44.0-SSL&os=win64&lang=en-US[11] was the exact download link I used.

Does Mozilla publish SHA1 checksum for their Firefox binaries so that I can verify what I downloaded was genuine or not? Johnson&Johnson&Son (talk) 06:26, 27 January 2016 (UTC)[reply]

For reference, what I downloaded[12] has a SHA1 hash of a6f058b8fd8430db0f87746c331877e7f3c40078. Johnson&Johnson&Son (talk) 06:29, 27 January 2016 (UTC)[reply]

I managed to find this link[13], but it's a little outdated. The example http://releases.mozilla.org/pub/mozilla.org/firefox/releases/3.6.13/ still works, but unfortunately http://releases.mozilla.org/pub/mozilla.org/firefox/releases/4.40 doesn't work. Johnson&Johnson&Son (talk) 10:49, 27 January 2016 (UTC)[reply]

If you already had Firefox installed, bear in mind installing a new version isn't generally going to remove addons. If you had it installed at some stage but uninstalled it, the addons may still hang around. If you never had Firefox installed, it's still possible other software may have set themselves up to be added to Firefox whenever you did install it. Particularly malware.

Browsing this directory [14], you should be able to find the hashes that Mozilla published for your release. For 44.0 (not 4.40, which never existed) the SHA2-512 hashes are here [15], and the one for your download is c4ef058366ae0e04de87469c420761969ee56a03d02392a0cc72e3ced0f21be10e040750f02be3a92b6f25e5e2abdc30180169ae2bc84ef85c5343fdf9b632cf.

There are no MD5 or SHA1 hashes; I presume Mozilla stopped publishing them a while ago since neither is considered secure. However unlike MD5, the cost of generating a SHA1 collision (particularly a useful SHA1 collision) is, as far as we know, high enough that I think it's fairly unlikely that you happened to have a version with the same SHA2-512 hash but a different SHA1. And I can confirm that the file I just downloaded has the same SHA2-512 as I see published by Mozilla, and the same SHA1 that you posted. [16] also has the same SHA1.

Note however I did not verify the hash file I downloaded using the GnuPG signature [17], which means if my connection was also compromised, my hashes are useless. Note also if you do have malware, any hash you generate, any certificate that appears genuine, any verification of the hash file is basically useless. (And even if you went through this much effort for all the software you ever downloaded and ran on your computer, the existence of bugs means it's still impossible to be certain you don't have malware.)

However most malware doesn't go that far. In fact, I can only imagine forged certificates ever really happening if you were particularly targeted, e.g. by a very dedicated and smart individual, or a criminal group or an intelligence agency, who for some reason are out to get you. Of course if this is happening, you may not be reading my message (either at all or in original form) either so....

Presuming you're correct about the weird add-ons, personally I think the most likely explanation is that your system was already somewhat compromised, but the software you just downloaded is the genuine original file.

Nil Einne (talk) 13:18, 27 January 2016 (UTC)

Thanks. It seems like it's indeed the add-ons from a previous "dirty" installation of Firefox that are causing the problem. I tried uninstalling Firefox, restarting, and installing it again, but all the add-ons from the previous version are still there. How do I prevent this from happening? I want to completely wipe everything Firefox-related, and then install from one of the SHA512-verified binaries. I'm on Windows 10. Johnson&Johnson&Son (talk) 04:37, 28 January 2016 (UTC)
I would try a Firefox "refresh". Uninstalling Firefox does nothing because it doesn't touch your profile, which is where extensions, settings, and the like usually live. Uninstalling simply removes the program files. --71.119.131.184 (talk) 05:06, 28 January 2016 (UTC)
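For what it's worth, here is a rough sketch of where that leftover profile data normally lives on Windows. These are the usual default per-user locations and may differ on your machine; the script only lists what it finds and deletes nothing, so check before removing anything by hand:

    # Rough sketch: list leftover Firefox profile data under the usual default
    # per-user Windows locations. Purely informational; nothing is deleted.
    import os

    candidates = [
        os.path.expandvars(r"%APPDATA%\Mozilla\Firefox\Profiles"),       # settings, extensions
        os.path.expandvars(r"%LOCALAPPDATA%\Mozilla\Firefox\Profiles"),  # caches
    ]

    for base in candidates:
        if not os.path.isdir(base):
            print(base, "(not found)")
            continue
        print(base)
        for profile in os.listdir(base):
            ext_dir = os.path.join(base, profile, "extensions")
            n_ext = len(os.listdir(ext_dir)) if os.path.isdir(ext_dir) else 0
            print("   ", profile, "-", n_ext, "extension item(s)")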
BrowserFox is one of the pieces of malware I had on my home computer recently, and it would persist add-ons across new browser installs (notably the "OutrageousDeals!" adware). It may be worth running a malware scan using your favorite application to see if it picks up anything. FrameDrag (talk) 14:36, 27 January 2016 (UTC)

Windows Batch File Repair

I've Googled how to repair Windows Batch Files and can only find how-tos on repairing Windows with batch files.

How do I repair an improperly working batch file containing a game? Theskinnytypist (talk) 18:56, 27 January 2016 (UTC)

Just open it in a text editor (not a word processor that adds unwanted characters) and edit it, then re-save. Dbfirs 19:32, 27 January 2016 (UTC)

Finding where in my computer an open document is located

When I'm in Finder and highlight some document, say after a search, I know exactly where it's located on my computer because it tells me at the bottom of the Finder window. I can also press Command+I and that will also tell me its saved location. However, if I have a document open, say a Word document, and want to be told where that document is saved on my computer, how do I do that? ("Command+I" in an open document just invokes italics.) I know of indirect ways. For example, I could copy and paste some unique text from it and do a search, but I was thinking there must be some easier and more direct way. Thank you.--108.21.87.129 (talk) 18:58, 27 January 2016 (UTC)

In most applications, including Microsoft Office, you can just click "save as" and it will show the current saved location as the default. Dbfirs 19:38, 27 January 2016 (UTC)
(ec) It probably depends on version, but in Word Starter 2010, just click on File, and you will see the document location under "Information about <file name>". Rojomoke (talk) 19:43, 27 January 2016 (UTC)
Ah, I've figured it out. Save As is not reliable: in Excel, for example, it always tries to save in the last location where you saved any Excel file, not the location the open file came from (a terrible feature). But the answer is that, while I don't have the facility Rojomoke describes, I can click on Properties from the File menu tab, and in there click the General tab, where the location is set out.--108.21.87.129 (talk) 20:51, 27 January 2016 (UTC)
My older version of Excel does that only for a regular save, but I'm glad you've found a solution. Dbfirs 21:24, 27 January 2016 (UTC)[reply]
On OS X, you can Command+Click the document icon in most NSWindows. (It's the little icon, usually immediately to the left of the file name, in the window's title bar.) This will show you the file-system location of the document that is open. You can also drag and drop the document icon onto Terminal or a text editor, and it will insert the pathname of the document into the terminal.
Most applications, including Microsoft Word for OS X, provide this "smart" document icon feature.
Once the UI springs up, you can select the parent folder and immediately open a Finder window there. Or, you can drag the icon from the open window's title bar onto a dock icon for a different application (if that application knows how to open the file type).
I can't seem to recall the name for this amazing UI feature; but here is technical documentation about how it works: NSWindow setTitleWithRepresentedFilename: (and window:shouldPopUpDocumentPathMenu:).
I want to say that the common name for this feature is the "active document icon" or "smart document icon" or something to that effect. If you play with the feature, it can do all kinds of other useful things related to drag-and-drop.
Nimur (talk) 00:04, 28 January 2016 (UTC)


January 28

Mouse Cursor Jumping Around

I edit Wikipedia using either of two computers: a desktop computer running Windows 7, and a laptop computer that is now running Windows 10, both using a mouse. (I don't plan to learn to use a trackpad.) On the laptop, I frequently have a problem with the mouse cursor jumping around, moving to where I don't want it. If I notice this before typing, I can move it back, and may have to undo the selection it made while moving. If I don't notice until I type, I then realize that a considerable amount of text (highlighted during the jump) may have disappeared, or just that I have typed in the wrong place. This requires Ctrl-Z. No permanent harm done, but a nuisance. This never happens with the desktop machine. I have two possible theories as to the cause, and am open to others. First, on the desktop, I have the mouse sitting on a brown wooden computer desk, but with the laptop, the mouse may be sitting on a white tablecloth, a white piece of paper, or a magazine. Does the issue have to do with the surface, in which case would buying a small mouse pad help with the laptop? Alternatively, is the problem due to a bug or a setting in Windows 10? I knew that Windows 8, before upgrading to Windows 8.1, sometimes had mouse jumps. Is there a mouse-jump bug or misfeature in Windows 10? If so, this will cause me to choose not to upgrade from Windows 7 (which works fine for me) to Windows 10. Which is the more likely explanation? Is there a third explanation? Robert McClenon (talk) 00:13, 28 January 2016 (UTC) By the way, this also happens in Word. It isn't a Wikipedia thing in particular. It's a mouse thing. Robert McClenon (talk) 00:14, 28 January 2016 (UTC)

On the laptop you have a trackpad that you don't use, is that right? Try actually disabling the trackpad; you should be able to do this somewhere under Device Manager. I have seen this issue caused by trackpad bugs. Vespine (talk) 04:59, 28 January 2016 (UTC)

Laserjet printer

Dear Sir

I purchased, second-hand, an Epson AcuLaser C2900 color laser printer.

It has full blue, red, and yellow toner cartridges; however, the black one is empty. I am wondering if this printer will continue to print black text even with an empty black cartridge, by mixing the blue, red, and yellow toners together to make black. Will this printer do this?

I read on the manufacturer's website that it offers "auto-switching to monochrome printing when color toners are depleted", which is the reverse of what I am asking. However, the issue of an empty black cartridge is not addressed.

Please let me know here; I am forbidden from giving you my email. — Preceding unsigned comment added by 122.15.163.132 (talk) 00:55, 28 January 2016 (UTC)

I don't really have a reference, but I'm fairly certain most printers cannot print black using just the color cartridges. While it would no doubt be "technically" possible to achieve this, I believe most printers require at least the black cartridge to print anything. Vespine (talk) 02:50, 28 January 2016 (UTC)
I doubt it will. The auto switch to monochrome is the opposite - print in B&W if the color ink is out. Bubba73 You talkin' to me? 04:39, 28 January 2016 (UTC)[reply]