Wikipedia:Reference desk/Computing
Revision as of 16:39, 3 January 2013
Welcome to the computing section of the Wikipedia reference desk.
Main page: Help searching Wikipedia
How can I get my question answered?
- Select the section of the desk that best fits the general topic of your question (see the navigation column to the right).
- Post your question to only one section, providing a short header that gives the topic of your question.
- Type '~~~~' (that is, four tilde characters) at the end – this signs and dates your contribution so we know who wrote what and when.
- Don't post personal contact information – it will be removed. Any answers will be provided here.
- Please be as specific as possible, and include all relevant context – the usefulness of answers may depend on the context.
- Note:
- We don't answer (and may remove) questions that require medical diagnosis or legal advice.
- We don't answer requests for opinions, predictions or debate.
- We don't do your homework for you, though we'll help you past the stuck point.
- We don't conduct original research or provide a free source of ideas, but we'll help you find information you need.
How do I answer a question?
Main page: Wikipedia:Reference desk/Guidelines
- The best answers address the question directly, and back up facts with wikilinks and links to sources. Do not edit others' comments and do not give any medical or legal advice.
December 28
Do you need Internet access to use a QR reader?
I am trying to create a QR code with my Wifi SSID and password in it so that friends can come over and scan my QR code and their phones will automatically connect to my home Wifi. However, after creating the code, I tried it with my Android phone and turned my phone's 4G and Wifi off and used Google Goggles to scan the code. It didn't work because the app said it needs internet access.
My question is, do QR codes have the encoded message embedded in them, or does the QR scanner device need to access the internet to be able to read the code? If the former, then is it just Google Goggles' fault? I.e., would this work if I used another QR reader?
Acceptable (talk) 03:06, 28 December 2012 (UTC)
- It is Google Goggles' fault. The QR code itself encodes the message, but the Google Goggles phone application only sends the image to a Google server for decoding, so Google Goggles must already be connected to the Internet to decode QR codes. You might, however, find an Android app that can do the QR code decoding locally on the device without an Internet connection. This application claims local decoding capability, and there may be others as well. --hydrox (talk) 04:54, 28 December 2012 (UTC)
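For reference, the payload of a Wi-Fi QR code is just a text string; the commonly used convention (originating with the ZXing barcode project and understood by most Android scanners that support Wi-Fi codes) looks roughly like the following, where the network name and password are placeholders:

    WIFI:T:WPA;S:MyNetworkName;P:MyPassword;;

Semicolons, colons and backslashes inside the SSID or password generally need to be escaped with a backslash, and whether a given reader app then joins the network automatically is up to that app.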
C++ standard library list container
Does std::list<SomeObject> guarantee that each item that is added to the list will keep the same address until it is deleted? I ask because I need to store references to list-items in separate std::maps, and want to be certain that the list-items stay in place (which rules out std::vector). --NorwegianBlue talk 09:03, 28 December 2012 (UTC)
- Attempt at clarification: I need to allocate a bunch of objects, and would like them to reside in a container. In addition, I need to create various maps which reference individual objects. To make sure that such references are not invalidated because of additions of new objects to the container, I need to chose a container type which guarantees that existing objects do not move in physical memory when new objects are added. I know that std::vector would be a bad choice, because elements might move when the vector increases in size. Would std::list be a safe choice? --NorwegianBlue talk 16:01, 28 December 2012 (UTC)
- From my reading of a draft standard (n1905), I think list will be stable. The "capacity" section for std::vector talks explicitly about reallocation on resize and how this invalidates references, pointers, and iterators to elements; the comparable section for std::list does not. The only things the standard mentions that invalidate references to members of a std::list are explicitly destructive operations like pop and erase, as you would expect. But neither, to be pedantic (which is what language and library standards are all about), does it explicitly say (that I can find) that references etc. are valid until the item is explicitly destroyed - it would be bonkers if they were invalidated somehow, and the standard very strongly implies that the implementation will be a doubly linked list of items (e.g. list.insert insists that this be a constant-time operation, whereas vector.insert does not). Is that a guarantee? Um, er, hmm, probably... 176.250.45.76 (talk) 16:42, 28 December 2012 (UTC)
- Thanks! I got hold of a copy of "The C++ Standard Library: A Tutorial and Reference" by Nicolai M. Josuttis (1999), which states (p. 167): "Lists don't provide operations for capacity or reallocation because neither is needed. Each element has its own memory that stays valid until the element is deleted". Not the standard, of course, but it seems reasonably safe... --NorwegianBlue talk 20:07, 28 December 2012 (UTC)
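To make the guarantee being relied on here concrete, a minimal sketch (not from the thread; the element type and map key are made up for illustration) - pointers into a std::list stay valid as the list grows, which is exactly what would not be safe with std::vector:

    #include <iostream>
    #include <list>
    #include <map>
    #include <string>

    struct SomeObject {
        std::string name;
    };

    int main() {
        std::list<SomeObject> objects;
        std::map<std::string, SomeObject*> index;  // non-owning pointers into the list

        objects.push_back({"first"});
        index["first"] = &objects.back();

        // Growing the list never relocates existing nodes, so the stored
        // pointer remains valid (a std::vector might reallocate here).
        for (int i = 0; i < 1000; ++i) {
            objects.push_back({"filler"});
        }

        std::cout << index["first"]->name << '\n';  // still prints "first"
    }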
European Union v. Microsoft
Windows 7 E, the edition of Windows 7 sold by Microsoft in the European Union, does not include Internet Explorer by default. That means it is difficult to get a web browser for your computer. How do EU users deal with it? Czech is Cyrillized (talk) 14:27, 28 December 2012 (UTC)
- From memory it had a list that you selected from on install. -- Q Chris (talk) 14:38, 28 December 2012 (UTC)
- I just set up a brand new Windows 8 Acer laptop for someone, bought in the UK. It came with IE, and I had to manually go to the Chrome and Firefox sites to download them. I never saw a "which browser(s) do you want" screen on this machine (although I have seen it in the past on other machines). 176.250.45.76 (talk) 14:42, 28 December 2012 (UTC)
- Incidentally, even systems which don't feature IE really still have almost all of it, just without the IE wrapper application. They still ship with Trident (layout engine), which is used by other applications (e.g. Steam) to render HTML/CSS content. And they ship the ancillary libraries (for things like media, http[s], xml, json, svg) too. 176.250.45.76 (talk) 14:52, 28 December 2012 (UTC)
- Windows 7E was just a shot Microsoft took across the bows of the EU and it never really was sold.[1] For a while Windows 7 shipped with a "ballot" allowing a choice of browser, but a regrettable "oversight" led to this getting omitted, so again no choice was provided.[2] The case is again going to court to give Microsoft another opportunity for avoidance.[3] Thincat (talk) 15:29, 28 December 2012 (UTC)
What is the most Windows 7-like Linux distribution (that also has a live CD)? (explained version of this usual question)
What is the most Windows 7-like Linux distribution (that also has a live CD)?
Yes, I know people post this question a lot (without explaining what they really mean), and the people answering it usually post a link to some "Linux is not Windows" explanation, or just say "why don't you just use Windows 7?", so I will explain my question:
- NO, I am not talking about the interface. The distro's interface can be anything as long as it's not super "alien".
- This Linux distribution will need to be installable on any computer/notebook/netbook that Windows 7 could be installed on (PS: no need to work, or work well, on server machines or as an OS for embedded systems).
- Same performance or better.
- Almost as easy to use, or easier.
- Able to be used for your average user stuff (excluding gaming, obviously, as this requirement would make my entire question useless).
187.115.238.253 (talk) 19:16, 28 December 2012 (UTC)
- Any modern distro you like, plus KDE. ¦ Reisio (talk) 19:26, 28 December 2012 (UTC)
- Your requirement 2 is probably impossible to satisfy - there will almost certainly be some hardware for which Windows 7 drivers are available but linux drivers aren't. I don't think overall performance or software/hardware compatibility varies that much between recent versions of the most popular distros, and the interface can usually be customised to a great extent. Maybe look at comparison of Linux distributions? 130.88.99.231 (talk) 17:30, 2 January 2013 (UTC)
- Certainly correct, but the vital hardware that will not be supported at all will be rare indeed. ¦ Reisio (talk) 00:02, 3 January 2013 (UTC)
- Thanks, 130.88.99.231, this totally answered my question.
HELP with iPhone contacts!!
Hi, I need some help with my deleted iPhone contacts.
I got a new iPhone, and wanted to get my contacts from my old iPhone.
I synced it with iTunes, and to make the backup of the contacts I used an option to put them in my Gmail.
I had used it before with Outlook, so I did not read it carefully, and clicked "merge contacts".
It deleted everything in my iPhone's contacts, and put only my email contacts there.
My contacts were not in my Gmail, and I tried a backup restore in iTunes, but it only gives me my emails.
I have not used iCloud (stupid me), and can't restore them.
If someone knows the solution to the problem, or has experienced this problem before, please help.
Thank you in advance. 79.106.109.203 (talk) 19:30, 28 December 2012 (UTC)
December 29
New tablets
Does anyone know exactly when manufacturers are going to publicly announce new tablet models for 2013? 74.15.143.46 (talk) 06:15, 29 December 2012 (UTC)
- Quite a number have already been launched for the 2012 holiday season. Those that will be launched only in 2013 will obviously not be launched for another two days. ¦ Reisio (talk) 23:36, 29 December 2012 (UTC)
C++ double comparison
Suppose you have this C++ function:
void func(double a, double b) {
    if (a > b)       { /* do something */ }
    else if (a <= b) { /* do something else */ }
    else             { /* do 3rd thing */ }
}
Does C++ guarantee that the "3rd thing" will never be done? Is it ever possible, due to numerical error, that neither a>b nor a<=b are satisfied? --140.180.249.194 (talk) 10:00, 29 December 2012 (UTC)
- That's an astute question; it's always wise to think hard about how floating point number comparisons work. What your question amounts to is whether IEEE double-precision floating point numbers form a strictly ordered set. As the floating point article says, they almost do - "Finite floating-point numbers are ordered in the same way as their values (in the set of real numbers)". That leaves the special values NaN, -ve infinity, and +ve infinity. And here's the rub. NaN is not < NaN, > NaN, or even == NaN. As the NaN article says, "A comparison with a NaN always returns an unordered result even when comparing with itself". So if you pass 0.0/0.0 (which is NaN) as either of the arguments to your function, it will do the third thing. -- Finlay McWalterჷTalk 10:52, 29 December 2012 (UTC)
- Exactly. And in the more general case, you can overload the operators with any binary relation. It's not a good idea to overload > and <= independently, but there is nothing in the language to stop you. --Stephan Schulz (talk) 11:13, 29 December 2012 (UTC)
- Unequal comparisons are a slight variation on the rule; they are treated exactly the same as not-equal, so the idiom (x != x) { NaN case } works okay. Only the equals and not-equals comparisons are guaranteed not to signal if a signalling rather than quiet NaN is given to them. Dmcq (talk) 13:29, 29 December 2012 (UTC)
- Not terribly familiar with C++, but what happens with null/missing variables? In some languages, if either or both variables are null/missing, a comparison would fail >, <, and =, and fire the third thing. Gzuckier (talk) 04:14, 31 December 2012 (UTC)
- The only imprecision in IEEE 754 floating point calculations is in the rounding, and comparisons don't involve any rounding, so it's impossible for < and ≥ to give inconsistent answers due to numerical error. I'm not 100% sure the C++ standard guarantees this, though. -- BenRG (talk) 04:44, 1 January 2013 (UTC)
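A small self-contained sketch (not from the thread) of the NaN case described above - with a quiet NaN argument neither a > b nor a <= b holds, so the third branch runs:

    #include <cmath>
    #include <iostream>
    #include <limits>

    void func(double a, double b) {
        if (a > b)        std::cout << "a > b\n";
        else if (a <= b)  std::cout << "a <= b\n";
        else              std::cout << "3rd thing (unordered, e.g. NaN)\n";
    }

    int main() {
        func(1.0, 2.0);                                       // prints "a <= b"
        func(std::numeric_limits<double>::quiet_NaN(), 2.0);  // prints the 3rd thing
        std::cout << std::boolalpha
                  << std::isnan(0.0 / 0.0) << '\n';           // true: the portable NaN test
    }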
Apple business strategy
Why did Apple release a new iPad in November so soon after they released one in March? Some say it's to capitalise on the Christmas season and others say it's to change to a twice-yearly product release cycle to compete better with other manufacturers. But at the same time, launching all their major products so close to each other towards the end of the year would take away the hype a bit. Tbo 157(talk) 22:19, 29 December 2012 (UTC)
- It's not necessarily intentional. Some projects get delayed while others are on-time. They could intentionally hold up a release to space them out better, but technology changes so rapidly, that would risk depreciating their product. StuRat (talk) 23:00, 29 December 2012 (UTC)
- Apple has a long history of releasing newer/better/improved products at an interval short enough to annoy the less wealthy consumers of their products. :p Realistically there is likely no interval that would both annoy no one and not be competitive business suicide. ¦ Reisio (talk) 23:39, 29 December 2012 (UTC)
December 30
840 million flops?
How many tera-, giga- or megaflops is this? I realize they're 10 to the 12, 9 and 6 respectively, but I'm not too confident in my ability to get the reverse conversion of the exponential right. Thanks in advance! Market St.⧏ ⧐ Diamond Way 08:07, 30 December 2012 (UTC)
- OK, I think I figured this out here [4]: if I am reading this right, 840 million flops = 840 megaflops?? Market St.⧏ ⧐ Diamond Way 08:13, 30 December 2012 (UTC)
Kilo = thousand
Mega = million
Giga = billion
Tera = trillion
- So, 840 million flops = 840 megaflops. There can be a slight difference if we're doing a conversion involving binary and decimal, but it's around there, in any case. StuRat (talk) 08:15, 30 December 2012 (UTC)
- Thanks for the confirmation StuRat! Market St.⧏ ⧐ Diamond Way 10:25, 30 December 2012 (UTC)
- 1 megaflops = 1,000,000 flops exactly, see the table at FLOPS. (Binary prefixes are only used in some measurements of bytes or bits.) --Bavi H (talk) 19:49, 30 December 2012 (UTC)
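Spelled out as a worked conversion (just restating the figures above in equation form):

    840{,}000{,}000\ \text{FLOPS} = 840 \times 10^{6}\ \text{FLOPS} = 840\ \text{megaflops}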
Microsoft Word and Excel
Let's say (hypothetically) that I am typing a document in Word. I just finished typing on page 20 of the document. Whenever I close the document and then subsequently re-open it, my cursor is always at the top of page 1. Is there any way to get the cursor to remain at some other location within the document (let's say, for example, at the bottom of page 20, where I had last been working)? Same question for Excel. Thanks! Joseph A. Spadaro (talk) 23:51, 30 December 2012 (UTC)
- Try ⇧ Shift+F5 after opening the document. PrimeHunter (talk) 00:18, 31 December 2012 (UTC)
- Thanks, I will try that and see what happens. But, is there a way to have it automatically open up at the last place that I was in the document? So that I don't have to do anything at all, other than my opening up the document. Is there some default setting somewhere that can be changed? Thanks. Joseph A. Spadaro (talk) 01:02, 31 December 2012 (UTC)
December 31
Memory advantage of a hypothetical computer whose hardware used -1, 0, and 1
If there were a computer that used three values per 'bit' instead of binary two at the hardware level, would there be a simple third improvement as far as how many different values could be stored in a set number of memory locations of the same word size, or is it something different? 67.163.109.173 (talk) 02:32, 31 December 2012 (UTC)
- That would represent a 50% increase in the amount of data which could be stored per bit. However, existing programs wouldn't be able to fully take advantage of this. For example, characters are commonly encoded as either 7 bits (128 values) or 8 bits (256 values). Even if you were to remap this to "tri-bits", you would have to settle for either 81, 243, or 729 possible values for a character, using 4, 5, or 6 tri-bits. Conversely, there might be some operations where tri-bits are more efficient, like in comparing two values. You could use a single tri-bit to represent less than, equal to, or greater than, while you would need at least 2 regular bits to represent these 3 possibilities. StuRat (talk) 02:39, 31 December 2012 (UTC)
- The increase in capacity is actually log2(3) ≈ 1.58, or 58%, assuming the trits occupy the same amount of space as the bits. Most consumer solid state drives actually use more than two states per cell to increase density—see multi-level cell. -- BenRG (talk) 04:35, 1 January 2013 (UTC)
- OK, I don't understand that at all. 3 states is 50% more than 2 states, so how do you get 58% ? StuRat (talk) 06:16, 1 January 2013 (UTC)
- With k bits you can represent 2^k different states. Turning that around, to represent n different states you need log2(n) bits. With k trits you can represent 3^k states, which would require log2(3^k) = k·log2(3) bits. -- BenRG (talk) 16:55, 1 January 2013 (UTC)
- OK, I see what you're saying. Let's run some numbers:
Possible states:
 n   bits   trits
 -   ----   -----
 1      2       3
 2      4       9
 3      8      27
 4     16      81
 5     32     243
 6     64     729
 7    128    2187
 8    256    6561
- If we compare 3^5, which gives us 243 states, with 2^8, which gives us 256 states, that's about the same number of states, with 8/5 or 60% more bits than trits. StuRat (talk) 19:30, 2 January 2013 (UTC)
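The same comparison as a worked equation (these are just the numbers already quoted above, in LaTeX form):

    \log_2 3 \approx 1.585, \qquad 5 \log_2 3 \approx 7.92 \ \text{bits}, \qquad 3^5 = 243 \approx 2^8 = 256

so one trit carries about 1.58 bits of information, which is where the 58% figure comes from.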
- As our article section on the representation of bits elaborates, current hardware uses the absolute minimum discernible difference in some physical property - like the stored electric charge on a capacitor, or the variation in the reflectivity of a surface pit in a compact disc - to represent either 1 or 0. Almost by definition, if you introduced a different enumeration other than binary, reading each digit would require more power, or more spatial area/volume, or more sensitive electronics, or some other improvement in the resolution of the hardware, compared to what we have today: so, you'd lose energy efficiency, or your devices would be physically larger, or more expensive. From the viewpoint of logical memory addressability, each memory location would store more information; but from the viewpoint of physical memory, the storage and manipulation would be less effective and less efficient. This is the reason most of today's digital electronics opt for binary representation for all stored information. Nimur (talk) 04:01, 31 December 2012 (UTC)
- So when these people say this: "Due to the low reliability of the computer elements on vacuum tubes and inaccessibility of transistors the fast elements on miniature ferrite cores and semiconductor diodes were designed. These elements work as a controlled current transformer and were an effective base for implementation of the threshold logic and its ternary version in particular [7]. Ternary threshold logic elements as compared with the binary ones provide more speed and reliability, require less equipment and power. These were reasons to design a ternary computer." That penultimate sentence is not possibly true? I wish I could visualize or that they had pictures of how exactly ferrite cores and diodes were configured to make a ternary hardware implementation. 67.163.109.173 (talk) 14:33, 31 December 2012 (UTC)
- I read that page a few times. It uses English a bit imperfectly, but I think I finally interpret your quoted section to simply mean that ferrite cores are an improvement over vacuum tubes - whether used for binary digital logic or any other logic circuit. Were the circuits tuned to their optimal performance, the same rule would apply: binary is more information-dense, as I explained above. Needless to say, your article describes Soviet computers of the 1970s: experimental computer technologies, methods, and practices that were experimental in that era and that community generally don't apply to modern systems. For an overview, see Science and technology in the Soviet Union. Soviet computers are fascinating from a historical perspective. I have always found it interesting that the C compiler was (reportedly) unavailable in most of the U.S.S.R. until after the 1990s, which I have heard forwarded as the explanation for why post-Soviet programmers trained in the Russian education system still prefer Delphi and Pascal. This is also a reason why some experts considered Russia to be ten years behind in Computer Science on the whole. It's an interesting spin on "legacy code." Nimur (talk) 01:03, 1 January 2013 (UTC)
- Interesting what you said about the Russians not being able to get a C compiler for so long. What would have stopped them from being able to get gcc, which Stallman offered very freely to anyone who wrote? Was gcc declared non-exportable by the US gov't? 67.163.109.173 (talk) 02:09, 1 January 2013 (UTC)
- This is drifting far from the original question about ternary digital logic; but it's an interesting topic. It seems that nothing "prevented" access to GCC; but massive national/institutional inertia slowed its adoption. gcc was first made public only in the late eighties, and remember that even by 1990, there was not yet a "world wide web" nor widespread use of the Internet in Russia. HTTP had not yet been invented, let alone become widespread; and when predecessor network software was used, it was inconvenient and impractical for a Soviet programmer to peruse an American "website." Basic network connectivity between Soviet computer facilities was sparse. Connectivity to networks that hosted American and international content was even rarer. Long-distance international phone calls were expensive (and suspicious); "browsing" and "web surfing" were far less casual than today. Information about new software and methodology took much longer to disseminate; today, new software versions launch over timespans of just a few hours, but two decades ago, a software update rollout might take place over months or years. Furthermore, though today gcc is often the "obvious" choice of compiler, the GNU C compiler was hardly the "industry standard" in 1990 - especially in the U.S.S.R. Early GCC was buggy, largely unknown to most programmers, and only supported a few architectures. Soviet computers were built with weird processors; lucky well-funded researchers had such "personal computers" as Soviet Z80 clones or Japanese processors. IBM compatible machines, including all x86 architectures, were extremely rare. And you might take for granted that free Linux/Unix exists today; but in 1987, there was no Linux yet; and Unix was an American commercial software system designed to run on American computer hardware.
- Finally, even when free (or commercial) C compiler software was functional and available, it was still not widespread among Soviet academics or industry programmers. Consider that today, with so much free software easily available via internet, at zero cost, most people still do not have a copy of gcc on their personal computers and devices. Neither cost nor ease of acquisition are the limiting factor, today. Even if software is readily available, a community of experience and expertise is needed for software adoption to become widespread. C was a language designed and used in the United States; despite its advantages for system-programming, it did not seem to get traction in the U.S.S.R. until quite a bit later.
- To directly answer your question: I do not believe that gcc ever underwent regulatory oversight for export or ITAR considerations. Much kerfuffle was made during the 1980s and 1990s regarding free software implementations of cryptography, data compression, and hashing algorithms; but I don't think the compiler implementations ever drew much attention. Nimur (talk) 09:08, 1 January 2013 (UTC)
- I think the reason modern computers are binary is simply that VLSI transistors lend themselves to binary logic but not ternary logic. That wasn't necessarily true of other computing technologies. -- BenRG (talk) 04:35, 1 January 2013 (UTC)
- As a side note, what you're describing is known as balanced ternary. --Carnildo (talk) 04:31, 31 December 2012 (UTC)
- All true, with minor historical exceptions, for memory and logic. Communication works under different economic constraints, resulting in many kinds of M-ary transmission. Jim.henderson (talk) 14:49, 31 December 2012 (UTC)
- As well as the Setun computers you might be interested in Thomas Fowler (inventor). There's a picture of the stained glass window in St Michaels Church, Great Torrington showing his ternary calculator built out of wood at Torrington Museum Dmcq (talk) 01:50, 1 January 2013 (UTC)
- Very cool! I also found a video of someone operating an apparently cardboard implementation of the machine here. 67.163.109.173 (talk) 03:27, 1 January 2013 (UTC)
Microphone humming
When I use a microphone (inserted into the audio jack of a desktop computer), it tends to pick up a very low frequency sound. Trying to muffle the mic with cloth, various forms of tapping the mic, and moving the mic further from the computer/any power outlets or changing its direction, etc. generally have no effect on reducing this. I did find that it harmonized very well (it sounded very much like the tonic) with Preparing the Chariots from the Hunger Games soundtrack (which I believe is in B major), which leads me to believe it is 60 Hz mains hum. Two methods I've found for making the hum stop are: 1. just waiting, and sometimes, the humming will spontaneously stop being picked up (this tends to be unreliable and take over an hour) and 2. sticking it into the metal bell of my bass clarinet and clanking it around for a few seconds, after which, when I remove it, the sound is no longer picked up 95% of the time. So my question is this: am I right to believe it's mains hum? And can someone explain how my second method works? Brambleclawx 04:35, 31 December 2012 (UTC)
- I find it's usually something else on the same circuit. Just increasing the distance from the interference source can help, and perhaps you also did that with your 2nd method. A 2nd theory might be that there's something loose that vibrates, and shaking it around tends to move it out of position to vibrate. StuRat (talk) 04:39, 31 December 2012 (UTC)
- Just plain shaking the mic doesn't seem to do anything, though. (This isn't really an issue for me, since I do have a solution that works; I'm just curious as to the source/reason behind how such an unusual method seems to work.) Thank you for your hypotheses. Brambleclawx 15:05, 31 December 2012 (UTC)
- Perhaps there could be a build-up of static electricity in the microphone ? StuRat (talk) 22:01, 31 December 2012 (UTC)
- Is the computer grounded? If not, I suggest you ground it. You can also try connecting the computer to the electrical grid through an on-line UPS. Ruslik_Zero 14:58, 31 December 2012 (UTC)
- I agree with Ruslik0 that proper grounding of the computer, peripherals, and anything else on the same circuit is definitely the first thing to check for. You should also make sure that the microphone cable doesn't run parallel to any power cords or near the power supply of your PC. 209.131.76.183 (talk) 18:56, 2 January 2013 (UTC)
Update for Microsoft Excel
Every morning when I start up Excel to activate a file, after I enter the password, Excel says it has stopped working and is looking for a solution. A moment later, it opens the file and asks for the password again and everything is fine. This behavior is very consistent. What's happening? I'm sure there is no hope of getting an upgrade, but is there anything I can do? --Halcatalyst (talk) 16:33, 31 December 2012 (UTC)
- Could we be dealing with two levels of passwords ? Perhaps first it asks for a network password, then, when it determines it can't work in that manner, it asks for a standalone password, and continues in that mode. StuRat (talk) 22:04, 31 December 2012 (UTC)
- No, that isn't it. But I forgot to mention it's Excel from Office 2010 running under Windows 7. --Halcatalyst (talk) 00:32, 1 January 2013 (UTC)
Pronunciation of NAS?
Is the acronym of Network-attached storage (NAS) usually pronounced like "nass" (rhymes with mass) or "naz" (rhymes with jazz)? --71.189.190.62 (talk) 19:39, 31 December 2012 (UTC)
- From my brief review of YouTube, it sounds like the Brits say "nazz" while North Americans say "nass" or "noss", although in most cases people say it so quickly it's difficult to tell which; there's an element of hearing whatever you want when listening to these. You probably should move this to the Language section as opposed to the computing one. Shadowjams (talk) 21:57, 31 December 2012 (UTC)
- I either say "network attached storage" or N-A-S (like F-B-I). StuRat (talk) 22:05, 31 December 2012 (UTC)
Pixels on tablets
Can you see pixels on the Nexus 7, iPad Mini or Kindle Fire HD? I heard that with a high enough resolution you can't see pixels. 82.132.238.177 (talk) 23:51, 31 December 2012 (UTC)
- Apple’s claim is that you won’t notice individual pixels at an ordinary viewing distance. So far I’m unaware of (the likely inevitable) serious competitors, although I can’t say individual pixels have really been bothering me for the past decade or so. If you see them, you’re probably putting your eyes too close to the screen. ¦ Reisio (talk) 01:21, 1 January 2013 (UTC)
- Really? Many Apple fans claim they see a big difference between Retina and non-Retina. Is this partly just marketing hype then? There's no actual scientific research or anything on this? 82.132.218.163 (talk) 01:48, 1 January 2013 (UTC)
- There’s definitely a difference, but if it were that great of one, you wouldn’t be asking these questions. ¦ Reisio (talk) 02:06, 1 January 2013 (UTC)
- Well, being honest at normal reading distance, I can't see the difference between retina and non retina which is why I questioned it. 82.132.219.85 (talk) 02:35, 1 January 2013 (UTC)
- It might be helpful if we could get some actual numbers, in terms of pixels per inch. StuRat (talk) 03:20, 1 January 2013 (UTC)
- Ah, I see they are at the first link, which was renamed from Retina display. I typically notice pixels when a drop of water lands on the screen, acting as a tiny magnifying glass. I wonder if the Retina display can pass this test. Or, more importantly, if you draw a 1 pixel wide line on it at a 1 degree angle, will it have jaggies or look blurry ? StuRat (talk) 03:23, 1 January 2013 (UTC)
- The resolution at the center of the fovea is 30 seconds of arc (source: visual acuity), or π/21600 radians, so the pixels of a display will be invisible at distances beyond roughly 21600/(πD) where D is the pixel density in dots per unit length. The screens of the iPad Mini, Nexus and Kindle Fire HD 7", and Kindle Fire HD 8.9" are 163, 216 and 254 dpi respectively (source: list of displays by pixel density). The corresponding minimum viewing distances are 1.1 m, 0.8 m, and 0.7 m. This may be an oversimplification, though. -- BenRG (talk) 04:19, 1 January 2013 (UTC)
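As a worked instance of that formula (using the 163 dpi figure quoted above; the density is per inch, so the distance comes out in inches):

    d_{\min} = \frac{21600}{\pi D} = \frac{21600}{\pi \times 163} \approx 42\ \text{inches} \approx 1.1\ \text{m}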
- Yes, what is displayed on the pixels certainly matters. If it's one solid color, it certainly will be harder to see the pixels than if it's a single-pixel-width line at a slight angle, in sharp contrast to the background. The screen brightness and contrast, as well as the ambient lighting, also matter. The screen surface also makes a difference, as some screens will blur the pixels more than others. StuRat (talk) 06:12, 1 January 2013 (UTC)
- By the solid-color test even a 100-dpi panel's pixels are invisible at a typical distance. -- BenRG (talk) 20:06, 1 January 2013 (UTC)
- That depends on the pixels. Some technologies seem to bump pixels right up against each other, while others have a black outline around them. StuRat (talk) 19:02, 2 January 2013 (UTC)
- In practical use I'm certainly not ever aware of individual pixels or of any blockiness - but then one mostly views photographic images and anti-aliased text, where you wouldn't expect to notice even much larger pixels. I made up a sample 1280x800 test image, with an individual black pixel on a white background (and some one-pixel wide lines). With my reading glasses on, the single pixel is perceptible out to about 600mm. The one pixel wide lines are perceptible out to about 2000mm. Looking at that one black pixel, which is significantly less noticeable than the dozens of tiny flecks of dust and stuff that one always finds on such a screen, it's hard to imagine a significantly smaller pixel gauge yielding a worthwhile improvement in appearance. -- Finlay McWalterჷTalk 17:51, 1 January 2013 (UTC)
- To test resolving power you should display alternating white and black lines and see how far away you can distinguish them from uniform gray. I just tried it on my 120-dpi laptop display and got a resolution of around 50 arc seconds (with prescription reading glasses to correct mild astigmatism). Based on this I probably could see the iPad Mini's pixels at a typical viewing distance, but it's marginal. My distance vision tested by a Snellen chart is a bit better, around 40 arc seconds. -- BenRG (talk) 20:06, 1 January 2013 (UTC)
- Did you try rotating the 1 pixel wide line by a degree, to see if you can detect any jaggies or fuzziness where each jag would otherwise be ? StuRat (talk) 19:05, 2 January 2013 (UTC)
January 1
Youtube "sort by" for results is gone? (relevance, view count, etc)?
When searching for videos, the normal "Sort by..." is not there. There is no way to sort the videos at all, just scroll through hoping to find the right one. Since this is a very frustrating problem, I would have thought more people would have complained about it, so I assumed it was something only I was getting. If anyone knows how to fix this, or if it's possibly a virus or something on my computer, I'd appreciate the help. Seriously, it's just so f__kin ridiculous. What logic is there in removing it??? I think it's a perfect example of YouTube losing touch with its base... that they would remove something as important as "sort by". Venustar84 (talk) 02:14, 1 January 2013 (UTC)
- There is a button "Filters" just below the search field. When you click on it, the "Sort by..." menu item appears, together with other options. --NorwegianBlue talk 14:56, 1 January 2013 (UTC)
- When they first pushed this new design, these sorting options were missing. They only recently re-added them. Perhaps you are somehow viewing an old version? -- 143.85.199.242 (talk) 14:45, 2 January 2013 (UTC)
- It appears to have to do with language settings, see my response to the third post by VenusStar84 on the subject. --NorwegianBlue talk 22:47, 2 January 2013 (UTC)
About Cookies in IE
Does each website have access to my cookies from all websites I have visited, or only the cookies of their own website?--Inspector (talk) 05:03, 1 January 2013 (UTC)
- Only cookies on their web site. (Specifically, their domain.)—Best Dog Ever (talk) 05:33, 1 January 2013 (UTC)
- Well, that's how it's supposed to be, but there's no guarantee that somebody else couldn't access those cookies. Also, if one company merges with another, they presumably gain access to each other's cookies, meaning wider and wider access as companies combine into bigger ones. StuRat (talk) 06:08, 1 January 2013 (UTC)
- Well, how is the cookie accessed?--Inspector (talk) 06:23, 1 January 2013 (UTC)
- Any type of a virus can access anything on your computer. If cookies were sufficiently encoded, those who access them wouldn't know what info they contain, but, since the companies who store the cookies aren't all that concerned about your privacy, don't expect more than a token level of encryption. StuRat (talk) 06:35, 1 January 2013 (UTC)
- Let's just limit this to the methods available to the website. Is it true that some malicious code on a webpage can allow access to other sites' cookies?--Inspector (talk) 09:14, 1 January 2013 (UTC)
- StuRat is describing a special case of privilege escalation. If your system is working correctly, most software should not have the privilege to view your private web-data. However, malware often attempts to exploit user-error, or software bugs, to gain unauthorized privilege. One such example would be to gain read-permissions to cookies from other websites. A correctly-designed web browser does not permit such unauthorized access. Here is an overview of Internet Explorer 9's security features, explaining how privilege is managed and how the security model works for things like cross-domain requests. Nimur (talk) 09:19, 1 January 2013 (UTC)
- Okay. I write cookies all the time for my sites. Yeah. I guess if someone broke into your house and looked on your computer, they could see the cookie, too, but he really just asked if other sites, by default, can read them. And by default, they can't. Here's why:
- Inspector: if you visit a site, like google.com, the web page you're on can write a cookie to your browser either using JavaScript or by sending the cookie in the response header to any of the requests you send to the site. If you return to that page sometime in the future, your browser lets the site know it has a cookie from a previous visit. In other words, your own browser notices you're visiting google.com. It notices it has a cookie from google.com from another visit. It then decides, on its own, to tell Google.com that it has a cookie from that site and then tells google.com what the cookie contains.
- The cookies on your machine can be viewed easily by you. You said you use IE. So, if you're using Windows Vista or later, by default cookies are stored by IE in C:\users\[your user name]\AppData\Roaming\Microsoft\Windows\Cookies. If you're using Windows XP, they're stored in C:\Documents and Settings\[your user name]\Cookies. Each cookie is stored as a text file by IE. Other browsers offer ways for you to view their cookies easily, as well. So, anyone who has control of your computer can see the cookies. So, yeah, if you had a virus that had control of your computer, then they can see the cookie, too.
- In JavaScript, there's an object called document.cookie. So, you can set it. Here's a vast simplification of the code you need to write:
- document.cookie = [whatever you decide to type here];
- As I said, that's a simplification, because you need to include an expiration date. Here's a more detailed discussion: [5].
- Likewise, you can retrieve whatever is in document.cookie. But notice that it's specific to the current document. In other words, there is no such command in JavaScript for retrieving any cookies other than those for the document.
- If you want, you can set the cookie by telling the server to send it in the header using a language like PHP: [6]. Here's how that code would actually cause the server to behave: HTTP_cookie#Implementation.
- So, in short: No. A web site on its own cannot view cookies from other domains.—Best Dog Ever (talk) 09:44, 1 January 2013 (UTC)
- But this is sensitive to what constitutes a "domain". If the site x.y.z is allowed to set cookies for .y.z but not .z, then any UK business registered under co.uk can set cookies on .co.uk which will be visible to any other UK business site. Firefox had that problem until 2007. IE prevented it by an ad hoc rule, but the rule didn't cover .ltd.uk, for example. Modern browsers rely on hand-maintained lists of effectively top-level domains. -- BenRG (talk) 17:25, 1 January 2013 (UTC)
How to Count the number of References in TOTAL
Hi, I belong to the Silent Hill Wikia. I'm looking for a template (or any type of code) that can help with counting the total number of references on the wiki. What I mean by this is the sum of all the references that we have on each article, combined into a nice little number that we could possibly display on the front page (or at least use to keep track of my own articles). If there are simple tools to count the edit patterns of normal Wikipedia users, then I'm sure there must be something like this for articles. So instead of counting each article's references individually - which is a major pain - we want to have a good idea of how many references our wikia contains overall. This would be good for quality control. So could anyone please help? 92.22.68.237 (talk) —Preceding undated comment added 23:44, 1 January 2013 (UTC)
January 2
Model View Controller
Hi all, I've been working on a rare contribution to mainspace, and I'm hell peeved, because I've just found out it may be all wrong. I've been following a Stanford iTunes U series on iOS development, at [7], but this presents a version of the Model-View-Controller architecture that is very different to the one in our article. In the video, the Model and the View do not talk to each other, but only interact through the Controller. I have not found this design anywhere else; for example this describes a few different approaches, none of which involve separation of the View and the Model. Is the approach of the video, with strict separation between Model and View, a common variant? Does it have any special place among the different MVC design architectures? By "special", I mean in any way, so that I can characterise my contribution to the article, for example, it might be the latest approach, or an approach specific to Apple, or an approach that is a subclass of something more general. Any help appreciated, IBE (talk) 01:32, 2 January 2013 (UTC)
- Model-View-Controller implementation in Objective C tends to be very difficult to translate to other languages. This "specificity" of the MVC pattern can be even more pronounced if you're using Xcode's tools, like the interface builder. If you're struggling with the basic concepts of MVC design, the Objective-C version will also throw some curveballs at you: it will train you in using Objective C language features (like Protocols) that don't translate to, say, PHP. So, step back for a moment; review the abstract concept of MVC philosophy; and try to see how one particular language and toolset have adapted those design tenets. Then, step back again, and take a look at a totally different MVC design, like a data-driven web interface in PHP (say, MediaWiki). It is not trivial to see how these implementations share a common design. But, to some level of abstraction, they both follow a similar separation of concerns.
- As I left school and worked in real commercial software, I began to strongly view "MVC" as a software antipattern - an approach that should be avoided. Now, I'm not a UI designer, so my opinion counts for aught; but just recognize this: MVC is an approach that is interpreted, and misinterpreted, and used and misused, by thousands of different programmers and software designers. Sometimes, it gets pulled off in a stunning and simple way that makes me happy: encapsulation, separation of concerns, and all kinds of good, maintainable software. Other times, sticking to MVC as if the design is gospel forces a programmer to carry around three times as much code for a trivial task. After recognizing how frequently that occurs, I have come to appreciate the very very specific way that the Xcode toolchain imposes constraints on MVC. Interface Builder allows you to place controllers for UI elements, in specific ways, that mimic "textbook application" of the pattern; so when you start a new project, you quickly spot a lot of familiar-looking, consistently-named stuff. This forces developers to use, and not to abuse, the design pattern. Nimur (talk) 15:56, 2 January 2013 (UTC)
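For what it's worth, here is a tiny illustrative sketch (written in C++ rather than Objective-C, and not taken from the Stanford course or from our article) of the "strict" variant the question describes, where the Model and the View never reference each other and all traffic goes through the Controller:

    #include <iostream>
    #include <string>

    // Model: data and business rules only; knows nothing about any View.
    class CounterModel {
        int value = 0;
    public:
        void increment() { ++value; }
        int  get() const { return value; }
    };

    // View: presentation only; knows nothing about any Model.
    class ConsoleView {
    public:
        void render(const std::string& text) { std::cout << text << '\n'; }
    };

    // Controller: the only object that talks to both sides.
    class CounterController {
        CounterModel& model;
        ConsoleView&  view;
    public:
        CounterController(CounterModel& m, ConsoleView& v) : model(m), view(v) {}
        void userClickedIncrement() {
            model.increment();                                      // update the Model
            view.render("Count: " + std::to_string(model.get()));   // push new state to the View
        }
    };

    int main() {
        CounterModel model;
        ConsoleView view;
        CounterController controller(model, view);
        controller.userClickedIncrement();  // prints "Count: 1"
        controller.userClickedIncrement();  // prints "Count: 2"
    }

In the looser variants described in the linked article, the View would observe the Model directly; here that link simply does not exist.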
Karnaugh Maps and Gate optimization
Hello!
For fun, I am working on making a vending machine in Minecraft, using my newly learned knowledge of computer systems from a college course. As those of you familiar with this field will know, one method of figuring out a good optimization for logic gates is Karnaugh maps.
My maps involve 6 input variables, far more complex than the 4-input maps I would see at most in my course. I found that, depending on the order in which the rows and columns are arranged, you can end up with a really poorly optimized map, or a really good one.
http://i50.tinypic.com/2ly0t4g.png
Above is a picture of my work with optimizing. You can see that I made the purple output (the purple highlights all the places where output "3" is 1) as optimized as possible. It was a mess the way it was when I started! I would like to do the same for orange ("2"), but I think I have made it as optimized as it can be..... it's still so messy! :(
Does anyone have a better optimization idea? :) 172.162.57.6 (talk) 02:24, 2 January 2013 (UTC)
- Karnaugh maps aren't limited to factorization in just four variables; they should reduce to the minimum logical complexity for all cases in any number of variables, always by factoring out any redundancy in any term. It's possible that your algebraic statement in six variables just doesn't have much redundancy... have you found a more optimal factorization of your expression using a different method? If so, are you sure you're using the rule correctly when wrapping around edges of the map? Nimur (talk) 15:40, 2 January 2013 (UTC)
Maybe we should look at this more directly. I have been doing a little research and I found this:
http://www-ihs.theoinf.tu-ilmenau.de/~sane/projekte/karnaugh/embed_karnaugh.html
... It reports that an optimized formula (there may be more than one formula, but none are less complex) is:
/C1&/S3&S2+/C2&/S3&S2+/C1&/S4&S2+/C2&/C1&S2+C1&/S4&S3&/S2+/C2&C1&S3&/S2+C2&C1&/S3&/S2+C2&C1&/S4&/S2+C2&/C1&S4&S3&/S2
(I guess for this tool, "/C1" means "Not C1", and they redundantly use the AND symbol... but this is fine...) I don't think there is any better way, is there? I think my function just happens to be that messy. :\
However... I am looking into solving these issues algebraically, as this seems a reasonable route, but it wasn't introduced in my class.
172.162.6.57 (talk) 20:31, 2 January 2013 (UTC)
- You may be able to reduce the gate count algebraically, but the K-map solution minimizes the depth, which gives you lower propagation delay. It also is simpler to synchronize so you don't get spurious intermediate results. 209.131.76.183 (talk) 13:37, 3 January 2013 (UTC)
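In case it helps with sanity-checking, the minimized sum-of-products quoted above transcribes directly into code; this is just a mechanical translation ("/X" as NOT, "&" as AND, "+" as OR), with the function name and argument order chosen arbitrarily:

    #include <iostream>

    // Direct transcription of the tool's minimized formula.
    bool output(bool c1, bool c2, bool s2, bool s3, bool s4) {
        return (!c1 && !s3 &&  s2)
            || (!c2 && !s3 &&  s2)
            || (!c1 && !s4 &&  s2)
            || (!c2 && !c1 &&  s2)
            || ( c1 && !s4 &&  s3 && !s2)
            || (!c2 &&  c1 &&  s3 && !s2)
            || ( c2 &&  c1 && !s3 && !s2)
            || ( c2 &&  c1 && !s4 && !s2)
            || ( c2 && !c1 &&  s4 &&  s3 && !s2);
    }

    int main() {
        // Example evaluation for one input combination.
        std::cout << std::boolalpha << output(false, false, true, false, false) << '\n';  // true
    }

Comparing such a transcription against a brute-force evaluation of the original truth table is a quick way to check that a hand-optimized map is still equivalent.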
The sort button on youtube is no longer there
I looked in the filters and it's no longer there. Please send me a link to it. 03:01, 2 January 2013 (UTC) — Preceding unsigned comment added by Venustar84 (talk • contribs)
This has already been answered. :)
See the post on January 1, regarding missing "sort by" on youtube. 172.162.57.6 (talk) 03:13, 2 January 2013 (UTC)
- The "Filter" button is not displayed when you open the YouTube page. You need to first perform a search, and then the Filter button appears, just below the search box. Click it, and the options "Sort by Relevance, Upload date, View count, Rating" appear. --NorwegianBlue talk 14:52, 2 January 2013 (UTC)
The search options on youtube have changed
I have gone into the search filters but the sort options have changed: http://www.youtube.com/results?search_query=star&oq=star&gs_l=youtube.3..35i39l2j0l6j0i3j0.1453.2474.0.3531.4.4.0.0.0.0.846.1355.0j1j0j1j6-1.3.0...0.0...1ac.1.uh9d0V7pdik The search options now seem to be these:
Upload date: Anytime, Today, This week, This month
Result type: All, Videos, Channels, Playlists
Duration: All, Short (~4 minutes), Long (20~ minutes)
Features: All, HD (high definition), Closed captions, Creative Commons, 3D, Live
And not this anymore: "Sort by Relevance, Upload date, View count, Rating". Does anyone know how I can use the old search options? Thanks! Venustar84 (talk) 17:58, 2 January 2013 (UTC)
- Here's what I get with Firefox 17.0.1 when clicking the Filter button:
Upload Date: Last hour, Today, This week, This month, This year
Result Type: Video, Channel, Playlist, Movie, Show
Duration: Short (~4 minutes), Long (20~ minutes)
Features: HD (high definition), CC (closed caption), Creative commons, 3D, Live
Sort by: Relevance, Upload date, View count, Rating
- However, when I tried it using Internet Explorer 9 and Chrome (23.0.1271.97), the "Sort by" section was indeed missing. Since my language settings in Firefox were US English, and the settings in IE and Chrome were Norwegian, I tried switching the language settings in IE and Chrome to US English. The "Sort by" section then appeared. So, if your browser settings are anything else than US English, I suggest that you try switching to US English. Hopefully, the problem will then disappear. --NorwegianBlue talk 21:33, 2 January 2013 (UTC)
Persistent connection problems
I have been having persistent connection problems with the Internet since mid-November. This is most prominent with Wikipedia. I can view Wikipedia all OK, but trying to edit it mostly fails. Clicking on any button that sends an HTTP POST request (as opposed to clicking on a wikilink and sending an HTTP GET request) sends a couple of kilobytes, then the entire connection falls silent, causing Firefox to time the connection out, and no edit happens. This happens daily now. There are sporadic ten-to-twenty-minute intervals when HTTP POST requests work all OK, and I can edit Wikipedia as normal. Then the problems resume. This is not limited to Wikipedia - trying to even view Suomi24, much less write to it, was a huge pain, because requests to the ad servers weren't getting through. That problem was solved when I made Firefox block cookies from the ad servers. Facebook and Internet forums work OK, at least as long as I only submit text. I haven't tried uploading images yet. This is not just a WWW problem, as sending e-mail only sends about 10 kilobytes, then the connection falls silent. As a result, I can't send any e-mail over 10 kilobytes. This just happened in mid-November and has been going on ever since. What could possibly be the cause of this? Is anyone else experiencing this? JIP | Talk 18:41, 2 January 2013 (UTC)
- Firstly, the usual suspects: does this happen on multiple machines? Does it happen on a machine with a clean, unmodified Linux install? Does it happen when the connection to the router is wired, or only wireless? Have you tried another NIC? Have you tried another router? Consumer-grade routers are, in my experience, junk, and I've seen some recently that are sporadically misbehaving due to cheap capacitors dying. Only once you've tried all of that is it worthwhile worrying about whether the upstream connection is bad. This entertaining blog post discusses a real upstream failure in a very odd scenario; I doubt you're having the same thing, but the steps they go through to isolate their problem could be instructive for you. -- Finlay McWalterჷTalk 18:59, 2 January 2013 (UTC)
- That's a fascinating detective story, though I have to admit that I don't understand all the details. I regularly have an "edit" failure when trying to edit Wikipedia, but in my case I blame the complex series of microwave links between my house and the ISP (sometimes it fails on download, too). I don't seem to have the same problem sending files via FTP. I don't have the expertise to identify the exact location of the problem, but there are some very long "ping" and "tracert" delays when my internet connection is misbehaving. Dbfirs 19:58, 2 January 2013 (UTC)
- That edit worked OK! Dbfirs 19:59, 2 January 2013 (UTC)
- What is the manufacturer and model of your modem / router? The modems ISPs give out these days are often really terrible. I once had a router that would drop any TCP connection if it was idle for more than a few seconds. This caused, among other things, the kind of problems you described: the computer would make an HTTP POST, and while the web server was processing my request, the modem would just drop the connection because it thought it had "died". Investigate the web interface of the modem, and try to troubleshoot the problem from there. If possible, put the router in "bridged mode", which disables all but the data link layer functionality of the modem. This worked in my case. --hydrox (talk) 10:21, 3 January 2013 (UTC)
How much passive cooling does RAM need during sleep?
I have a new "small form factor" PC which lives in a cupboard under my desk (my room is extremely small, so it would be awkward to have it out in the open). The case is a positive pressure design which pushes cool room air in from the front (I leave the cupboard door open when the computer is in use). The cupboard has an open back half an inch from the wall and lots of space on top of the PC. The CPU and GPU temps seem fine so far in use (this bit is to reassure the people who will inevitably say "your PC shouldn't be in a cupboard").
I am however a little wary of using sleep as opposed to hibernate. I have found plenty of websites that say that you don't need active cooling during sleep as the passive cooling is sufficient, but will I run into problems with the RAM overheating if passive cooling is significantly reduced (i.e. with the cupboard door closed)? Thanks in advance Equisetum (talk | contributions) 16:36, 3 January 2013 (UTC)
Multimedia Presentation Tool Recommendation
Hi all, I will be presenting a group skit/play for a competition sometime in January. Last year, my team won in part by including a "live Twitter feed" of characters in the play that was really a collection of Photoshopped images in a powerpoint presentation. Looking to take it to the next level this year, I was hoping for something that could serve as an "information dashboard", hopefully including a couple modules, like the stock price of the fictional company in which the play is set, a news ticker, maybe some video, and if possible a way to live poll the audience from their smartphones and laptops (I know this is probably too much to ask, but hey, why not try?). Can anyone think of a way to do this? I'm considering building a webpage to run locally off of my laptop connected to a projector, but I'm not sure if I could do it, or if it's even possible.
Tl;dr: Anyone know a way to make a presentation to run during a skit that supports updating images, text, and video? Thanks, 99.224.140.65 (talk) 16:39, 3 January 2013 (UTC)