Wikipedia:Reference desk/Computing

Welcome to the computing section
of the Wikipedia reference desk.
Want a faster answer?

Main page: Help searching Wikipedia

How can I get my question answered?

  • Select the section of the desk that best fits the general topic of your question (see the navigation column to the right).
  • Post your question to only one section, providing a short header that gives the topic of your question.
  • Type '~~~~' (that is, four tilde characters) at the end – this signs and dates your contribution so we know who wrote what and when.
  • Don't post personal contact information – it will be removed. Any answers will be provided here.
  • Please be as specific as possible, and include all relevant context – the usefulness of answers may depend on the context.
  • Note:
    • We don't answer (and may remove) questions that require medical diagnosis or legal advice.
    • We don't answer requests for opinions, predictions or debate.
    • We don't do your homework for you, though we'll help you past the stuck point.
    • We don't conduct original research or provide a free source of ideas, but we'll help you find information you need.



How do I answer a question?

Main page: Wikipedia:Reference desk/Guidelines

  • The best answers address the question directly, and back up facts with wikilinks and links to sources. Do not edit others' comments and do not give any medical or legal advice.

January 16

Software

When a courier guy delivers the package, he asks me to sign on his phone. And automatically this gets updated on their website : as package delivered. What is the software they use? — Preceding unsigned comment added by Stalson92 (talkcontribs) 02:28, 16 January 2016 (UTC)[reply]

@Stalson92: Since you haven't told us which courier company you're thinking of, I don't see how we could definitively answer this question. There are thousands of courier companies in the world. Perhaps the best course of action for you would be to just call the courier company and ask them. You may need to speak to a supervisor or manager but someone there should know the name of the software. Dismas|(talk) 14:39, 16 January 2016 (UTC)[reply]
FedEx — Preceding unsigned comment added by 103.60.68.122 (talkcontribs) Stalson92 (talk) 15:40, 16 January 2016 (UTC)[reply]
Usually, those companies are large enough to develop their own software. It may have an internal name - but I doubt that'll help you much. SteveBaker (talk) 18:04, 16 January 2016 (UTC)[reply]
Even with the OP's clarification of the company, they still didn't mention a location (geolocation gives India).

I wonder whether FedEx always use the same software anyway, or just require it to be able to communicate with their servers via their standard API. While I've only ever received one package from FedEx here in NZ, their system seemed to be similar to what many courier companies here use now (and have been using for a few years). I didn't sign on a phone but on some specialised terminal.

In some countries and places (including I believe some places here in NZ), FedEx do not have their own drivers but instead rely on deals with other courier companies. It's possible they still require the use of specialised FedEx devices, but it would seem more likely they're fine with these drivers using their own devices, which either communicate with FedEx via their API or communicate with the driver's courier company which then connects to FedEx.

I guess the phone thing is useful for temporary drivers and similar, and perhaps also where they only want to provide one device (particularly in the developing world where they need to keep costs down and can't assume everyone will have a phone for contact). But even then, I wonder if there are circumstances where they don't use the same software.

And we can't even be sure that the software is universally used, e.g. if it's in English perhaps the Chinese or whatever division found it easier to develop their own software (if they have any) rather than translate it. Particularly if other features don't work well in China (or wherever) for whatever reason.

I suspect the phones the OP referred to were Android; while porting Android apps to Windows is often not that hard (as I understand it), again it may be that someone would find it easier to start from scratch. Windows phones aren't really that popular anywhere, but if the company is providing the phones it's always possible they could choose Windows ones for whatever reason.

Nil Einne (talk) 12:07, 18 January 2016 (UTC)[reply]

Pseudocode mod of a negative integer

In pseudocode, is the mod of a negative number >= 0 (as in mathematics)? In some languages, -7 mod 5 returns -2 instead of +3. Bubba73 You talkin' to me? 02:35, 16 January 2016 (UTC)[reply]

Pseudocode isn't well enough standardized for this question to have a standard answer. As with any other mathematical notation, the author should clarify the meaning where it's ambiguous. Standard mathematical notation has no mod operator. One normally writes x ≡ y (mod p), not, e.g., (x mod p) = (y mod p). -- BenRG (talk) 02:44, 16 January 2016 (UTC)[reply]
The book has an appendix about its pseudocode, but it doesn't say. I'm going to assume that the result is non-negative, because a negative value doesn't seem to do any good. Bubba73 You talkin' to me? 02:57, 16 January 2016 (UTC) [reply]
Resolved
Real programmers rarely (if ever) use pseudocode - and then only where there is a gun being held to their heads. It's one of those things like flow charts and UML diagrams that academics fondly imagine should be a good idea and spend endless time getting excited about. In practice, experienced people can read real code (in their preferred language) much more easily than pseudocode - so we just write things in real code to start with and save an unnecessary step. In practical situations, it's almost always the case that the pseudocode contains bugs that aren't noticed until the real implementation is working (because there is no way to compile and test the pseudocode) - and then nobody goes back to fix the pseudocode and it's immediately rendered worse than useless.
That said...
In the unlikely event that you're trying to describe an algorithm in pseudocode without specifying an implementation language - then you'd need to be super-careful about things like that. IMHO, if you need to be portable and you are planning to take the MOD of a negative number - then you should explicitly test for a negative input, and in that case calculate mod(-x) and handle the result accordingly. This makes the process entirely explicit and whoever is given the thankless task of converting pseudocode into real code can choose to optimise this (or not) depending on what their underlying language supports. Either way, a bloody great comment describing what is intended and why - and warning of the anticipated portability issues - is absolutely required here! SteveBaker (talk) 18:01, 16 January 2016 (UTC)[reply]
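A minimal sketch of that "make it explicit" approach, written here in C++ purely as an illustration (the helper name and test values are not from the book):

#include <iostream>

// C++'s built-in % truncates toward zero (guaranteed since C++11),
// so -7 % 5 is -2.  This helper shifts negative remainders back into [0, m).
int mod_nonneg(int x, int m)      // assumes m > 0
{
    int r = x % m;                // may be negative when x is negative
    return (r < 0) ? r + m : r;
}

int main()
{
    std::cout << (-7 % 5) << '\n';           // prints -2 (truncating division)
    std::cout << mod_nonneg(-7, 5) << '\n';  // prints 3 (the mathematical result)
}

Languages genuinely differ on this point (some take the sign of the result from the divisor, others from the dividend), which is exactly why pseudocode that applies mod to negative values needs a comment like the one described above.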
Pseudocode is common in computer science papers. It is usually block-structured like popular programming languages, though you do sometimes see flowcharts. It is useful in the same way as other mathematical notation. It is generally easier to understand an algorithm from a pseudocode description than from a plain-English description or an executable implementation. -- BenRG (talk) 20:33, 16 January 2016 (UTC)[reply]
I was trying to translate some pseudocode in a book to a working program. (I found what I think is a typo; after fixing that and making the result of modulo >= 0, it works.) Speaking of flowcharts, I used them routinely in real programming when I used non-structured languages (e.g. spaghetti Fortran, Basic, and a little Cobol). But when I went to structured languages in 1981, flowcharts no longer applied. Bubba73 You talkin' to me? 21:29, 16 January 2016 (UTC)[reply]
I am a real programmer and, contrary to Steve's experience, I have often found pseudocode a useful tool when drafting the basic algorithm for a procedure. I find it very convenient to write something like
    while any moles are visible {
       find nearest mole to hammer
       move hammer over mole
       whack it
    }
and then translate that into real code. A real-life example would be more complicated than that one, but still no more than around 10-15 lines of pseudocode. --76.69.45.64 (talk) 07:43, 17 January 2016 (UTC)[reply]
I agree. Bubba73 You talkin' to me? 23:06, 17 January 2016 (UTC)[reply]
Agreed as well. Though in my case, I use it more like a note that I need to come back to write code there, because I'm already in the middle of writing something else. --Wirbelwind(ヴィルヴェルヴィント) 19:01, 20 January 2016 (UTC)[reply]

GIMP 2 and animated GIFs

[Images posted with this thread: "Passive permit traffic signaling in Göttingen, Germany"; "pictures from camera"]

I need help understanding some options for saving animated GIFs. In File + Export, with GIF selected as the output type, there are the following options dealing with animation:

Delay between frames where unspecified: ____.
□ Use delay entered above for all frames.

So, does this mean checking the box overrides the "where unspecified" part above ? If not, what does it do ? And is there a way to see which frames have their own delays specified and change them individually ? StuRat (talk) 03:27, 16 January 2016 (UTC)[reply]

GIMP converts the layers of the picture into the frames of an animated GIF. The delay is the frame rate of the flip book.[1] "Use delay entered above for all frames" applies a constant flip rate to all layers. GIF is a compressed file format; if nothing changes for a longer time, the best compression is not to flip to a new frame at all, but to choose a longer delay before flipping to the next one. --Hans Haase (有问题吗) 12:03, 16 January 2016 (UTC)[reply]
I have only used GIMP occasionally, but here's what I remember. When opening or saving GIF files, each GIMP layer is considered a GIF frame. The frame duration is specified in the layer name in parentheses: GIMP Tutorials - Simple Animations shows examples like (1000ms) and (1500ms). Based on that, I interpret the options above to mean:
  • If the layer name doesn't specify a duration in parentheses, use this default duration: ____ milliseconds.
  • □ Ignore durations specified in the layer names and always use the default duration above.
You can probably do a test to confirm that's how they behave. --Bavi H (talk) 19:31, 16 January 2016 (UTC)[reply]
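To make the layer-name convention concrete, a layer stack might look like this (this assumes GIMP's usual "(Nms)" suffix parsing described in the tutorial above; the frame names themselves are arbitrary):

    Frame 3 (250ms)     <- shown for 250 milliseconds
    Frame 2 (1000ms)    <- shown for 1 second
    Frame 1             <- no delay given, so the export dialog's default applies

If the interpretation above is right, ticking the box would then make all three frames use the default delay, ignoring the 250ms and 1000ms values.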

Thanks for the info ! StuRat (talk) 22:04, 18 January 2016 (UTC)[reply]

Futuristic font(s)

I'm searching for a 'list' of futuristic fonts for MS Office/Word. Something that actually 'looks' futuristic, not just something 'entitled' futuristic. What do you guys recommend? -- Mr. Zoot Cig Bunner (talk) 20:00, 16 January 2016 (UTC)[reply]

Just try a Google image search for 'futuristic fonts.'--3dcaddy (talk) 20:48, 16 January 2016 (UTC)[reply]
Greetings Russell.mo, At the website www.dafont.com, a search for cyber fonts will give a good selection. Regards,  JoeHebda (talk)  17:14, 17 January 2016 (UTC)[reply]
Anyway,
3dcaddy: Checked before posting!... Thank you.
JoeHebda: I've bookmarked it. Since it's a huge list, I'll look at it hopefully in the near future... Thank you.
Any recommendation guys? For example, something that will never 'go old' or 'die out'. Something that will last even after the death of our solar system...? E.g., the font "Times New Roman" is a goldie - although the "Arial" font has been at its peak for some time/years, Times New Roman font is still posh/graceful/historic/modern...if you know what I mean. I tried using Times New Roman font, but it lacks the 'forward looking' or 'futuristic/future looking' bit, if you know what I mean. -- Mr. Zoot Cig Bunner (talk) 19:04, 18 January 2016 (UTC)[reply]
3dcaddy, JoeHebda: Sorry about that guys, I did not mean to come across as rude...I used strikethrough now. Regards. -- Mr. Zoot Cig Bunner (talk) 19:28, 19 January 2016 (UTC)[reply]
If you want something that will never go old or die out and will last even after the death of our solar system, your best bet is to not write anything and just imagine/dream that you have a font that will last that long (even though we have no reason to think our current alphabet and language will last that long). Nil Einne (talk) 21:17, 18 January 2016 (UTC)[reply]
Nil Einne: Even better, then it'll look like ancient, historic, and futuristic - Alienatic - something like the Egyptian rubbish i.e. aliens came down to earth and bla, bla, bla...
I've already used my imagination. I have a logo and a name - my drawing covers it nicely, but I don't want my handwriting and drawing around the universe. My father is still not permitting it, so I'm daydreaming and staying prepared...having fun in other words. -- Mr. Zoot Cig Bunner (talk) 19:28, 19 January 2016 (UTC)[reply]

January 17

Difference between ? and * in cron

At work, I have to use a .NET scheduler library that uses Unix cron expressions for scheduling. The documentation tells me that "?" means "any day/month" and "*" means "every day/month". What is the difference between these? Could anyone give a concrete example where "?" and "*" cause different schedules? Or do such conditions even exist? JIP | Talk 20:46, 17 January 2016 (UTC)[reply]

I looked in the Solaris, FreeBSD, and Linux cron(8) and crontab(5) man pages, and I don't see any of them ascribing any meaning to a ? wildcard. This seems to be specific to the fakey crontab your specific .NET thing does. -- Finlay McWalterTalk 22:31, 17 January 2016 (UTC)[reply]
Indeed; the same applies to GNU mcron; the '?' symbol is not a documented feature of either type of cron syntax supported by the GNU variant. Your best bet is to check the documentation that specifically applies to your scheduler tool or library, because it evidently uses its own variation of the standard syntax. Nimur (talk) 22:57, 17 January 2016 (UTC)[reply]
I don't understand it either. Usually a "?" wildcard in a string means any ONE character whereas "*" means any string of characters. Bubba73 You talkin' to me? 23:30, 17 January 2016 (UTC)[reply]
Well, "usually on Windows"... but even then, use caution: '?' and '*' wildcard behavior varies wildly between the ordinary Windows command interpreter, the Windows Power Shell, Windows Power Shell "cmdlets", and third-party programs that run on Windows. Here's an MSDN blog on the difference in wildcard expansion between Windows and unix...; and even that doesn't mention the special case of cron syntax.
For what it's worth, our article on cron mentions that '?' expands differently on nncron, a proprietary freeware variant of cron developed for Windows. It even self-describes its '?' expansion as "non-standard." Nimur (talk) 02:32, 18 January 2016 (UTC) [reply]
cron syntax isn't a special case of shell filename globbing. It's a totally different thing. I guess it's possible that shell syntax influenced cron's use of * for "any value", but *2 doesn't mean any value ending in 2, etc. I don't know why Bubba thought there was any connection. -- BenRG (talk) 04:10, 18 January 2016 (UTC)[reply]
I thought so because of Wildcard character, but I admit that I don't know anything about cron. Bubba73 You talkin' to me? 04:47, 18 January 2016 (UTC)[reply]
On NetBSD UNIX, the crontab format is explained in crontab(5), i.e. "man 5 crontab", and this version does have a meaning for "?". It means that when reading the crontab file, cron is to select any one value randomly from the permitted ones. So for example "20 ? * * *" specifies that the job is to run once a day at 20 minutes past some hour: for example, it might be at 3:20 or 7:20 or 23:20. "? ? * * *" would run it once a day at a random time. Also, "?" can be followed (without spaces) by a range, which asks for a random selection from that range. "?40-42 * * * *" would run the job once an hour at either 40, 41, or 42 minutes past the hour.
Note that if cron ever rereads the crontab file, a new random selection will be made. So if you edit the crontab file, even if you did not change the entry where "?" was used, then this might cause your "daily" job to switch to a different time, perhaps causing it to execute a second time during the same day/hour/etc. (if it switched to a later time), or not at all one day (if it switched to an earlier time). Similarly if cron had to be restarted for some reason.
The man page points out that this feature is useful if the same crontab entry is used on a large number of machines and would cause each of them to connect to the same host, for example. --76.69.45.64 (talk) 05:26, 18 January 2016 (UTC)[reply]
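For what it's worth, if the .NET library happens to use Quartz-style cron expressions (Quartz.NET is a widely used .NET scheduler, though the question doesn't name the library), '?' means "no specific value" and is only accepted in the day-of-month and day-of-week fields. The fields there are sec min hour day-of-month month day-of-week, and the annotations below are explanatory, not part of the expressions:

    0 15 10 ? * MON-FRI     (10:15:00 every weekday; '?' = no particular day-of-month)
    0 0 12 1 * ?            (12:00:00 on the 1st of each month; '?' = no particular day-of-week)

In that dialect '?' and '*' describe the same firing times; '?' is just the conventional placeholder for whichever of the two day fields you are not constraining, which would explain documentation that distinguishes "any" from "every" even though the schedules never differ. Again, this only applies if your scheduler really does use Quartz-style syntax.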

January 18

Travelling with a Google Chromecast

Will a UK-bought Chromecast still have access to UK-licenced content in other countries? Thanks! 94.12.81.251 (talk) 13:46, 18 January 2016 (UTC)[reply]

Probably not. Much regionally licensed content is filtered by IP location. If your Chromecast is in e.g. Brazil, it will have a Brazilian IP, and as such you have no clear rights to UK content. SemanticMantis (talk) 17:55, 18 January 2016 (UTC)[reply]
In setup, you choose a language. With the YouTube app, possibly the URL of the video is only forwarded to the Chromecast once the connection to the Android phone is established. --Hans Haase (有问题吗) 20:39, 18 January 2016 (UTC)[reply]
As SemanticMantis has hinted at, it would be more useful to ensure your ChromeCast connects via proxy than it would be to buy your Chromecast in the UK. Actually, I strongly suspect it doesn't matter where you buy it. Nil Einne (talk) 21:14, 18 January 2016 (UTC)[reply]

Partial specialization on a member class of a class template

I have a (third-party) broken iterator:

template<class T> struct Container {
  struct iterator {/*...*/};
  //...
};

It's broken in that it does not define the typedefs for std::iterator_traits. I can specialize std::iterator_traits myself to supply the correct (and obvious) types, but how? Writing

namespace std {
  template<class T>
  struct iterator_traits<typename Container<T>::iterator> {/*...*/};
}

fails because T is not in a deduced context. Of course, if Container::iterator were a typedef (not a member class), and/or if Container was subject to partial specialization itself, then the compiler might have quite a time determining if each subsequent instantiation of iterator_traits matched the "pattern". But the only simpler way to refer to that nested class than to give the canonical name of its containing class would be to use an alias template, which g++ 4.9.2 rejects on the same grounds. Is it simply impossible to produce such a partial specialization? --Tardis (talk) 17:13, 18 January 2016 (UTC)[reply]

I think it's impossible. The top two answers of this question may work for you: either fully specialize iterator_traits for all the contained types that you plan to use, or define an adaptor template with the appropriate typedefs. -- BenRG (talk) 20:00, 18 January 2016 (UTC)[reply]
Further searching (based on the idea that this is a general limitation of template deduction) finds the frustrating result that g++ used to do this but stopped doing so in order to adhere to the standard. It's obvious that—when the nested name is constrained to be the real name of the type—the deduction is technically possible to implement, but the standard simply says no. --Tardis (talk) 00:32, 19 January 2016 (UTC)[reply]
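For reference, a minimal sketch of the two workarounds mentioned above (the name FixedIterator and the forward_iterator_tag category are placeholders, as is the stand-in Container; substitute whatever the real iterator actually supports):

#include <cstddef>
#include <iterator>

template<class T> struct Container {
  struct iterator {/*...*/};
  //...
};

// Option 1: fully specialize iterator_traits for each concrete contained type
// you actually use.  No deduction is needed, at the cost of one specialization
// per type.
namespace std {
  template<>
  struct iterator_traits<Container<int>::iterator> {
    typedef std::forward_iterator_tag iterator_category;  // assumed category
    typedef int                       value_type;
    typedef std::ptrdiff_t            difference_type;
    typedef int*                      pointer;
    typedef int&                      reference;
  };
}

// Option 2: a thin adaptor that carries the typedefs itself and otherwise
// behaves like the third-party iterator; every iterator obtained from the
// container has to be wrapped at the call site.
template<class T>
struct FixedIterator : Container<T>::iterator {
  typedef std::forward_iterator_tag iterator_category;    // assumed category
  typedef T                         value_type;
  typedef std::ptrdiff_t            difference_type;
  typedef T*                        pointer;
  typedef T&                        reference;
  FixedIterator(typename Container<T>::iterator it) : Container<T>::iterator(it) {}
};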

NuGet question

My company manages most of its internal projects with NuGet packages. This goes so deep down that different .NET C# class libraries within the same product reference each other through NuGet. So I have found out that if I have to make changes to a library referenced by other class libraries, I have to first build it, then make and publish a new NuGet package of it, and then update all the references. When I say "publish", I mean we only update our internal NuGet repository. The packages only ever go to our company internally and our customers. But I find it a hassle to go through all this every time I want to modify the code in a class library. Is there an easier way to test the changes, before making an official update? My boss suggested temporarily removing the NuGet reference and replacing it with a direct reference, then undoing all this and putting the NuGet reference back in. But I find this a hassle too. Is there any easier way? JIP | Talk 20:58, 18 January 2016 (UTC)[reply]

Does having a phone service reduce available ADSL bandwidth?

My broadband relies on copper telephone wire. Does having telephone service take some of the bandwidth away from the Internet connection? Could I ask for the phone service to be disabled and get slightly faster Internet? --78.148.108.55 (talk) 21:39, 18 January 2016 (UTC)[reply]

ADSL and PSTN coexist; they use different frequencies, so they don't "use up" each other's bandwidth. Vespine (talk) 22:01, 18 January 2016 (UTC)[reply]
That's the theory, and is probably true in Edinburgh. In practice, if you are at the end of a six-mile stretch of copper, a phone call can make the ADSL stop working completely (for reasons other than bandwidth). It's worth asking your ISP. Dbfirs 23:00, 18 January 2016 (UTC)[reply]
I don't dispute what you're saying (it's common knowledge that ADSL's performance drops rapidly with distance), but this is kind of orthogonal to the original question. ADSL and phone service use different frequencies (the impetus for designing ADSL was to transmit data over existing phone connections by using frequencies above the voiceband frequencies), so disabling the phone service isn't going to add additional bandwidth to the ADSL service. The issue you brought up is interference between phone and ADSL transmissions, but unless you're using the phone, nothing is being transmitted over the phone connection. So, if you don't notice any impact on your ADSL connection when you're using the phone, cancelling the phone service isn't going to improve your ADSL connection. --71.119.131.184 (talk) 00:42, 19 January 2016 (UTC)[reply]
Is it even possible to cancel phone service (as opposed to just unplugging the handset) and keep ADSL? Always thought a phone line has to be live with a number assigned to it in order to carry data. 94.12.81.251 (talk) 11:44, 19 January 2016 (UTC)[reply]
No, it doesn't. Naked DSL is fairly common in some places. In the past, some ISPs even recommended it for VDSL2 in NZ. The phone line does need to be live of course, but there's no need for a PSTN service to be tied to it. One advantage of removing the PSTN service is that, presuming you're allowed to by your telco and building owner, you can remove any extraneous wiring except the run to the DSL jackpoint. You can do something similar by installing a masterfilter at the phone line entry, with a dedicated connection from the masterfilter to the DSL jackpoint; but how common these are varies from country to country and the work may be a little more involved (depending on your existing wiring and how far you want to go). See e.g. [2]. Nil Einne (talk) 13:41, 19 January 2016 (UTC)[reply]
Many years ago, when I had a DSL connection, I was getting a poor error rate (not quite the same thing as poor bandwidth - but the effect was much the same) - and AT&T advised inserting some filters on the phone line. They supplied them for free - and it did fix the problem. I guess the problem is that while a phone shouldn't generate the higher frequencies that DSL uses, old-school phones aren't always that well designed - so maybe it does anyway. Tossing in a low-pass filter shouldn't have a downside - and it can definitely help. SteveBaker (talk) 18:32, 19 January 2016 (UTC)[reply]

January 19

IPv6

It is well-known that the 32-bit IPv4 ran out of addresses. But IPv6 jumps to 128 bits. Wouldn't 64 bits be enough, at least for an extremely long time? That would give each person on Earth more than 2 billion IP addresses. Bubba73 You talkin' to me? 03:29, 19 January 2016 (UTC)[reply]

IPv6 is not designed to be "exhausted". As in, someone gets address 1, followed by address 2, followed by address 3.... etc.... Having a much larger address space allows addressing strategies to be implemented which are not possible with IPv4, this is mentioned here IPv6#Larger_address_space. Vespine (talk) 03:42, 19 January 2016 (UTC)[reply]
Yes, but my point is that seems like overkill. Wouldn't 64 bits be enough for a very long time? Bubba73 You talkin' to me? 03:44, 19 January 2016 (UTC)[reply]
It depends on how it's used. Even if every light switch in your house gets its own IP address, that won't come anywhere near using them all up, but they might use a system that isn't all that efficient as far as assigning every address. For example, the first of the 8 parts might be for the nation, the 2nd part for the state or province, the 3rd for the county, etc., the 4th for the IP provider, the 5th for the institution or business, the 6th for the individual location or homeowner, the 7th for a particular local area network at that location, and the 8th for a device on that network. (I have no idea if this is how they are actually allocated, this is just an example.) So, while this isn't very efficient as far as percentage of IP's used, it is highly efficient at being able to quickly find info, like the IP provider, from the IP address directly. A similar example is a car's VIN, which is longer than it would need to be, if it was a simple serial number. But then, if it was, you couldn't find out much of anything without looking that serial number up in a database. StuRat (talk) 09:25, 19 January 2016 (UTC)[reply]
Wait, so this means the final sad end to power-cycling my router to get around paywall article limits? :-) 94.12.81.251 (talk) 11:29, 19 January 2016 (UTC)[reply]
Not sure why there's all this theoretical talk. It's not like we're still in 1997. As our article says, the standard IPv6 subnet is /64. That means it doesn't matter whether you want an IP for every electrical and electronic device in your house; or just to your phone and computer. You should still get a /64 subnet at minimum to play around with. Most commonly, the way IPv6 is assigned, even if only your router (or some other single device) is going to get its own IPv6 address, you'll still end up with a /64 subnet. In fact, anyone who may ever want it, e.g. an office or simply a sophisticated home user will probably be assigned multiple subnets (perhaps a /48), to make things easier. We still get 18,446,744,073,709,551,615 subnets (less due to the various reserved etc) so it probably isn't an issue. However, if IPv6 were 64-bit and we used the same scheme, we're not really that much better off than we are with IPv4 (we'd have 4,294,967,295 subnets less reserved etc rather than addresses). I'm not sure if a 48/16 scheme would be that much worse than 64/64, however I'm not sure it's really that much better either. I presume this sort of thing is at least partly what Vespine was referring to. There's some more discussion [3], [4] & [5]. Nil Einne (talk) 13:22, 19 January 2016 (UTC)[reply]
Actually, on second thoughts, 48/16 would likely be a bit limiting. While 65535 hosts would be enough for most use cases, there are surely some cases when it's too few. However, if you used 48/16 under a similar scheme to the way IPv6 works, you'd need those hosts to be in more than one subnet. Plus it makes IPv6 address#Stateless address autoconfiguration more difficult due to the significantly higher risk of collisions (and inability to simply use something like the MAC address). I guess you could use 96 bits and 48/48 or perhaps 64/32, but I wonder even more how much advantage there is over 64/64. Nil Einne (talk) 15:28, 19 January 2016 (UTC)[reply]
I probably should add that many people question the wisdom of giving out only a single /64 subnet to even ordinary home customers and suggest /56 or /48 be the default to all customers (and I think this is also what RIR assignment policies and RFCs generally suggest or assume). See e.g. [6] [7] [8]. I think the takeaway message from all this is that IPv6 is intended to move away from the idea of IPs being a scarce resource that need to be conserved (to be fair IPv4 didn't really start off like that either even if it was like that by the time IPv6 was being worked on let alone now); to the mentality that if there's any reasonable possibility they may be needed, they should be assigned to ensure routing etc works properly and in particular, to prevent incorrect usage such as effectively further subnetting a /64. Nil Einne (talk) 17:22, 19 January 2016 (UTC)[reply]
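To make the subnet arithmetic concrete, using the 2001:db8::/32 prefix reserved for documentation: a single /48 assignment to one site splits into 65,536 /64 subnets, and every /64 leaves 64 bits for interface identifiers:

    2001:db8:1234::/48          one site assignment
    2001:db8:1234:0000::/64     subnet 0 of that site
    2001:db8:1234:0001::/64     subnet 1
    ...
    2001:db8:1234:ffff::/64     subnet 65,535 (2^16 subnets, each with 2^64 interface IDs)

which is why the discussion above counts subnets rather than individual addresses.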
128 bits "overkill"? 64 bits "enough"??
My memory is that as IPv6 was being finalized, a bunch of us were upset that it was going to use a fixed size at all; we were hoping it would be variable-length and more or less arbitrarily extensible.
If there's one thing we've learned, it's that arbitrary, fixed limits always become confining. No matter how generous they are at first, no matter how ironclad the arguments of the form "this allocation is so plentiful that every light switch in the solar system could have both a primary and a backup address and we'd still have a factor of three left over" seem to be, sooner or later, somebody is going to have a brainstorm which lets us do something hitherto unimaginable, the only cost being that it allocates something incredibly wastefully, but "that's okay because we've still got more than enough". And soon enough, the hitherto unimaginable becomes the absolutely necessary, and "still more than enough" becomes "just barely enough".
See also Parkinson's Law (and its generalization), of course.
It may take ten years or more, but I'd guess we'll be seeing localized "shortages" of IPv6 addresses within our lifetimes. —Steve Summit (talk) 15:52, 19 January 2016 (UTC)[reply]
Thanks for saying it, Steve. I was, and still am, a huge fan of variable-length addresses. We can always pretend, by building layers and layers and layers of subnetworks inside of deeply nested NATs...
A perfect example of why 128 bits isn't good enough: have a look at some of the recent history in high-performance computing research. There was a serious effort, some time ago, to make the individual CPU-cores network-routable, and in fact to use ethernet as a processor bus (...and why not! Ethernet was as fast as, or faster than, existing bus architectures!) One could envision a day when every memory-word on every single machine could be individually and globally addressable - if the protocol provided for an inexhaustible address-space.
I sort of remember hearing this kind of theory being kicked around for the SiCortex and for the Niagara, and in some transactional memory-over-internet-protocol research papers, and so forth; I'll try to dig some of that up. This was serious pervasive massive parallelism at its best.
Nimur (talk) 17:56, 19 January 2016 (UTC)[reply]
I'd add that getting IPv6 implemented and widely accepted was a gargantuan struggle...it marked a huge change for the underlying mechanisms of the Internet. Given how little extra the additional bits added to the average packet size - it was worth making a change that would be finally, unhesitatingly, "enough" - so we never have to go through this again. While I agree that it seems unlikely that every human on earth will need a billion IP addresses - a similar train of thought got us where we are today. In the era where a PDP-11 computer cost $10,000, it was reasonable to say "there will never be as many computers as people on earth" and hence a 32 bit address was considered more than adequate. In my home, I have 4 smart TV's, 4 Roku boxes, 3 WiFi routers, 4 laptops, 2 desktops, 2 game consoles, 4 cellphones, a printer, two laser cutters and a dozen IOT devices. Maybe 40 addresses for me, personally, at home. So you can see that we made a horribly bad assumption when suggesting that 32 bits would be a "forever good" number. We simply don't know whether there will ever be a need for those 2^64 addresses. Suppose we wind up with self-replicating nanotechnological machines? We could very easily wind up with more than 2^64 of them and want them all to be individually addressable on the Internet. That might not be going to happen - but do you really want to have to go through another round of IPvN updates if that happens?
With 128 bit addressing, we could give a unique IP address to every molecule making up planet Earth - hopefully that's "enough"...but I'm with Steve Summit here - I'd have used a "high-bit-set-means-more-bytes-to-come" approach and thereby allow the address field to be infinitely extensible. Sadly, that worried people who have to be concerned about some idiot sending a trillion byte address and causing every computer on the planet to run out of memory...or making life too complicated for IoT devices. So, yeah - I guess the 'prudent' thing was to pick an ungodly large number. Just don't blame me if/when we need to give a unique address to every quark and photon in the visible universe!
SteveBaker (talk) 18:09, 19 January 2016 (UTC)[reply]
640k addresses should be enough for anyone! (Yes, I know Bill Gates didn't actually say the original "quote".) --71.119.131.184 (talk) 18:44, 19 January 2016 (UTC)[reply]
An addressing scheme doesn't necessarily need to work forever, just long enough to become obsolete for other reasons. For example, if the VIN system for identifying personal vehicles outlives personal vehicles without "filling up", then it has served its purpose. StuRat (talk) 19:32, 19 January 2016 (UTC)[reply]

OK, thank you for your answers. Bubba73 You talkin' to me? 21:32, 19 January 2016 (UTC) [reply]

Resolved

Intensive but short programming bootcamps

If I have one month's living expenses as savings, could I learn anything useful at a programming bootcamp in that time? I mean something pretty immersive, where I'd be coding full time instead of working a job. I know some Java already but I'm open to other languages. Location is Edinburgh, Scotland. 94.12.81.251 (talk) 11:34, 19 January 2016 (UTC)[reply]

I have been teaching programming in many (MANY) different environments since 1989. My experience is that people learn to program when they need to program. The best thing to do is have something you want to do and then do it. For example, you might want to learn Ruby or PHP. Both are popular web development languages. So, come up with a project and develop it in the language you want to learn. Since you already know Java, you know how to do what you want to do in Java. You just Google for how to do it in Ruby/PHP and keep looking at the code examples until you understand what is happening (such as "Why does PHP have all those $ symbols throughout the code!?"). That is how I have become proficient in so many programming languages. I am thrown jobs that people don't want to do, such as adding a feature to a flight simulator written in Ada. It doesn't matter that I've never used Ada before. I just have to look at some references and translate how I'd do it in C into how I should do it in Ada. 199.15.144.250 (talk) 16:29, 19 January 2016 (UTC)[reply]
Once you have learned the basics, (loops, functions, if statements, arithmetic, I/O) I don't think that programming classes/bootcamps will get you very far. You need to practice. You need to write MOUNTAINS of code. You also need a reason to do that. I always suggest learning enough JavaScript to write a simple web-based game - Pong or Breakout or something like that. Most people would like to make a simple game - and that makes it a better example to work on than something that doesn't motivate you as much. Ideally, you'd also want some kind of a mentor who could gently nudge you in the right direction when you get completely stuck. SteveBaker (talk) 17:24, 19 January 2016 (UTC)[reply]
If your plan is to brush your programming skills in just one month and then work as a programmer right away because you need the income, I am afraid that your deadlines are too tight.
On a brighter note, there is something positive about your case. In general, I think some people never learn to program. That sounds pretty harsh, but yes. No matter how much effort they invest into it, their programming sucks. Since you already learned Java and are willing to keep learning, you seem to belong to the other group, the one that learns to program. However, programming at a professional level requires more time.
There are other things you could try in the same field though. Having a logical mind, you could try other IT jobs: web-master or tech support, for example. Glasgow, not far from you, appears to be a rising tech hotspot [9], with many jobs available [10]. Scicurious (talk) 13:47, 20 January 2016 (UTC)[reply]
Thanks for the support :-) I actually have closer to 2 months' savings, but I was leaving myself time to find another job afterwards. I already work in tech support, but it's call centre shift work and I'd like to move away from that if possible. Probably to desktop support in the short term. So I'll keep plugging away at the job applications, and mucking about with code in my spare time. 94.12.81.251 (talk) 18:05, 20 January 2016 (UTC)[reply]

Ripping DVD movies to ISO on one computer, converting them to MP4 on another

I have a stack of movie DVDs, an old slow Windows laptop with a DVD drive, and a much faster Mac without one. I want to get the movies off the discs so I can watch them more easily while travelling. Backing them up all the way to MP4 on the Windows machine would take weeks. Is there a combination of software that will copy encrypted DVDs to ISO on Windows, and convert ISOs to unencrypted MP4 movies on Mac? Something free if possible, naturally :-) 94.12.81.251 (talk) 13:46, 19 January 2016 (UTC)[reply]

https://wiki.videolan.org/Rip_a_DVD/ explains how to rip DVDs in VLC, but I have never tried it because ImgBurn works so well. --Guy Macon (talk) 14:31, 19 January 2016 (UTC)[reply]
I've been using HandBrake (which I just learned works on Windows) to convert my old DVDs to MP4s on my wife's Mac. I've been thinking of getting a cheap optical drive to plug into my Mac to speed up the process with two systems doing the conversions. Just another idea for you: Get a cheap disc drive and plug it into the Mac to keep from doing two conversions. Dismas|(talk) 18:25, 19 January 2016 (UTC)[reply]
(OP) Thanks! I settled on the 21 day free trial of AnyDVD on the Windows laptop, to decrypt the discs on-the-fly so ImgBurn (free) can read them, and create ISO images. It takes 20-30 minutes even on this pretty old machine, so that works. 21 days is long enough to finish all the discs I've got waiting, but they have a 20% sale on until 24th January if anyone feels like buying. On the Mac I'm using Handbrake (free) to encode high-quality MKV files from the ISOs. It's fast enough to finish a movie at "veryslow" quality in about 90 minutes, and I can leave a queue running all night. Once I figured out I have to choose all my settings THEN create a custom preset to save them, I was fine :-) 94.12.81.251 (talk) 17:59, 20 January 2016 (UTC)[reply]
USB optical drives are fairly cheap nowadays. LongHairedFop (talk) 19:36, 20 January 2016 (UTC)[reply]
Yeah I know, but I didn't feel like going out to buy one or waiting for a delivery. 94.12.81.251 (talk) 20:05, 20 January 2016 (UTC)[reply]

In a microchip, what are the physical equivalent of a head, state register, tape or finite table

In a logical description of an abstract machine (able to process information), there is an infinite tape and a head reading/writing and moving the tape left/right. There are also a state register and a finite table. What are the physical equivalents in a real microchip implementation? I suppose an approximation of the infinite tape would be RAM or HDD. And I also suppose the finite table is the instruction set. Is that right? What about the other two? --Llaanngg (talk) 15:54, 19 January 2016 (UTC)[reply]

It sounds like you're thinking of a Turing machine, but the vast majority of real processors use architectures nothing like a Turing machine, so I'd say looking for the head is futile. It might be kinda sorta similar to the program counter, but not really. —Steve Summit (talk) 16:04, 19 January 2016 (UTC)[reply]
In a classical Turing machine, the values on the tape are somewhat like registers in a CPU. The registers in the CPU don't move around, so you don't need a head to read/write them. Further, the Turing machine stores more than values on the tape. It can store instructions as well. In a modern computer, the instructions are in a process control block, which is stored in memory, not the CPU. Technically, they tend to be stored in logical memory, where part is in a backing store and part is in physical memory, but it appears to be all real memory to the CPU. So, that would be like a separate Turing machine altogether that sends information to the CPU. Trying to simplify a CPU down to a Turing machine requires you to ignore the complexities. However, it is good to comprehend how a Turing machine works because that is how linear programming works - which is how most people write programs. 199.15.144.250 (talk) 16:26, 19 January 2016 (UTC)[reply]
What they said. The Turing machine is an abstract model of computation, used for reasoning about computation. Real-world computer architectures are mostly practical versions of register machines, although there are some stack machines in use. --71.119.131.184 (talk) 18:37, 19 January 2016 (UTC)[reply]
A typical computer is only 'equivalent' to a Turing machine in what it can do and what it can't. There doesn't have to be (nor often is) a direct correspondence between the inner functioning of one versus the other. Any computer (with sufficient time and memory) can emulate a Turing machine - and a Turing machine can emulate any computer. That's a demonstrable mathematical relationship - but it doesn't depend on their internal architectures. XKCD 505 has a great example of how one can imagine architectures for computation that look nothing like either a Turing machine or a modern computer. THIS, on the other hand, is an actual Turing machine (albeit with only a finite tape) built out of Lego. But a system that's equivalent to a Turing machine can be made from all sorts of elements. HERE is one built from model train tracks! SteveBaker (talk) 17:15, 19 January 2016 (UTC)[reply]
Modern computers are more von Neumann machines than Turing machines. Ruslik_Zero 20:53, 19 January 2016 (UTC)[reply]

How does a webmaster program a forum?

What skills are necessary for a webmaster to program a forum? That is, create a working system that allows the end user to type something into a form and automatically see the result printed on the page? The webmaster acts as administrator and can moderate the postings, like a normal bulletin board/online forum. A webmaster may want to create a customized web forum, because the intent of the website may be different from the other types already on the market, or the webmaster may want to have full control over the look and function. 140.254.77.184 (talk) 20:14, 19 January 2016 (UTC)[reply]

Usually they would install software that has already been written, and then customise that with names, style sheets, set up users, groups etc. Drupal can do the job, but there are many others, see Comparison of Internet forum software. Otherwise the programmer will have to know about forms, POST method, user security, and databases to store the information on the server. From the comparison page you can see that the most popular language is PHP, and database MySQL. So the webmaster should learn these. Is this a homework question? Graeme Bartlett (talk) 21:13, 19 January 2016 (UTC)[reply]
Do you actually mean "program" as in writing a piece of software, or do you simply want to install existing software? There are tons of Web forum software packages that you can use "off-the-shelf". If you (or whoever) do actually want to write software, well, you need to have general programming knowledge first. Beyond that maybe Web programming will point you in the right direction. --71.119.131.184 (talk) 21:32, 19 January 2016 (UTC)[reply]
Turning over your question in my head a little more, it seems like what you might want to do is tweak an existing software package. A lot of forum software, CMSes, and the like allow you to extensively customize your installation, including the "look and function". It's unlikely you would need to write new software from scratch unless you really want to do something that's difficult to do with existing software. --71.119.131.184 (talk) 01:49, 20 January 2016 (UTC)[reply]

SQL max function question

Today at work, I spent over an hour puzzling over why my code didn't work. In the end, it turned out that I had written an SQL query in the form of select max(number) from table where this=that, which I thought was supposed to either return null or simply not return anything if there were no rows satisfying the condition this=that. But then I realised it returned 0. So I changed the query to abandon the max function and instead simply select number from table, in descending order, and changed the code to stop at the first result. Is there a way, in plain SQL, to make the query do what I thought it would? JIP | Talk 21:36, 19 January 2016 (UTC)[reply]

How about:
SELECT COUNT(*), MAX(W.NUMBER)
  FROM TABLE W
 WHERE W.THIS = W.THAT;
Then use a count greater than zero to indicate a match was found. StuRat (talk) 22:04, 19 January 2016 (UTC)[reply]
What database engine are you using? In Oracle, I get a single record with the value null. In MySQL/MariaDB, I get a null record. In MS-SQL, I get a single record with a null value. Perhaps you are using an interface like Ruby, Java, or PHP that is translating the value "null" into zero. 209.149.115.240 (talk) 13:16, 20 January 2016 (UTC)[reply]
SQL's CASE expression can act as a ternary operator. This should work, but I haven't tested if it's even legal SQL.
select case count(W.NUMBER) when 0 then NULL else max(W.NUMBER) end as NUMBER from TABLE W where W.THIS = W.THAT
MS SQL Server also has the IIF function, if that's the server you are using. LongHairedFop (talk) 19:24, 20 January 2016 (UTC)[reply]

January 20

Representing bits in Magnetic Drum Digital Differential Analyzer

Magnetic Drum Digital Differential Analyzer says that it was the first machine to represent bits as voltages. I checked the reference, and it says that the machine was the first to use voltages for bits instead of pulses, as in ENIAC and UNIVAC I. I've always heard that a high voltage would represent a 1 and a low voltage would represent a 0. So I don't see the distinction between voltages and pulses to represent bits - can someone explain that? Bubba73 You talkin' to me? 19:35, 20 January 2016 (UTC)[reply]

I don't know much about it, but I think they might be trying to say that ENIAC/UNIVAC used AC pulses, and the MDDDA used DC voltage levels. Without access to the reference, I can't be sure. --Wirbelwind(ヴィルヴェルヴィント) 19:49, 20 January 2016 (UTC)[reply]
At least part of the reference is on Google books, but it has its own entry starting on page 163, which says "In contrast to ENIAC and UNIVAC, which used electrical pulses to represent bits, Maddida was the first computer to use voltage levels ..." Bubba73 You talkin' to me? 19:59, 20 January 2016 (UTC)[reply]
I think we need to revise some claims on this and related pages. The basic electronic memory device of the ENIAC[11] was the dual triode vacuum tube flip-flop, which definitely represented bits as voltage levels. In addition, ENIAC had mercury delay line memory, which used physical pulses/waves/ripples in the mercury to represent bits. What confuses some people is the fact that digits were communicated from one unit of the ENIAC to another in pulse form. Things have not changed all that much; the computer I am writing this on communicates with a SATA hard disk and a USB thumb drive using pulses, but the RAM and CPU use voltage levels. (BTW, in 1953, ENIAC's memory capacity was increased with the addition of a 100-word static magnetic-memory core, adding yet another way to store bits.[12]) --Guy Macon (talk) 21:39, 20 January 2016 (UTC)[reply]
ENIAC didn't have mercury delay line memory, but most others just after it did. UNIVAC I did. I agree with a change, but I don't understand it well enough. (I stated on the article's talk page that it didn't make sense to me.) I have the book by Reilly that is used as a reference but I don't have the Annals of the History of Computing. I used to have it :-( Bubba73 You talkin' to me? 22:27, 20 January 2016 (UTC)[reply]
Thanks for catching my error. "When mercury delay lines came up for consideration as internal memory, Lt. Colonel Gillon, as the responsible supervisor for the Ordnance Department, insisted on the tried and tested decade ring counters in spite of the inherently reduced storage capacity. However, in view of the great promise of the mercury delay lines he obtained authorization for a new and separate contract calling for a new machine, using these delay lines. This machine, when completed, was the EDVAC..."[13] --Guy Macon (talk) 15:06, 21 January 2016 (UTC)[reply]

Difference between == and === in JavaScript

I have for several years thought about what exactly the difference is between == and === in JavaScript. I know they are supposed to mean "equal" and "strictly equal" but don't know what this actually means in practice. Could someone give me examples where the two operators yield different results with the same operands? JIP | Talk 21:19, 20 January 2016 (UTC)[reply]

> ""==0        // == coerces "" to the number 0, so they compare equal
true
> ""===0       // === never converts types: string vs. number, so not strictly equal
false
> []==0        // [] converts to "" and then to 0 under ==
true
> []===0
false
> false==0     // false converts to 0 under ==
true
> false===0
false

-- Finlay McWalterTalk 21:43, 20 January 2016 (UTC)[reply]

Take care though, because JavaScript is treacherous. Just look at these examples:
> 1 == "1"     // true, automatic type conversion for value only
> 1 === "1"    // false, because they are of a different type

--Scicurious (talk) 22:00, 20 January 2016 (UTC)[reply]

How does a cracking program know when an encrypted string/file has been decrypted?

If it tests a password on an encrypted message, couldn't a wrong password output another string, and only the true password output the right string? For example, if the encrypted string is "no seuabn cwdiueit d osf oidistshi", a wrong password would output "stce doiitdiiu u nofbsiho nwdessa", but the right password would output "discussion about how it is defined". That would hugely delay any brute forcing attack, wouldn't it?--Scicurious (talk) 22:36, 20 January 2016 (UTC)[reply]

It's pretty quick to check each word against a database containing the English language. One technique to stymie this is to spell words improperly, so the program doesn't think it found anything useful. StuRat (talk) 22:55, 20 January 2016 (UTC)[reply]
[citation needed]. It would be a very weak cracking program that relies on just dictionaries. See Ciphertext-only attack and this article. In a pinch, any text that has a character distribution that is far from random is a good candidate to inspect manually. --Stephan Schulz (talk) 23:06, 20 January 2016 (UTC)[reply]
What if the encrypted information is not human written text? It could well be a list of random passwords, credit cards numbers, or accounting information. --Scicurious (talk) 23:20, 20 January 2016 (UTC)[reply]
In general, the decrypted text will have a lot less entropy per data bit than a scrambled version (basically, because plaintexts are from the small set of documents that make sense to us, while cyphertexts are typically from the much larger set of all documents). That is not always true - if you have a cyphertext that contains only truly random passwords, with no structure information and no known plaintext, there is no way to decrypt the file. Similarly, a good compression algorithm removes redundancy and hence increases entropy per data bit, which makes compressed files harder to decode. --Stephan Schulz (talk) 00:07, 21 January 2016 (UTC)[reply]
Yes, various randomness tests applied to brute-force recovered candidate plaintexts should be able to either find the correct key or winnow the match set down to some values that can practically be examined by other methods (e.g. file magic). I wrote a little program that brute-force decodes a (small) AES keyspace and does a Kolmogorov–Smirnov test (which is probably overkill) analysis on the recovered plaintext. By sorting for the result with the lowest p-value (when compared to a uniform distribution) it can comfortably distinguish the correct key - I've tried with text, jpg, mp3, and gzipped-jpg. As Stephan says, it won't work for genuinely random input data, but in practice the kind of thing that someone with the resources to bruteforcing a real problem will be looking for are really unlikely to be genuinely random. -- Finlay McWalterTalk 16:52, 21 January 2016 (UTC)[reply]
At this point, one might think to oneself "tee hee, then I shall fill my disk with lots of random data, to thwart those brute forcers". Indeed it would, but if one is in (or visits) a country with a mandatory key disclosure law (in law, or just de facto) then one might find oneself in the invidious position of being unable to "decrypt" the random data, and unable to prove that it's actually just random garbage and that one isn't failing to comply with the authorities' "request". -- Finlay McWalterTalk 17:06, 21 January 2016 (UTC)[reply]
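To make the low-entropy test concrete, here is a minimal sketch (in C++; it is a simple byte-frequency scorer, not the Kolmogorov–Smirnov program described above) of how a brute-forcer might rank candidate plaintexts. Data decrypted with the wrong key tends to score close to 8 bits per byte, while text and other structured plaintexts score noticeably lower; well-compressed or genuinely random plaintexts stay near 8 and need the stronger tests mentioned above, or can't be detected at all.

#include <cmath>
#include <cstddef>
#include <cstdint>
#include <vector>

// Shannon entropy of the byte-value distribution, in bits per byte (0..8).
double byte_entropy(const std::vector<std::uint8_t>& data)
{
    if (data.empty()) return 0.0;
    std::vector<std::size_t> count(256, 0);
    for (std::uint8_t b : data) ++count[b];

    double h = 0.0;
    for (std::size_t c : count) {
        if (c == 0) continue;
        double p = static_cast<double>(c) / data.size();
        h -= p * std::log2(p);        // accumulate -p*log2(p) over byte values
    }
    return h;
}

// A brute-force loop would decrypt the ciphertext under each candidate key,
// score the result with byte_entropy(), and keep only the lowest-scoring
// handful for closer inspection (dictionary checks, file magic, and so on).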
If you know it is text, a quick check is that all the characters are printable; in fact, just check for the characters being 0x20 or more, plus some others like null, tab, carriage return or linefeed. If the text has more than about three times as many characters as there are bits in the key, this test will start letting only the correct key through. For shorter strings one would need to do extra tests as above, and you just can't tell for text which is around the same length as the key. Dmcq (talk) 00:29, 21 January 2016 (UTC)[reply]
That would apply if it's encoded in ASCII or some other code with unprintable characters, but the OP's example makes me think they are using a smaller character set, perhaps only 27 (lowercase A-Z plus space). StuRat (talk) 01:03, 21 January 2016 (UTC)[reply]
It's recommended to compress before encrypting, to make known plaintext attacks harder. So I wouldn't necessarily expect a successful decryption to yield printable characters. (Of course, with that said, one could always just attempt the decompression and, if it failed, assume that the decryption was incorrect.) —Steve Summit (talk) 23:11, 21 January 2016 (UTC)[reply]
Some encryption / decryption schemes are actually designed to have embedded headers or checksums to quickly check if a decryption attempt resulted in reasonable results. Of course whether such a quick check is present will depend entirely on the encryption approach used. Dragons flight (talk) 01:56, 21 January 2016 (UTC)[reply]
(ec) Most encrypted messages have a MAC to guarantee integrity, or failing that at least an encrypted checksum or magic number so that if you mistype the password you will get a helpful error message instead of gibberish. Otherwise, you have to know or guess something about the plaintext, which is called a crib. -- BenRG (talk) 01:59, 21 January 2016 (UTC)[reply]

January 21

Complexity of Addition in Finite Automata

I couldn't understand the finite automaton shown in the Article on Finite Automata (P.65). It might be because I'm weak in binary mathematics. Addition is very simple. I can't get why the author has shown 'addition' as complex. Could anyone help me and give a brief explanation of each state given in the finite automaton? JUSTIN JOHNS (talk) 08:19, 21 January 2016 (UTC)[reply]

The section (which begins at p. 53) is named "The complexity of addition". It could well be named the "simplicity of addition". In both cases the author would not be implying that binary addition is simple, or complex. He just wants to describe its complexity. As a side note, I have to say that the complexity of binary addition is lower than that of decimal addition. Scicurious (talk) 13:08, 21 January 2016 (UTC)[reply]
You'll probably get more from the course notes than from the lecture slides, unless you are actually attending lectures. The slides, by themselves, are not very instructive; they are only supplemental cues for the speaker. Nimur (talk) 15:30, 21 January 2016 (UTC)[reply]
This is really a linguistics question. For the author (I'm not sure about English-speakers in general) "complexity" is unmarked, "simplicity" is marked. Simpler instance: asking "how tall is he?" does not imply he is tall, whereas "how short is he?" does imply he is short. jnestorius(talk) 15:34, 21 January 2016 (UTC)[reply]


It also looks like there have been some rearrangements of that class syllabus ("a pretty serious overhaul", according to the lecturer); the Fall 2015 course reader does not contain a chapter on discrete finite automata. I bet this class was split in half, so we'll have to track down notes for the subject matter you're looking for.
Question for the OP: why are you reviewing the lecture slides for 2011, in the first place?
The current syllabus for this section, covering DFAs, refers to Sipser's book, which I do not believe is available online at zero-cost. If you're having trouble following lecture slides, that book would be your best resource to help explain it. It is available for purchase.
Nimur (talk) 15:36, 21 January 2016 (UTC)[reply]

I guess there aren't any notes for 'complexity of addition' or even for 'finite automata' in 2011, even though slides are present. I'm reading the 2011 lectures because I think they're the most 'stable'. I couldn't see anything in the course notes about 'complexity of addition' or even about 'finite automata'. I would just like to get an explanation of the DFA shown in the Article on Finite Automata (p. 65). Could anyone help me? JUSTIN JOHNS (talk) 09:08, 22 January 2016 (UTC)[reply]

Testing an HTML script before publishing online?

Is there a way to test an HTML+CSS+JavaScript script before publishing online on one's own web server and buying a domain name? What if the webmaster also wants to do a bit of server-side scripting in order to create a database of registered users? Is there a way to test out the script before it is published online? What happens if the website becomes popular and traffic unexpectedly rises sharply? How do people generally upgrade the hardware to support the incoming traffic and keep the website running? How much does it cost to upgrade hardware? 140.254.136.149 (talk) 17:55, 21 January 2016 (UTC)[reply]

You can test your web page in your local browser, with a server running on your own computer. There is no need to go public for this, and you certainly won't need a domain name.
You probably don't want to manage the hardware yourself. Hosting has become quite cheap and flexible. There are plenty of cloud hosting providers that will serve your site whether it gets a couple of hits or several thousand (and charge you accordingly). --Scicurious (talk) 18:38, 21 January 2016 (UTC)[reply]
You still have not answered how to test the webpage in the local browser in a server running on one's own computer. And you still haven't answered the question regarding maintaining a website on one's own server. Sometimes, one wants to have one's own web server, because one wants to do client-side and server-side scripting and allow the website to become user-friendly, dynamic, and interactive. In that case, a web host just won't do. 140.254.136.149 (talk) 19:23, 21 January 2016 (UTC)[reply]
If you're not doing server-side includes/scripting, then you can just place the files in a directory on your PC and point the browser at them (drag the starting page from the file explorer into your web browser). For SSI, or more extensive testing, you can install Apache (on all operating systems) or Microsoft IIS Express (MS-Windows OSes, XP or later). Apache is free and fully fledged. IIS Express is free and fully featured, but hobbled to a few connections at a time; you won't run into that limit when testing your own pages. For either server, point your browser at http://127.0.0.1/ which is localhost, an alias for your current PC. LongHairedFop (talk) 20:09, 21 January 2016 (UTC)[reply]
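If you have Python 3 installed, its standard library also gives you a throwaway local server for static pages (no SSI or server-side scripting); run something like the following from the directory containing your files and browse to http://127.0.0.1:8000/ :

    # Serve the current directory locally; equivalent to running
    # "python3 -m http.server 8000" from a shell.
    from http.server import HTTPServer, SimpleHTTPRequestHandler

    HTTPServer(('127.0.0.1', 8000), SimpleHTTPRequestHandler).serve_forever()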
Back when I maintained a large website, I had a second copy of it running on one of my internal servers for testing. It wasn't publicly visible; it didn't have a public DNS entry; I think we all just hit it at its private 192 address or something. It was a certain amount of work to maintain this redundant, second instance, but it was absolutely vital for us to be able to do testing of new pages and functionality before we went live. I assume this sort of thing is common across the industry. —Steve Summit (talk) 23:04, 21 January 2016 (UTC)[reply]
Honestly - web hosting is pretty cheap these days. I pay $9/month...with a bunch of domains and the ability to SSH and SCP into my account. Do that, set up some throw-away URL that nobody will ever visit (like ssfnl429oksldf.org or something - you can get that for under $10 and it'll be good for a year) place an empty index.html at the root, and make a subdirectory with some other garbage name. Set your website up in there - rename the subdirectory every once in a while - and you can be pretty sure that nobody will come visiting unless you tell them the full URL to the page. Search engines won't spider down into the subdirectory, so google searching on your project won't get them there either. When you have it all working to your satisfaction, transfer it over to the 'real' url to make it go live. Keep your garbage site for making improvements and bug fixing, so you can check your changes without screwing up the real site. It's worth the tiny additional investment.
Sure, you can run simple sites (without server-side stuff) on your local machine - but as soon as you get into wanting an SQL server (you will) and hooks into places like PayPal or Google Analytics or whatever - you'll soon find that you need to do development on the real thing. It's also good to use the same hosting company so you can get to grips with what versions of stuff like PHP they have set up, what memory and bandwidth limits they might impose, what share of a cpu you get, etc, etc. For example, the hosting company that I use runs SQL instances on separate nodes from the Apache instances - and that's something you need to know when you design your site - they also impose limits on how you can set up various .ini files, so you might find some Apache or PHP feature you want to enable can't be enabled for whatever reason.
SteveBaker (talk) 05:17, 22 January 2016 (UTC)[reply]

@140.254.136.149: If you just need HTML+CSS+JavaScript then you don't need a server; you can simply edit those files with a text editor (I recommend Notepad++) and view them in your web browser. If you want to create a MySQL database et cetera then I would recommend using Bitnami's WAMP stack. You can look at the list of stacks here. You can have it running in a couple of minutes. You can also try XAMPP (an Apache distribution containing MariaDB, PHP, and Perl). The Quixotic Potato (talk) 10:55, 22 January 2016 (UTC)[reply]

Software for paperless governmental procedures

What software do governmental agencies, in the US or abroad, use for implementing paperless paperwork? Sorry for the oxymoron. --Scicurious (talk) 22:36, 21 January 2016 (UTC)[reply]

There would be a huge range of software, from an email system to replace snail mail, to a database to replace paper customer and product records or a spreadsheet to replace paper accounting records. CAD systems replaced paper architecture and engineering drawings. Project management software eliminates paper there. Of course, many businesses find themselves printing it all out anyway. StuRat (talk) 07:43, 22 January 2016 (UTC)[reply]
I do not mean paperwork in general, but software that implements a legally binding way of sending messages: contacting the government, notices, appeals, summonses, and so on. --Scicurious (talk) 14:42, 22 January 2016 (UTC)[reply]
Since most legally binding stuff requires signatures, I searched /digital signature warrant/ and got this relevant news article about CA using DocuSign and iPads instead of paper-and-ink signatures for search warrants: [14]. I'd suggest /"digital signature" [legal instrument] [country/state]/ would be a good search avenue. Adobe obviously has a vested interest, but here's their overview of how digital signatures are legally interpreted around the globe [15]. SemanticMantis (talk) 18:35, 22 January 2016 (UTC)[reply]

January 22

How can the Internet send messages so fast?

Does it take longer to receive an e-mail sent from Beijing to New York City than one sent from London to Paris? What affects the speed at which the messages are sent? What is actually connecting the electronic devices? Where are the Wi-Fi signals and telephone signals and radio signals coming from? Are non-human living things ever detrimentally affected by the artificially created signals? 140.254.70.165 (talk) 12:59, 22 January 2016 (UTC)[reply]

  1. Yes.
  2. Lots of things. One is, of course, the distance the messages travel, which you brought up. Other things include network congestion, how quickly every computer along the way processes the messages, and the speed of the transmission mechanisms (wireless, fiber optics, etc.) used.
  3. A series of tubes. Okay, more seriously, wires and fiber optic cables. See Internet backbone. Wireless networking is of course a thing, but because it is less reliable and slower, it's generally only used for connecting users' devices (cell phones, laptops, etc.) to access points. The Internet backbone is all wired.
  4. Antennas.
  5. In general, no. The amount of energy deposited in objects by radio waves drops off rapidly the further you are from the source: the inverse-square law in action. And no organisms, as far as I'm aware, use radio waves for communication, so there's no interference issue. --71.119.131.184 (talk) 13:35, 22 January 2016 (UTC)[reply]
As a follow-up, to directly answer the question in the section title: computers are fast. Modern computers can have clock speeds over 3 gigahertz, meaning very roughly that they "do stuff" 3,000,000,000 times a second. (There are factors other than clock speed that determine the actual speed of computers, but this is fine as a very loose approximation for getting the point across.) Transmitted messages travel at the speed of light (for optical and radio signals) or at the somewhat slower, but still extremely fast to humans, propagation speed of electrical signals in wires. --71.119.131.184 (talk) 22:39, 22 January 2016 (UTC)[reply]
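A rough back-of-the-envelope for the distance part of the question (the speed and distances are approximate, and real routes are longer and add queueing and processing delays):

    # Light in optical fibre travels at roughly 200,000 km/s (about 2/3 of c).
    def min_one_way_delay_ms(distance_km, fibre_speed_km_per_s=200_000):
        return distance_km / fibre_speed_km_per_s * 1000

    print(min_one_way_delay_ms(11_000))  # Beijing to New York great circle: ~55 ms
    print(min_one_way_delay_ms(350))     # London to Paris: ~2 ms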
You might be interested in reading a bit about older precursors to modern email, like FidoNet's FidoMail. It's not what we use now, but it is in some ways easier to understand. There, the goal was to link local BBSs into a network that could send mail internationally. Mail was relayed the distance of a local phone call at each step, and often steps would wait until night time when more phone lines were open. So a mail from NYC to Beijing might have taken 5 days, while Paris to London might have taken 2. SemanticMantis (talk) 16:34, 22 January 2016 (UTC)[reply]
The first time I got a reply to Internet mail within minutes, I was startled because I supposed that it took hours (not days!) to relay mail from California to New York. —Tamfang (talk) 05:19, 23 January 2016 (UTC)[reply]
Those of us whose long-distance email connections were by UUCP, when the Internet wasn't available to the general public, remember when it often did. You had to use email addresses with an explicit routing, in the style host1!host2!host3!host4!user, and each of those hosts might only connect once an hour to the next one, or might not connect at all during business hours so as to save on long-distance telephone charges. --76.69.45.64 (talk) 19:27, 23 January 2016 (UTC)[reply]

Question about webcams with microphones

Is it possible to get a webcam with a built-in mic that is good enough to hold its own as a standalone microphone, without using the webcam part? I want to record myself gaming with the webcam but I also occasionally want to do just voice-overs. Also, any suggestions for good webcam/mics for under 100 dollars? 2605:6000:EDC9:7B00:E017:92A9:4BB5:AD1D (talk) 19:20, 22 January 2016 (UTC)[reply]

No, that is not possible. I recommend buying a separate mic; built-in mics are worthless. A good webcam is the Logitech C920 (or C910). Newegg sells C920s for $63.79 with free shipping; over at Amazon you pay 65 bucks. The Quixotic Potato (talk) 22:52, 22 January 2016 (UTC)[reply]
Thanks! I took your advice and bought it, also got a Blue mic to go with it, stayed within my budget of 100 dollars. 2605:6000:EDC9:7B00:6CCA:6B9A:F4EB:2251 (talk) 01:45, 23 January 2016 (UTC)[reply]
YVW. The Logitech C920 (and C910) are popular choices among YouTube/Twitch streamers. Many built-in mics in webcams are barely good enough for Skype, I cannot recommend them to anyone! The Quixotic Potato (talk) 03:03, 23 January 2016 (UTC)[reply]

January 23

Certifying that picture is not older than a certain time

How can someone certify that a picture is not older than the timestamp it has in it? Is using tamperproof specialized hardware the only option? Notice that this is different from certifying that the picture already existed at time t. --Scicurious (talk) 15:38, 23 January 2016 (UTC)[reply]

You could incorporate unpredictable information in one of the free-form EXIF fields and then submit it to a timestamping service. The unpredictable information might be the numbers drawn in a famous lottery. Jc3s5h (talk) 16:18, 23 January 2016 (UTC)[reply]
I am afraid that this won't work, and the task at hand is impossible. You could still add the unpredictable information to the EXIF data years after the picture was taken, and submit it anyway. The problem remains: there is no difference between an old bit and a new bit of information, and every bit in my machine can be changed by me at will. You could obviously take a picture of a current newspaper, in what is called authentication by newspaper or newspaper accreditation, but you would have to perform some digital forensic analysis on the picture to exclude a possible photoshopped image. --Llaanngg (talk) 16:32, 23 January 2016 (UTC)[reply]
The simple reason why the task is impossible is that anyone could always take a new picture of the old picture. --76.69.45.64 (talk) 23:18, 23 January 2016 (UTC)[reply]
  • If this is just a general poser, and you aren't looking only for a digital timestamp, you can date certain historical events such as a picture of Obama being inaugurated to no earlier than his inauguration, but that's not very useful when you are dealing with generic items. μηδείς (talk) 02:53, 24 January 2016 (UTC)[reply]
See Trusted timestamping, if it is a photo that you've taken recently. This will allow others to verify that the photograph existed at the time you had it notarised, but they can't verify how long it had existed prior to you notarising it - including a newspaper, etc., as Llaanngg mentions, will give an earliest possible date. If the document is very sensitive, you could submit another document that outlines the original, and includes a Cryptographic hash of it. LongHairedFop (talk) 12:23, 24 January 2016 (UTC)[reply]
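A minimal Python sketch of computing such a hash, which is what you would actually submit to a trusted timestamping service (the file name is a placeholder):

    import hashlib

    def sha256_of_file(path: str) -> str:
        digest = hashlib.sha256()
        with open(path, 'rb') as f:
            # Read in chunks so large photos don't have to fit in memory.
            for chunk in iter(lambda: f.read(1 << 16), b''):
                digest.update(chunk)
        return digest.hexdigest()

    print(sha256_of_file('photo.jpg'))  # hypothetical file name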

Pseudocode: good enough for teaching, not good enough for programming

Could a compiler for some form of pseudocode be created? Otherwise, why would it only be precise enough for teaching, but not precise enough to be compiled into a program? --Llaanngg (talk) 16:34, 23 January 2016 (UTC)[reply]

It's an artificial intelligence problem. We don't know how to write a compiler that's as good as humans at filling in "obvious" gaps. -- BenRG (talk) 18:03, 23 January 2016 (UTC)[reply]
Could you cite a concrete example of pseudocode that would be an "obvious" gap for a human, but a stumbling block for a compiler? --Llaanngg (talk) 18:25, 23 January 2016 (UTC)[reply]
Here's a random example from a problem I was just thinking about: given n "red" points and n "blue" points in general position in the plane, find a pairing of them such that the line segments between paired points don't intersect. An algorithm that works for this is "Pick an arbitrary pairing. While there are still intersecting line segments { pick a pair of intersecting segments and uncross them }." (This always terminates because uncrossing reduces the total length of the segments (triangle inequality), but that isn't part of the algorithm.) Turning this into a program requires a pretty good understanding of the problem statement and plane geometry. For example you have to figure out what "uncross" means and that there's only one way to do it that preserves the red-blue pairing. You also need to choose an input and output encoding (how you provide the points to the program and how it reports the answer). Still the pseudocode is useful because it contains the nonobvious core idea, and everything else is straightforward. -- BenRG (talk) 19:08, 23 January 2016 (UTC)[reply]
But I think many pseudocode algorithms are close enough to executable functions that you might as well just use Python as your "pseudocode". -- BenRG (talk) 19:10, 23 January 2016 (UTC)[reply]
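To illustrate, here is one way that pseudocode might be fleshed out (a sketch, not a definitive implementation: 'uncross' is interpreted as swapping the two partners, which the triangle-inequality argument justifies, and points are assumed to be (x, y) tuples in general position):

    def cross(o, a, b):
        # Sign of the cross product (a - o) x (b - o): which side of line o-a is b on?
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    def segments_cross(p1, p2, q1, q2):
        # Proper intersection test; general position means no collinear triples.
        return ((cross(q1, q2, p1) > 0) != (cross(q1, q2, p2) > 0) and
                (cross(p1, p2, q1) > 0) != (cross(p1, p2, q2) > 0))

    def noncrossing_matching(red, blue):
        # Start with an arbitrary pairing and keep uncrossing intersecting pairs.
        # Each swap strictly shortens the total segment length, so this terminates.
        match = list(range(len(blue)))     # red[i] is paired with blue[match[i]]
        changed = True
        while changed:
            changed = False
            for i in range(len(red)):
                for j in range(i + 1, len(red)):
                    if segments_cross(red[i], blue[match[i]], red[j], blue[match[j]]):
                        match[i], match[j] = match[j], match[i]
                        changed = True
        return [(red[i], blue[match[i]]) for i in range(len(red))]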
The main issue is that pseudocode is generally not a formally defined language, hence the name. "Real" programming languages have formally defined grammar and syntax. Read something like the C standard to get an idea of how much goes into doing this. This is so, ideally, every statement that can possibly be written in the language has an unambiguous meaning that can be interpreted by computer programs (here I mean "interpreted" in the general sense; I'm not specifically referring to interpreted languages). A program written in C, for instance, will, ideally, always mean the exact same thing to any standard-compliant C compiler or interpreter. Contrast this with the state of machine translation: natural languages aren't well-defined, so there's tons of ambiguity, and consequently the programs we have at present often get things completely wrong. You could consider pseudocode a kind of "natural language for programming"; it's intended to convey general ideas to other humans. If you formally define the language you're using, it's no longer pseudocode; it's a programming language. --71.119.131.184 (talk) 06:38, 24 January 2016 (UTC)[reply]

January 24

we couldn't save your file to pdf/docx this time

I am posting the below question for another user, I do not have a smart phone, and am hence clueless. μηδείς (talk) 02:40, 24 January 2016 (UTC)[reply]

"I use an iPhone 4S running iOS 8.3, and I have recently created a Google account. I downloaded the Google Docs App and I uploaded a word document to my account. It's all fine until I try to share that document with someone else. The "share" icon appears in light grey while the rest options remain black. So I cannot share the document. When I go to the convert to pdf/docx it says "we couldn't save your file to pdf/docx this time". Does anyone have any idea of what might this problem be? Is it possible that I also might need to download the Google Drive app for me to be able to share files through Google Docs?"

Yeah, sounds like you need the Google Drive app. The Quixotic Potato (talk) 05:27, 24 January 2016 (UTC)[reply]
AFAIK, a new out-of-the-box iPhone doesn't have shared storage that all apps can read and write to. See this: [16] Each app contains its own data storage, and will only be able to share it with other apps that are designed to know it exists. Google Drive must be performing that shared-storage function for other Google apps. If you plug an un-jailbroken iPhone into your computer, the only "drive" that shows up is the camera roll. 94.12.81.251 (talk) 18:39, 24 January 2016 (UTC)[reply]