Wikipedia:Reference desk/Computing
= April 22 =

== What kind of quality control does open source software have? ==

At my job, in order to get code in production, we have these layers of quality control:

1. Developer unit-tests code<br />
2. Second developer does code review<br />
3. QA tester tests the code in our test environment<br />
4. Users test the code in our test environment. QA approval is required before going to the next level.<br />
5. QA tester retests the code in our integration environment.<br />
6. Users retest the code in our integration environment. Both QA and UAT approval is required to go to the next level.<br />
7. Change is presented to the Change Approval Board (containing representatives from the development, DBA, QA and infrastructure teams). All coding changes must receive signoff from the board.<br />
8. Immediately after going to production, either QA or users will retest the change. QA/UAT approval is required to keep the changes; otherwise, they will be rolled back.<br />

In addition to the 8 stages of quality control:

9. Developers run automated JSLint code checks.<br />
10. We hire a third-party vendor to perform yearly security penetration tests.<br />

Despite all these checks, most developers don’t think we do enough quality control. We are currently working on adding an 11th layer of quality control using automated testing, and I am recommending to my boss that we use another automated tool (ReSharper) for quality control.
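As an illustration of what that automated layer can look like, here is a minimal sketch of a quality gate that promotes a build only when every check passes. This is hypothetical: the `QualityGate` class and all check names are invented, and each stand-in lambda would in practice invoke the real test runner or analysis tool.

```java
// Hypothetical sketch of an automated quality gate: the build is promoted
// only if every registered check passes. All check names are invented; each
// lambda is a stand-in for invoking a real tool (test runner, linter, etc.).
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.function.BooleanSupplier;

public class QualityGate {
    public static void main(String[] args) {
        Map<String, BooleanSupplier> checks = new LinkedHashMap<>();
        checks.put("unit tests", () -> true);      // stand-in for a real test run
        checks.put("static analysis", () -> true); // stand-in for JSLint/ReSharper
        checks.put("code review signed off", () -> true);

        boolean promote = true;
        for (Map.Entry<String, BooleanSupplier> c : checks.entrySet()) {
            boolean ok = c.getValue().getAsBoolean();
            System.out.println((ok ? "PASS " : "FAIL ") + c.getKey());
            promote &= ok;
        }
        System.out.println(promote ? "build promoted" : "build rolled back");
    }
}
```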

My understanding is that open source only has the first two layers and possibly automated testing. Is my understanding correct? [[User:AnonComputerGuy|AnonComputerGuy]] ([[User talk:AnonComputerGuy|talk]]) 07:48, 22 April 2014 (UTC)

:It's going to vary from project to project. A lot of small ones are run by one developer, using whatever process they want. Some larger ones are sponsored by corporations that have their own quality processes in place. The Linux kernel is controlled by one person, but tons of devs work on creating and testing updates before they get added. Here's a document describing the Linux kernel patch process: [https://www.kernel.org/doc/Documentation/SubmittingPatches] [[User:Katie Ryan A|<span style="border-bottom:solid #88F">K</span><span style="border-bottom:solid #d5f">ati</span><span style="border-bottom:solid #faa">e R</span>]] ([[User_talk:Katie Ryan A|talk]]) 12:15, 22 April 2014 (UTC)

:I think a different form of quality control exists in open sourced software. After a programmer writes and tests his code, he submits it and it goes on the list of available additions to the open-source code. Various people download and install it, test it out, and report the results on a wiki they've set up for such a purpose. If it gets good reviews, more people download it, and they might include it in a package with other bits of software that got good reviews. If it gets bad reviews, few people will, and they might even remove it from the list entirely. So, it's like what you'd call customer beta testing. The hope is that more testers will ultimately make for a better product. Also, the time pressure isn't the same with open-source code, so you can spend as long developing and testing as you want, no need to rush out some seriously flawed code. [[User:StuRat|StuRat]] ([[User talk:StuRat|talk]]) 15:21, 22 April 2014 (UTC)

:: I've been a programmer for over 40 years. I earn exceedingly good money doing it - and I've worked for companies with spectacularly good records for producing solid, reliable code, so I hope I speak to you with some experience.
:: There are many problems with the approach that our OP's company is taking here:

::# It's horribly expensive. Programming is never cheap - but doing this level of scrutiny has to be making it ten times more costly.
::# It's inevitably going to cause lots of delay. That will result in urgent bug fixes struggling through the ten layers of approval appearing weeks after they would ordinarily have been released. Depending on the business you're in - that could be a disaster.
::# Programmers '''HATE''' it. Your company ideology may make it impossible to speak up and say it. You may say that they should suck it up and do it - or you may say that this would just be unprofessional - but the fact is that if you want to recruit the best of the best, you're going to have a VERY hard time doing it if you tie them up in ten layers of red tape and stomp any signs of creativity into the dust. The result of that is that you get crappy programmers on your team...and now you NEED all of those layers of oversight because they are writing awful code and making a ton of bad design decisions. Programming is a unique field of human endeavor - the best programmers are easily 100 times more productive and 100 times more accurate coders than the worst...so by effectively rejecting those great talents, you're probably getting an error rate that's 50 times worse than it could be - and that's why you need all of those layers of red tape just to pull it back to something relatively sane. The biggest problem with programmers is communication between them - having ten grade-A programmers instead of a hundred grade-C programmers reduces the inter-programmer communications a hundredfold...and that's going to drastically lower the opportunities for screwups.
::# Your testing is only as good as the specification documents that describe what the software should do. Unless you have at least this much scrutiny on specifications, all of this is a complete waste of time.
::# Making it this hard to get a change into code is a strong disincentive for your programmers to refactor code that's perfectly functional but inefficient or hard to understand. That means that your code will get harder and harder to understand - and this is by far the biggest cause of problems over the long haul.

:: The company I work for has one layer of QA testing and one layer of end-user testing. We employ the best programmers money can buy and spend the least we can on red tape. We have an excellent record for solid code and we can turn out changes rapidly and be very light on our feet - since our overheads are low, we are very profitable. I've also worked in shops with higher levels of red tape - and I've found it strongly counter-productive. The OpenSource model proves that. Most OpenSource code is extremely high in quality - despite having essentially zero of the steps you describe.

:: That said, it all depends on what you're doing. If you're writing video games, then a not-very-serious bug may be largely unimportant. If you're writing the flight control code for a 747 airliner or the control code for a nuclear reactor - then the kinds of scrutiny you employ are highly recommended because lives depend on there not being hidden bugs.

:: Consider the steps you've put in place here:

::# ''Developer unit-tests code'' -- (To pick a ridiculously simplistic example...) If the programmer who is writing code to calculate the square root of a number doesn't realize that you shouldn't take the square root of a negative number - so he fails to put in an error check for that case - then he's not going to include the test that attempts sqrt(-1) in his unit test data...so this approach never finds the cases he hadn't thought of when he wrote the code...so this doesn't work very well. Basically, he only writes test cases for the error cases he knows about...and those (of course) work just fine. Ideally, test cases come from some requirements document - but you need to review your requirements with at least as much oversight as you review the code that implements that requirement. If the requirements for the square root code say "Shall produce an error message if the input parameter is less than zero" - then it'll get tested for - but if your requirements also fail to note that you can't take a square root of a negative number - then the error will likely go all the way through into production when some hacker with more brains than your team wonders whether you've tried that.
::# ''Second developer does code review'' -- See [[Rubber duck debugging]]. Basically, the second developer falls asleep while the original author explains his code. Sometimes, in the course of explaining it, the first programmer finds his own bug...but it's far from certain.
::# ''QA tester tests the code in our test environment'' -- This is probably very effective at finding problems, but only if you have really good QA guys. If you're paying your QA guys a third of what you're paying your programmers - then you probably don't have good QA guys.
::# ''Users test the code in our test environment. QA approval is required before going to the next level.'' -- Who are these "users"? They probably do the routine operations they almost always do - and those (of course) work OK - the problem cases are in the unusual use patterns, which probably won't appear until the software is used by people who are not in your focus group.
::# ''QA tester retests the code in our integration environment.'' -- If these are the same people who did step (3), they'll probably run the exact same tests - so the odds of them finding a bug that wasn't there in round (3) are small. If your "integration environment" differs greatly from the environment that your programmers originally did their own testing in - then that's something you should urgently fix! Encourage people to commit early, commit often, so that integration isn't a big step. If you're putting code together that has never been together before on the programmer's desk then you should expect huge problems because the programmer (who finds more bugs than anyone else!) never got a chance to experience them. A model of continuous code improvement is FAR better than alternating development and integration steps. Scrum-based approaches, where a usable, integrated codebase is maintained more or less continuously, are the modern way to do this.
::# ''Users retest the code in our integration environment. Both QA and UAT approval is required to go to the next level.'' -- Same problem as with step (5).
::# ''Immediately after going to production, either QA or users will retest the change. QA/UAT approval is required to keep the changes, otherwise, they will be rolled back.'' -- Same problem as with step (5).

:: I very much doubt that you're getting fewer bugs than if you just did steps (1), (3) and (4)...and I'm certain that all of the red tape is shackling the best minds you have and scaring away the best you might have. I bet that encouraging code refactoring rather than discouraging it would yield massive improvements that layering on more red tape is going to prevent.
:: [[User:SteveBaker|SteveBaker]] ([[User talk:SteveBaker|talk]]) 18:26, 23 April 2014 (UTC)
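To make the square-root example above concrete, here is a hypothetical sketch (the `SafeSqrtDemo` class and `safeSqrt` method are invented for illustration). The guard against negative input only exists because somebody - a requirement, a reviewer, a tester - thought of the case; the author's own happy-path tests never would have caught its absence.

```java
// Hypothetical illustration of the sqrt(-1) point above: the author's own
// unit tests only cover the cases he thought of while writing the code, so
// the negative-input guard must come from requirements or review, not self-test.
public class SafeSqrtDemo {
    // The guarded version a requirements review would have demanded.
    static double safeSqrt(double x) {
        if (x < 0) {
            throw new IllegalArgumentException("input must be non-negative: " + x);
        }
        return Math.sqrt(x);
    }

    public static void main(String[] args) {
        // The tests the author actually wrote: only the cases he thought of.
        System.out.println("sqrt(4) = " + safeSqrt(4.0));
        System.out.println("sqrt(0) = " + safeSqrt(0.0));

        // The case he never thought of -- exactly what a hostile user tries first.
        try {
            safeSqrt(-1.0);
            System.out.println("sqrt(-1) accepted (bug!)");
        } catch (IllegalArgumentException e) {
            System.out.println("sqrt(-1) rejected: " + e.getMessage());
        }
    }
}
```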

:::My background is as a semiconductor engineer who mostly used code written by others and occasionally wrote code for which no off-the-shelf application existed. Also as a disaster volunteer who has to work in conditions of limited or no infrastructure. I think the comment by SteveBaker, 'Who are these "users"?' is critical. The "users" selected for testing new software typically work in well-equipped offices with the latest computers and operating systems. They are more likely than the average business user to have administrator privileges on the computer they use for testing (but not always). When the average or below-average user gets the released software, installs it on his personal XP laptop from 2005 (the one his teenage son set up and wisely did not give Dad administrator privileges), and hauls it to a brush fire that just burned down the local cell phone tower, that's when the software will get a real workout. [[User:Jc3s5h|Jc3s5h]] ([[User talk:Jc3s5h|talk]]) 19:00, 23 April 2014 (UTC)

::::As far as Administrator privileges go, they should have an Admin login where they can set up test data, etc., and a user login with no special privileges, which they use for the actual testing. [[User:StuRat|StuRat]] ([[User talk:StuRat|talk]]) 15:24, 24 April 2014 (UTC)

:::I completely agree with this. I worked with our other software engineers as new hires, reviewing changes at first, but now I'm confident that they understand the projects they're maintaining well enough to make proper changes without review. They also understand the limits of their knowledge of the software and the systems it controls, and know how to ask the right questions when they do need help. The work they've done so far seems very high quality, with a very low rate of problems showing up later that trace back to their changes. That's how it should be when you take care to hire skilled developers and encourage an atmosphere where they feel they can learn the system and have control over how they do things. We could hire more developers for code review, but I wouldn't trust their review skills unless they were as skilled and comfortable with our code as the current developers, in which case I would rather have them working on their own tasks and getting twice the rate of bug fixes and new features than accept a very minor decrease in the number of bugs that slip through in new development. [[User:Katie Ryan A|<span style="border-bottom:solid #88F">K</span><span style="border-bottom:solid #d5f">ati</span><span style="border-bottom:solid #faa">e R</span>]] ([[User_talk:Katie Ryan A|talk]]) 13:55, 25 April 2014 (UTC)

:Open-source software has no inherent quality control, and neither does closed-source software. Open vs. closed source is only about whether the source code is accessible to everyone. Whatever policy a programmer/company/group follows has nothing directly to do with the software being open- or closed-source. [[Special:Contributions/87.78.28.247|87.78.28.247]] ([[User talk:87.78.28.247|talk]]) 21:09, 25 April 2014 (UTC)

::Opening the code up to all means more people can do code reviews, allowing for easier detection of potential problems. [[User:StuRat|StuRat]] ([[User talk:StuRat|talk]]) 13:10, 26 April 2014 (UTC)

:::Is that a good thing? I've heard that the NSA has a thousand full-time developers looking for security vulnerabilities. How many does Russia have? How about China or North Korea? How about professional groups such as the mob? [[User:A Quest For Knowledge|A Quest For Knowledge]] ([[User talk:A Quest For Knowledge|talk]]) 00:17, 28 April 2014 (UTC)

::::Most bugs don't cause security problems, and, for those that do, it might be better if more people know about them:

::::1) This allows users to know the code is compromised, and stop using it until a security fix becomes available.

::::2) This puts pressure on the developers to fix the bug quickly, and do a better job next time. [[User:StuRat|StuRat]] ([[User talk:StuRat|talk]]) 00:24, 28 April 2014 (UTC)

== Changing new tab default page in Google Chrome ==

When I downloaded Yahoo Instant Messenger, it apparently snuck in some crap I don't want. It set Bing to be my default search engine, home page, and the page that pops up when I open a new tab. I was able to fix most of that, but it still comes up, via something called "Conduit", when I open a new tab in Google Chrome. How do I get rid of it, hopefully replacing it with Google ? O/S is Windows 7, 64 bit. [[User:StuRat|StuRat]] ([[User talk:StuRat|talk]]) 13:32, 22 April 2014 (UTC)

:It's in Options - Settings. Click on the lines at the top right of the Chrome window to find these. [[Special:Contributions/217.158.236.14|217.158.236.14]] ([[User talk:217.158.236.14|talk]]) 14:23, 22 April 2014 (UTC)

::I went through the settings, that's how I fixed everything else. But I didn't find a setting for the page you get when you open a new tab. Where is that set ? [[User:StuRat|StuRat]] ([[User talk:StuRat|talk]]) 15:13, 22 April 2014 (UTC)

:You've probably installed some sort of malware. My wife got hit with it a few days ago through a fake Flash update. It also blocked things like system restore and Windows Defender. After removing it (used Win 8 System Reset because we didn't feel like spending time fighting the infection), the home page and search provider settings came back because of Chrome's cloud sync, but since the infection was gone she could set it back. [[User:Katie Ryan A|<span style="border-bottom:solid #88F">K</span><span style="border-bottom:solid #d5f">ati</span><span style="border-bottom:solid #faa">e R</span>]] ([[User_talk:Katie Ryan A|talk]]) 14:41, 22 April 2014 (UTC)

::StuRat, it's in Appearance / New Tab.[[Special:Contributions/217.158.236.14|217.158.236.14]] ([[User talk:217.158.236.14|talk]]) 08:01, 23 April 2014 (UTC)

:::I have Google Chrome version 34.0.1847.116 m, and when I go to Settings + Appearance, I don't get a "New Tab" option. I get "Get themes" and "Reset to default theme" buttons and check boxes for "Show Home button" and "Always show the bookmarks bar". Under "Show Home button" is an option to change the web page, but I changed that to Google, and it had no effect on the page where a new tab opens. [[User:StuRat|StuRat]] ([[User talk:StuRat|talk]]) 14:03, 23 April 2014 (UTC)

:::: Here are the settings I get: http://i.imgur.com/jCVznPW.jpg It could be that your administrator has disabled this option, if you are on a work computer. Sorry I couldn't give you a definitive solution. [[Special:Contributions/217.158.236.14|217.158.236.14]] ([[User talk:217.158.236.14|talk]]) 15:41, 23 April 2014 (UTC)

:::::Those are the same options I get. I think you misinterpreted what they do, though. They allow you to specify what the Home Page button does, one option of which is to go to the New Tab page. So that's the reverse of what I want, which is to set the New Tab page to go to the Home Page. [[User:StuRat|StuRat]] ([[User talk:StuRat|talk]]) 16:23, 23 April 2014 (UTC)

::::::There is a button that lets you just reset all browser settings. It's annoying because it will disable any extensions, clear saved passwords and reset your cookies, but it will definitely get rid of the setting. When I search for "new tab" the only hits I get are the option to open the new tab on startup and the reset button, which mentions changing the new tab page back in its warning message. Navigating Chrome's settings has always annoyed me... [[User:Katie Ryan A|<span style="border-bottom:solid #88F">K</span><span style="border-bottom:solid #d5f">ati</span><span style="border-bottom:solid #faa">e R</span>]] ([[User_talk:Katie Ryan A|talk]]) 16:52, 23 April 2014 (UTC)

:::::::Yea, I was about to do that before I decided to post here first and see if there was a way to avoid the "nuclear option". [[User:StuRat|StuRat]] ([[User talk:StuRat|talk]]) 15:27, 24 April 2014 (UTC)

:Yahoo might have tossed an Extension in that's messing with your default preferences. Go to Tools->Extensions, and see if there's any Yahoo branded stuff there, trash it if there is.
:If that's not the case, you may have to go into the Windows Control Panel and "Uninstall a Program," then see if there's any kind of Yahoo Search application installed. If so, removing that program and restarting Windows should get Chrome back to its default New Tab page. &mdash; <b>[[User:HandThatFeeds|<span style="font-family:Comic Sans MS; color:DarkBlue;cursor:help">The Hand That Feeds You]]</span>:<sup>[[User talk:HandThatFeeds|Bite]]</sup></b> 18:48, 25 April 2014 (UTC)

::I tried both. The extensions are all things I can identify that are not related to this problem. In "Add/remove programs" I removed everything Yahoo or Conduit related, but that didn't fix the problem, even after a reboot. There are some programs I can't identify, but I'm reluctant to remove those, if I don't know what they do. [[User:StuRat|StuRat]] ([[User talk:StuRat|talk]]) 20:17, 25 April 2014 (UTC)

:::Hm. A few searches found a program called "Spigot" that seems to be doing what you describe. Also an extension called "YouTube Downloader" apparently hijacks your search to get their developer a cut of any ad links you click on. Might be worth running a spyware checker, like MalwareBytes. &mdash; <b>[[User:HandThatFeeds|<span style="font-family:Comic Sans MS; color:DarkBlue;cursor:help">The Hand That Feeds You]]</span>:<sup>[[User talk:HandThatFeeds|Bite]]</sup></b> 16:14, 26 April 2014 (UTC)

::::Yea, it plays some clip of Ellen DeGeneres and wants me to click on it. Not gonna happen. Is that the best anti-malware program these days ? [[User:StuRat|StuRat]] ([[User talk:StuRat|talk]]) 16:29, 26 April 2014 (UTC)

UPDATE: Success ! I found the bastard code under Remove Programs listed as "Search Protect". I didn't remove it previously because it sounded unrelated. But when I clicked on it, it said the publisher was "Conduit", and that showed up in the URL for the new tab page, so I removed it, rebooted, and now my new Chrome tab is back to Google, where it was originally set. Thanks all for the suggestions, I will mark this one resolved. [[User:StuRat|StuRat]] ([[User talk:StuRat|talk]]) 19:01, 27 April 2014 (UTC)

{{resolved}}

== The Flash Crash- the explanation. ==

Hi, Your explanation of the Flash Crash of May 6, 2010 is incorrect. There is no published work to use as a reference because all the answers from professors to media outlets, the sec, etc. are not true. I have sent our material to the sec, many professors, all the media outlets, investigative journalists and they all refuse to get involved. They don't want their government funding, jobs and careers to change. We have the entire explanation of the Flash Crash and the code that caused it. This is the time to clear up this issue. Many of the people who talk about the Flash Crash do not know what caused it and are just repeating what they've been told. It is a beautiful code written by a brilliant programmer.

I don't want to go into details about the code here because it's a public venue and I don't want our material stolen. One more thing, the stock market goes up and down everyday because of this code. The direction of the market is known 4-5 days ahead. The Flash Crash was broadcast to the insiders starting on Tuesday May 4, 2010.

The published papers on the crash are all incorrect and many do not answer the question. I have contacted some of these people and given them my material. Their papers are still on the internet.

I will reveal everything we have. Again we do not know who receives this information but we do know who controls the feed that delivers the code.

Jimmy Wales has a requirement on Wikipedia that your material must be backed up by published papers. That idea would be great if the published papers were reviewed by others and allowed to be criticized. It's not easy to get published. You have to know someone, have a PhD or be someone who has a respected position. I can tell you now that by doing that the public doesn't get a chance to question any explanation. If he/she said it it must be true. That's not what a free country is all about.

I look forward to hearing back from all of you. You will not be disappointed. Our documentation of the code is perfect. This information is not our original material. It is not our code so please don't use that as an excuse not to look at what we have. Also if you all don't understand the material then don't let that stop you. It's not baby food. It should be vetted out in the public by many people so the professors and all the others can't hide behind their positions.

Thank you, Patty

[[Special:Contributions/38.121.16.160|38.121.16.160]] ([[User talk:38.121.16.160|talk]]) 15:51, 22 April 2014 (UTC)

:The place for this is on the talk page for that article. However, I'm skeptical that you can consistently know the direction the market will take 4-5 days in advance, as that day's events will certainly have an effect. [[User:StuRat|StuRat]] ([[User talk:StuRat|talk]]) 16:14, 22 April 2014 (UTC)

:Wikipedia may not be used for [[WP:SOAPBOX|telling the world about your company, band, charity, religion or great invention]]. --[[User:ColinFine|ColinFine]] ([[User talk:ColinFine|talk]]) 22:46, 22 April 2014 (UTC)

== Retro-Bit USB joystick still doesn't work on Linux ==

I downloaded the fix to the Retro-Bit USB joystick adapter on Linux from [https://github.com/robmcmullen/hid-atari-retrobit this page], but it doesn't work. The module builds OK, but attempting to install it gives errors:

# rmmod ./hid-atari-retrobit.ko; rmmod usbhid; insmod ./hid-atari-retrobit.ko ; modprobe usbhid
Error: Module hid_atari_retrobit is not currently loaded
libkmod: kmod_module_get_holders: could not open '/sys/module/usbhid/holders': No such file or directory
Error: Module usbhid is in use

The readme file talks about testing the joystick with <code>jstest /dev/input/js0</code>, but even though I seem to have the <code>jstest</code> program, no device <code>/dev/input/js0</code> shows up. The joystick still works like before: I can only move right and down, not left or up. What should I do here? [[User:JIP|<font color="#CC0000">J</font><font color="#00CC00">I</font><font color="#0000CC">P</font>]] &#124; [[User talk:JIP|Talk]] 15:54, 22 April 2014 (UTC)

== Why does Java hate me ==

Sometimes I use Java-based software online (e.g. games, and not just from one site). The thing is, it loves to crash. It happens on my desktop (64-bit Windows 7), laptop (32-bit Windows 8), and same desktop when it ran 32-bit Bodhi Linux on a different hard drive. I have used three different web browsers as well (Firefox, Chrome, Midori). All three of the computers use very current hardware. The weakest link in the current desktop setup that I'm writing this from is RAM at 12GB, but that should be far more than enough.

Each time I look for help it tends to involve uninstalling, reinstalling, or updating Java. All of these have been done multiple times so, since the problem has caused me grief for at least 2 years now, this has applied to several versions of Java, including the most recent.

It doesn't always crash, and once in a while I can make it a good 45 minutes to an hour without it happening, but it happens regularly enough to be a pain. I haven't been able to tie it to any other resource heavy programs or processes running at the same time, but it certainly happens more frequently when running more than one Java program.

The only thing I can do is to kill the Java process, close the browser, and reload the page that launches Java (or kill the process of the independent [downloaded] program and relaunch it).

When I run Java with the console open, the console just freezes up, too, without giving me any information.

Ideas appreciated. --&mdash; <tt>[[User:Rhododendrites|<span style="font-size:90%;letter-spacing:1px;text-shadow:0px -1px 0px Indigo;">Rhododendrites</span>]] <sup style="font-size:80%;">[[User_talk:Rhododendrites|talk]]</sup></tt> |&nbsp; 17:18, 22 April 2014 (UTC)

:It would significantly narrow down the space of solutions if you can distinguish between two types of crashes: a crash of the [[Java VM]], and an unhandled runtime exception in the Java application or applet. Do you know how to tell these two very different problems apart? Once we know which is occurring, we can help debug your problem.

:::Basically, we need to find the crash log. If you are running Java from the command line, this will be the last few lines printed to the terminal when your application "goes away." Essentially, if the error log looks like [http://www.oracle.com/technetwork/java/javase/crashes-137240.html#gdywn this], with a bunch of # hashmarks and a statement about "Java VM" then you've hit a VM crash. We'll definitely want the text of that message. If this occurs, the bug is in Java itself (or, in a native library used by the application, applet, or plugin container).
:::Alternately, if the last lines print out a Java backtrace, the crash happened inside the application. Java backtraces are very verbose, and include a lot of symbolic package names (you'll see exactly which piece of application logic failed).
:::If you don't know where to get the Java crash log, or if you don't run the program in a terminal, check your System Event Log on Windows. [[User:Nimur|Nimur]] ([[User talk:Nimur|talk]]) 14:55, 23 April 2014 (UTC)
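A hypothetical sketch of the second case (the `CrashKindDemo` class and `parseScore` method are invented for illustration): an unhandled exception inside application code fills in a symbolic backtrace naming the failing method, which is how you tell it apart from the `#`-prefixed hs_err_pid log a VM crash leaves behind.

```java
// Demonstrates the application-level failure mode: an exception carries a
// stack trace whose symbolic frames name the application method that failed,
// unlike a VM crash, which writes an hs_err_pid<N>.log full of '#' lines.
public class CrashKindDemo {
    static int parseScore(String s) {
        return Integer.parseInt(s); // throws NumberFormatException on bad input
    }

    public static void main(String[] args) {
        try {
            parseScore("not-a-number");
        } catch (NumberFormatException e) {
            // Scan the backtrace for our own frame: this is the information a
            // Java backtrace gives you that a VM crash log does not.
            boolean inOurCode = false;
            for (StackTraceElement f : e.getStackTrace()) {
                if (f.getMethodName().equals("parseScore")) {
                    inOurCode = true;
                }
            }
            System.out.println("exception: " + e.getClass().getSimpleName());
            System.out.println("trace names parseScore: " + inOurCode);
        }
    }
}
```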
A possible thing you could try is to give Java a bit more memory to run in. There are switches which you can set when you run from the command line. I'm not quite sure how you would set these when it's a browser plugin though.--[[User:Salix alba|Salix alba]] ([[User talk:Salix alba|talk]]): 15:24, 23 April 2014 (UTC)
:Haven't found any log. From what I'm seeing it's supposed to start with hs_err_pid. Well, I searched everywhere for that (and variations) with no luck. This means, I presume, that the VM is not crashing? Also, Salix alba, do you mean the amount of temporary space I give it? It's already at the maximum. --&mdash; <tt>[[User:Rhododendrites|<span style="font-size:90%;letter-spacing:1px;text-shadow:0px -1px 0px Indigo;">Rhododendrites</span>]] <sup style="font-size:80%;">[[User_talk:Rhododendrites|talk]]</sup></tt> |&nbsp; 01:09, 25 April 2014 (UTC)
::That means it's ''probably'' an application bug, and not a Java VM bug (but, without a log, we lack proof). It ''implies'' that the fault lies with the application developer - to whom a bug report is due. It also implies that if you monitor the Java console when the crash occurs, the useful log information will be in there.
::Salix Alba suggests increasing the [[Memory management#Dynamic memory allocation|Java heap size]], as discussed here: [http://docs.oracle.com/cd/E13222_01/wls/docs81/perform/JVMTuning.html#1131866 Tuning JVM Parameters]. That is a common fix to a common problem, and helps if the application consumes a lot of memory. But, it will only make a difference if the root problem is actually because your application needs more memory than it was allocated. You would [http://docs.oracle.com/cd/E13222_01/wls/docs81/perform/JVMTuning.html#1109778 set heap size] with a command like:
$ java -XX:NewSize=128m -XX:MaxNewSize=128m -XX:SurvivorRatio=8 -Xms512m -Xmx512m
:::[[User:Nimur|Nimur]] ([[User talk:Nimur|talk]]) 14:04, 25 April 2014 (UTC)
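Whether flags like those actually took effect can be confirmed from inside the VM. A small sketch (the `HeapCheck` class name is invented for illustration):

```java
// Reports the heap ceiling the running JVM was given, so you can confirm
// that a flag like -Xmx512m was applied (run as: java -Xmx512m HeapCheck).
public class HeapCheck {
    public static void main(String[] args) {
        long maxBytes = Runtime.getRuntime().maxMemory();
        System.out.println("max heap: " + (maxBytes / (1024 * 1024)) + " MiB");
    }
}
```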

{{hidden begin|title=Side-discussion regarding Java's history}}
I have moved all our side discussion, which regrettably is distracting from our ability to help [[User:Rhododendrites]] resolve his technical issue. [[User:Nimur|Nimur]] ([[User talk:Nimur|talk]]) 19:17, 26 April 2014 (UTC)
:<small>As an aside, I take mild [http://docs.oracle.com/javase/tutorial/essential/exceptions/ exception] to the question - because Java ''isn't'' terrible. Some of the brightest software engineering minds of the 20th century worked to create Java, but when [[Sun Microsystems]] became insolvent as a standalone business, those programmers found employment elsewhere; particularly when the Java technology platform was acquired by [[Oracle]]. A band of inept marauding hoodlums now occupy the hallowed ground of Sun Microsystems' headquarters, and they [http://mashable.com/2012/04/07/facebook-hq/#gallery/the-new-facebook-hq/521293f65198406611000738 didn't even bother to take the sign down] - they just painted over it like vandals. [[James Gosling]] [https://duke.kenai.com/gun/Gun.jpg ''barely'' survived] for almost six months inside [[Don't be evil|the evil beast]], including a [http://nighthacks.com/roller/jag/entry/i_m_alive near-death experience with a P-51 Mustang], before he realized Google was awful and was [[Dalvik (software)|killing Java]], so he bailed and moved to Hawaii to program robot Java submarines. So, Java may be suffering from [[bitrot]] on your operating system, but Java itself is not terrible.</small>
:[[User:Nimur|Nimur]] ([[User talk:Nimur|talk]]) 04:08, 23 April 2014 (UTC)
::Hmm. I don't know. What's the best way to tell the two kinds of crashes apart?
::<small>Also, fair enough. :) Heading changed.</small> --&mdash; <tt>[[User:Rhododendrites|<span style="font-size:90%;letter-spacing:1px;text-shadow:0px -1px 0px Indigo;">Rhododendrites</span>]] <sup style="font-size:80%;">[[User_talk:Rhododendrites|talk]]</sup></tt> |&nbsp; 05:40, 23 April 2014 (UTC)

::The Java browser plugin was always bug-infested, even when Sun maintained it. The security model also has [http://blog.cr0.org/2010/04/javacalypse.html serious design flaws] that mean that even a bug-free implementation would probably be unsecurable. It's a good idea to keep Java-in-the-browser disabled by default, if you install it at all.
::Regarding Gosling's short time at Google (if that's what you're talking about), [http://nighthacks.com/roller/jag/entry/i_ve_moved_again he said] "I had a great time at Google, met lots of interesting people, but I met some folks outside doing something completely outrageous, and after much anguish decided to leave Google." Since he's a living person, we should probably leave it at that unless you have incontrovertible evidence that he's lying.
::As an apparent supporter of open-source software you shouldn't have been rooting for Oracle in ''[[Oracle v. Google]]'', the case where Oracle tried to assert control over Dalvik. Any ruling in favor of software patents tends to be bad for open source, and a precedent establishing copyrightability of APIs could have been seriously problematic for, say, Linux. -- [[User:BenRG|BenRG]] ([[User talk:BenRG|talk]]) 19:53, 23 April 2014 (UTC)
:::<small>IMO, Java is quite horrible, too. I'd even call ''all'' destructorless languages "horrible", unless you just want to hack together some casual games and stuff. Trying OO programming without destructors is like driving a car without brakes. Omitting them is a disaster waiting to happen. - '''''¡Ouch!''''' (<sup>[[User_talk:One.Ouch.Zero|hurt me]]</sup> / <sub>[[Special:Contributions/One.Ouch.Zero|more pain]]</sub>) 08:24, 25 April 2014 (UTC)</small>
:::We're way off topic; but for the record, I was rooting for OpenJDK, an entity that remained unrepresented in the legal proceedings between Oracle and Google. But, there are [http://nighthacks.com/roller/jag/entry/my_attitude_on_oracle_v nuances to the issue] that are pretty complicated. And there is a reason why both Mr. Gosling and I believe that Oracle held the moral high-ground - which I will grant is a rarity - in this particular instance. Here's a direct quote: [http://news.cnet.com/8301-1035_3-57423538-94/oracle-google-trial-puts-ex-sun-execs-on-opposite-sides/?tag=rb_content;contentBody "Just because Sun didn't have patent suits in our genetic code doesn't mean we didn't feel wronged. While I have differences with Oracle, in this case they are in the right. Google totally slimed Sun. We were all really disturbed..."] [[User:Nimur|Nimur]] ([[User talk:Nimur|talk]]) 23:27, 23 April 2014 (UTC)
::::Whatever sliming may have taken place in the past, the main issues in this case were software patents and copyrightability of APIs. It would have been bad for open source if Oracle had won. I don't know why people think that revenge is such a supremely moral act that it trumps any collateral damage. -- [[User:BenRG|BenRG]] ([[User talk:BenRG|talk]]) 20:30, 24 April 2014 (UTC)
:::::BenRG, we are severely off-topic. But, if you honestly believe that Google's Android and Dalvik platforms are free and open-source software, then I challenge you to: (1) find the source-code for Android; (2) compile it; and (3) run it on a commercially-available device that would otherwise be able to run a binary distribution of Android. If you succeed at even the ''first'' of these tasks, I would be very happy to know about it; you can respond here or on my user-page. When you invariably fail, I think you might understand why the actions of Google in this matter are less-than-benevolent. They have stolen intellectual property, stolen commercial source-code from a vendor, redistributed binary versions of open-source software without making the source-code available (in defiance of the software license); and have broadcast a worldwide marketing campaign claiming that they are great promoters of free software. Yet, they won their court-case, so according to American law, Google is, for legal purposes, completely "right." [[User:Nimur|Nimur]] ([[User talk:Nimur|talk]]) 20:38, 24 April 2014 (UTC)
::::::I don't see what Android or Dalvik being or not being open source has to do with the issues BenRG has raised. It seems clear that their point was that software patents and copyrightable APIs are bad for open source. While obviously not everyone is going to agree on this, I think many FLOSS proponents do, so it seems a valid point in relation to the discussion.
::::::Clearly, if you are opposed to the idea of software patents or copyrightable APIs, and you believe this is what the case was mostly about, then the idea of any 'stolen intellectual property' or 'redistributed binary versions of open-source software without making the source-code available (in defiance of the software license)' is nonsense. In fact I expect even the FSF would agree on the latter issue: while they want the GPL to be enforceable when needed, it's unlikely they want this at the expense of expanding copyright. (I don't think many are going to seriously suggest APIs can be copyrighted when the code is released under a FLOSS licence but not when it is proprietary. ''Edit: There is of course the fact that with published source code you can simply copy the uncopyrightable bits, whereas when the source code isn't published you may need to reverse-engineer those bits. While I suspect many copyleft FLOSS proponents think this is unfortunate, I don't think many would suggest there is any legal solution.'')
::::::The fact that Google may not be benevolent is largely beside the point. Sure, their motto may be nonsense (did anyone dispute that?), but it doesn't mean it's silly to support them when you agree with their POV on the legal issues, even if their motives are entirely selfish and you generally dislike them and much of what they do, or even a lot of what led up to the legal case.
::::::To put it a different way, it's entirely plausible a large majority of FLOSS supporters will support Microsoft in a case between them involving their proprietary code and some open source software developer because of the legal principles at stake.
::::::(The idea that a legal case comes down to likeability or who's more 'evil' is one of the reasons America's civil trial system, with its extensive reliance on juries, is frequently criticised.)
::::::[[User:Nil Einne|Nil Einne]] ([[User talk:Nil Einne|talk]]) 17:40, 25 April 2014 (UTC)
:::::::Let's take further discussion of these complex issues to my talk page, or let's terminate the discussion, because it's not helping the OP. I apologize for my role in sidelining the discussion in the first place. [[User:Nimur|Nimur]] ([[User talk:Nimur|talk]]) 19:18, 26 April 2014 (UTC)
{{hidden end}}

== Is there any desktop environment (for PC Linux) without X Window? ==

Is there any desktop environment (for PC Linux) without the X Window System? [[Special:Contributions/201.78.176.96|201.78.176.96]] ([[User talk:201.78.176.96|talk]]) 17:45, 22 April 2014 (UTC)
:See [[X Window System#Competitors]]. I believe that [http://www.maui-project.org/ Maui] is a Linux distribution with a full-featured desktop environment which does not use X. [[Unity (user interface)|Unity]] in [[Ubuntu (operating system)|Ubuntu]] will have an option (or default) to not use X in a future release (version 8 of Unity).

:It might not be what you were wanting, but [[Android (operating system)|Android]] uses a Linux kernel but not X. There are versions of Android for desktop computers.-<span style="font-family:cursive; color:grey;">[[User talk:gadfium|gadfium]]</span> 22:14, 22 April 2014 (UTC)
::Also [[Chrome OS]] works on a standard PC and its desktop environment isn't X-based, but it may still ship with X and its desktop may still be drawn in a single full-screen X window (I'm not sure). -- [[User:BenRG|BenRG]] ([[User talk:BenRG|talk]]) 18:07, 23 April 2014 (UTC)

= April 23 =



Revision as of 00:28, 28 April 2014

April 23

OpenSSL

Apropos of this "heartbleed" thing, how can it be that something so critical to the operation of modern society can be left to a group of "11 members, of which 10 are volunteers, with only one full-time employee", with development of critical functionality apparently left in the hands of some random developer, with obviously no proper checking whatsoever? How is it that major companies tolerate using a system developed in such a half-arsed and amateurish way? 86.128.2.169 (talk) 02:34, 23 April 2014 (UTC)

To be fair, many people use alternative software products to implement secure transport (SSL and its ilk) that were not affected by the CVE-2014-0160 vulnerability. Many commercial operating systems do not use OpenSSL, and those software companies hire their own software teams to implement or integrate alternative versions of the SSL protocol. This obviously does not mean that such software is free of defects; but it's quite a mischaracterization to suggest that the volunteers at the OpenSSL team are the sole provider of this type of service. They are simply the most popular provider of a free software solution. Consider Dropbear, which is also free and open-source software.

OpenSSL is distributed under a license that expressly disclaims liability and states that the software is "as-is" with no guarantee of fitness for any purpose. This isn't just legalese nonsense - it means that any person or company who chooses to use OpenSSL is accepting the fact that its creators are not paid to provide support or to accept liability.
One advantage of commercial software - whether it is free software or not - is that a business arrangement can be made to assign liability. That means that a client can hold the software-provider accountable - and can bill them for financial damages - if the software has a defect.
Commercial software providers who accept such terms would be unwise to incorporate software that they can't be held accountable for. Software companies hire their own experts, which means that far more than a small team of volunteers looks over such projects.
As a perfect example: my credit union (in which I have obvious financial stake) performed a full internal audit in the wake of the Heartbleed bug; and they sent me a fantastic summary report replete with technical details. Their computer experts verified that OpenSSL was not ever used on any of our servers; and therefore our financial data was never jeopardized by the CVE-2014-0160 vulnerability. But here's the juice - as a client, I don't need to care if my credit union screwed up, or if they used open-source software, or if an open-source programmer screwed up... because if any of those screw-ups happened, then the financial institution is liable, and I am insured (it is a federally accredited, NCUA-insured institution). If their misfeasance with software caused my money to get lost, I can legally get my money back.
As a stockholder in the union, though, I definitely care that they've done the right thing and taken precautions! I prefer that the credit union follows best-practices, provides transparency and accountability, and minimizes their liability, because that means that our group isn't losing money in the aggregate.
So, in this case, we have accountability at so many layers, from the financial transactions to the software vendor who provides the server infrastructure, all the way to the individual retail-banking-style members. We pool our resources to make sure we have the right technical and legal experts to protect our communal assets. Our credit union doesn't depend on ten or twelve open-source-software volunteers to watch our backs for us. I emphatically hope that everyone else's financial institutions are as diligent and transparent!
Long story short - whoever told you that "the whole world" is banking on ten or twelve volunteer open-source programmers has completely misled you.
Nimur (talk) 03:32, 23 April 2014 (UTC)
Most of the software that's powering the Internet was written (and donated for free) by unpaid programmers. Most web servers run Linux with Apache doing the web serving, MySQL doing database handling, PHP doing page generation (this is such a common combination, we use the "LAMP" acronym as a shorthand way to say it). A good chunk of people use Firefox and Chrome to view the resulting content. The software you're using right now to run Wikipedia ("MediaWiki") is entirely OpenSourced and written by volunteers.
Linux alone contains around 16 million lines of code - and it's estimated that for a commercial organization to rewrite it would cost around $1.3 billion. It is absolutely certain that there are horrible security breaches to be found there - and it's more than likely that new breaches are being created at about the rate that old ones are fixed!
But the sad fact is that software written by giant corporations is rarely much better. Recall the SPECTACULAR cost of the Y2K problem - scarcely any OpenSourced software fell vulnerable to that. Y2K cost the world around $300 billion ($400 billion at today's money value) to clean up...heartbleed is scarcely a blip compared to that. The recent Target security breach caused 40 million credit cards to be compromised...and we're talking names, numbers, expiration dates, home addresses and the CVV codes - bad publicity lost Target 3% of their business for over a month - which is hundreds of millions of dollars in losses - other similar breaches in entirely commercial software have caused hundreds of millions of credit cards to be compromised! All of these dwarf OpenSSL's problems.
It's truly unfair to point to the authors of OpenSSL when the problem is more or less universal. Any piece of software more than a few thousand lines long is more or less certain to have bugs of some kind...many of which are remotely exploitable. The problem with commercial software is that the owner of the code may seek to cover up the problems and could take a very long time to come up with a solution. With OpenSSL, the bug was fixed within hours of being reported and the patch was available for people to download within less than 12 hours. The reason for that speed is that when the source code is available for anyone to look at and update, fixes get done rapidly and the need to install the fixes is widely broadcast.
Consider this breach. The companies affected by the problem reported a problem with software that's used for around 40% of all VISA and MasterCard payments in August 2008 - it wasn't until they called in the US Secret Service and two companies who specialize in network security that they found the problem in mid-January 2009. In terms of potential damage, that's horrific.
Heartbleed has hit the news mostly because it's relatively comprehensible to the layman (Here is a cartoon that does a pretty good job of it: http://xkcd.com/1354 ) and it seems so obvious. But that's just 20/20 hindsight. There are millions of bugs out there just waiting for someone to exploit them - most of them would require nothing more than a one-line fix - and most could be found if only someone had the time, money and enthusiasm to seek them out. I very much doubt that any sizable piece of software that runs the web infrastructure is perfectly secure for that reason.
As security holes go, heartbleed is only patchily useful. When you write the exploit code (which is really very easy), all you get is a big pile of utterly random binary garbage back - you still have to recognize that some sequence of bytes is a security code or a credit card number or a password rather than (say) the partial contents of an image file containing a photo of the company's cat. That's decidedly non-trivial. Other bugs allow you more direct access into the target machine and are likely to be of more interest to serious bad guys.
SteveBaker (talk) 17:28, 23 April 2014 (UTC)
It's not hard to find credit card numbers or server private keys in data extracted via heartbleed. Both searches can be automated, and tools are in the wild now allowing script kiddies to do it.
Some buffer-overrun bugs are subtle. The check might be invalidated by integer overflow, or by a later change to seemingly unrelated code. Heartbleed was not subtle. It was a bare memcpy in brand-new code whose length was simply not checked at all against the size of the source buffer. If you're doing a security audit of C code that contains a memcpy, this is the first thing you look for (well, the second thing, after the destination-size check). The people who allowed this code into OpenSSL without checking it for buffer overruns shouldn't be responsible for security-critical code. This is "20/20 hindsight" in the same sense that the sudden bankruptcy of a financial institution makes you realize in hindsight that the people running it were never competent. -- BenRG (talk) 19:00, 23 April 2014 (UTC)
SteveBaker, most of Linux wasn't written by unpaid volunteers. Take a long, hard, un-propagandized look at the list of people who have commit access to the kernel. Take a look at how many of those people are on the payrolls at Intel, or IBM's Linux Technology Center, or are professors at universities who receive government grants to perform research and development on computer systems. Most of the hardware drivers available for Linux, and built into Linux, are produced by salaried employees at hardware vendors. A handful of projects actually are run by real volunteers - but "most of Linux" is free software because certain companies believe that free software is good for business.
And even MediaWiki, which is now free and open-source software, is most actively developed by people who are salaried employees of the Wikimedia Foundation. Nimur (talk) 20:58, 23 April 2014 (UTC)
There is definite misfeasance at play here. It's irresponsible for developers to release anything to production without first having it tested by a separate group of QA testers. Developers are not QA testers any more than they are UX experts. There's a serious problem in our industry. A Quest For Knowledge (talk) 23:59, 23 April 2014 (UTC)
The OP could equally have asked: how can a billion-dollar company with thousands of employees convince so many millions of people to pay out good money, time after time, for software that is defective by design - globally costing its customers millions of dollars each month to mitigate its inherent vulnerabilities - only for them to find that they are then forced over to a new version and have to start all over again? Microsoft: Let’s Talk About Heartbleed® (Reported by Our ‘Former’ Security Chief) While the World Migrates From XP to GNU/Linux --Aspro (talk) 01:07, 24 April 2014 (UTC)
Fact check on that article, Aspro: the article you linked claims that Apple products use OpenSSL. Apple's official Cryptographic Services Guide on its developer webpage states: "although OpenSSL is commonly used in the open source community, it does not provide a stable API from version to version. For this reason, the programmatic interface to OpenSSL is deprecated in OS X and is not provided in iOS." If you consult Apple's opensource webpage, the last Apple operating system that shipped OpenSSL went to market four years ago. Modern operating systems (which are available at no charge) use a different implementation for cryptographic services. More recent systems use something called a shim layer - the source-code is available for inspection here: osslshim from Apple's Open Source page. That shim can help software that expects the OpenSSL API to work, while using a different implementation underneath - most of which is also open-source software, available for inspection or hobby-work. However, the official developer page suggests that industrial-strength software written for Apple platforms ought to instead write directly to the cryptographic services API, which is more robust and platform-portable to iOS.
Microsoft's operating system uses SChannel, not OpenSSL. The official statement from Microsoft to its clients and developers, Information about HeartBleed and IIS, provides additional technical information about why Microsoft products are not affected by (CVE-2014-0160). Obviously this does not mean that these commercial software products are totally free of defects, but it solidly refutes the claim that either infrastructure is vulnerable to the bug people call "Heartbleed."
While I'm nitpicking, the TechRights article primarily cites itself as its reference for a variety of other claims. I don't consider it a very reliable article. The point is, your link claims that the vulnerability affects Microsoft and Apple products - but that is an unfounded claim: neither NIST's information page nor MITRE's information page corroborates it. Your article claims that Microsoft intentionally placed backdoors in its operating systems, but cites its own publication. It makes several other inflammatory claims that are equally unfounded. I do not believe it meets the standards we set for reliable sources. Nimur (talk) 21:12, 24 April 2014 (UTC)
What exactly are you attempting to get across: “the last Apple operating system that shipped OpenSSL went to market four years ago”? Are you saying it is my fault (and that of many hundreds of thousands of others) that my Apple is more than four years old? Also, Apple has just fixed their non-OpenSSL implementation; Apple Fixes Serious SSL Issue in OSX and iOS. No logo and web site were created for that one, were there? Microsoft's Azure has lots of OpenSSL installations on their servers that they have encouraged their clients to install. Next: does software wear out after four years? Power stations and other capital installations are still running vintage PDP-11's with original software - 35 years on. There are so many pots calling the kettle black here that it is no surprise the average Joe can't differentiate fact from FUD. Which is the very confusion that FUD sets out to achieve in the minds of the average Jane and John Doe. I think that maybe the Jane geeks see through the FUD but are content to let the little boys argue and fight about who has the better toys. There is only one cure for this: I see, I do, I understand. --Aspro (talk) 22:19, 24 April 2014 (UTC)
Apple releases Heartbleed fix for AirPort Base Stations A Quest For Knowledge (talk) 22:07, 24 April 2014 (UTC)
Thank you for the information and the news articles. I did some homework, because I was the original nit-picker, and I always like to re-verify my previous statements for accuracy. Apple's official article on this topic, Knowledge Base HT6203, confirms that a firmware update for AirPort Base Station to address CVE-2014-0160 was made available for products that Apple shipped in 2013. The firmware update mentions CVE-2014-0160, but it does not explicitly confirm whether OpenSSL was even used on the product, let alone whether it was actually affected! Let me emphasize that as an unprivileged user without access to all the firmware source-code for AirPort Extreme, I can only speculate! But, there is strong evidence to suggest:
  • ...if OpenSSL were present on that device, it seems plausible that it would have been 0.9.8, a version of the software so old that it pre-dates the portion of code in which "Heartbleed" existed... in other words, OpenSSL 0.9.8 did not actually contain the CVE-2014-0160 vulnerability. (And let me re-state that I don't know what version, if any, ever actually shipped on those WiFi products; I just read the fine print in the EULA: "Certain components of the Apple Software, and third party open source programs included with the Apple Software, have been or may be made available by Apple on its Open Source web site... You may modify or replace only these Open-Sourced Components; provided that ... you otherwise comply with the terms of this License and any applicable licensing terms governing use of the Open-Sourced Components." I bet there aren't too many programmers who actually do! I've also suspected that most people don't build their Linux from source, or compile their own operating system for their telephone... so why would most ordinary people waste their time munging their WiFi router?)
  • ...since Apple released a firmware update without releasing source, we can only speculate what changes (if any) were made - all we know is that the update "addressed" the vulnerability! The release-note states that the feature was not even used in the default configuration that ships with the product: "Only AirPort Extreme and AirPort Time Capsule base stations with 802.11ac are affected, and only if they have Back to My Mac or Send Diagnostics enabled." I still scrambled to verify all my units to determine if an update was required. And I would be absolutely fascinated to see a write-up from anybody who actually attempts to exploit an AirPort Extreme running firmware 7.7.1 or 7.7.2: what exactly do the attackers have the practical ability to extract, in light of the qualified statement "information from process memory"?
I'm actually surprised that nobody brought up CVE-2014-1266 - a recently-fixed, totally unrelated SSL bug in a totally unrelated library, that affected numerous products in their default configuration.
Nimur (talk) 19:39, 26 April 2014 (UTC)
  • So at the end of the day, the biggest threat to the internet that ever existed turns out to be just another little bug that would have passed unnoticed and been fixed, had it not coincided with XP's end of life. And the security company that broke the embargo are patting their own backs for all the publicity they have gained at the expense of security. Let us prepare then (no, not you specifically, but the hoi polloi), for the next time Apple finds it has a maggot or Microsoft is found to have left its Gates open. Let's create a website for that vulnerability with a striking, snappy Heartbleed-type name and logo, and invent some FUD of our own. --Aspro (talk) 15:46, 27 April 2014 (UTC)

Possible software conflicts?

Can installing an add-on JDK or JRE on Windows 8 cause Microsoft Flight Simulator X to become non-operational? If so, are there any JDKs or JREs out there that are known NOT to have this effect? Thanks in advance! 24.5.122.13 (talk) 04:53, 23 April 2014 (UTC)

I've never had a problem having Java installed alongside that game. Palmtree5551 (talk) 16:49, 23 April 2014 (UTC)
In my case, shortly after I installed Java in order to activate a chemical drawing program I needed, FSX crashed so badly that it wouldn't even uninstall or reinstall properly, much less run -- I had to nuke and pave my system to get this resolved. But I don't know if this was because of Java, or for some other reason. 24.5.122.13 (talk) 22:27, 23 April 2014 (UTC)

Looking for recommendations for a proxy server that runs on Windows

I want to run a proxy server on my LAN for the following two reasons:

  1. To block ads. If I can block ads at the LAN level, this saves me the trouble of installing multiple ad-blocking apps across all my browsers and computers. Also, if I block ads at the LAN level, this should also block ads on my mobile devices such as my iPad and my Chromebook.
  2. To monitor all network traffic. After reading that 40% of iOS and 41% of Android banking apps accept fake SSL certificates, I want to know which of my mobile apps are using SSL and which ones aren't.

I am looking for something that runs on Windows. And since I have no experience with proxy servers, something that has a good UI. Does anyone have any recommendations? I have never set up a proxy server, so I'm not sure what's good or what's commonly used. A Quest For Knowledge (talk) 22:45, 23 April 2014 (UTC)

FWIW, the Microsoft solution would be Microsoft Forefront Threat Management Gateway, but I doubt that would be practical or necessary for what you want to do. Vespine (talk) 22:53, 23 April 2014 (UTC)
I use Privoxy, but it doesn't have a configuration GUI as far as I know; you set it up by editing text files, which is not too difficult.
It sounds like you want to avoid apps that use SSL. That's probably a bad idea because whatever they use instead is likely to be worse than SSL, even SSL without certificate validation. What you really want is a proxy that will (optionally) try to mount a MITM attack on all SSL connections, so you can figure out which apps detect the attack. Privoxy doesn't do that, but this thread mentions a bunch of proxies that do. I haven't used any of them, though. -- BenRG (talk) 04:48, 24 April 2014 (UTC)
MITM linked for your convenience ;)- ¡Ouch! (hurt me / more pain) 08:17, 25 April 2014 (UTC)

April 24

Can't get my password

I have forgotten my password. When I click on the forgot password link and type in my email address, it appears to work, but I never receive the reset email. So I can't get logged in any more. — Preceding unsigned comment added by 76.184.156.59 (talk) 00:50, 24 April 2014 (UTC)

Did you register your email address when you created your account? (Oh, you haven't created one.) If not, then probably only the NSA and GCHQ know how to log in. --Aspro (talk) 01:24, 24 April 2014 (UTC)
Aspro, our querent hasn't got access to his/her account at present, so is posting anonymously. I take it, OP, that you didn't create a confirmed identity? CS Miller (talk) 12:31, 24 April 2014 (UTC)
  • ? The OP has been editing anonymously since 23 February 2014. Is it not reasonable to assume that they did not create an account in the first place? The anonymous OP 76.184.156.59 now appears to be up and running again, only this time as anonymous 76.184.156.59 (do I hear an echo?). (Never mind. I created a new account.) [1] anonymous 76.184.156.59. Maybe an admin with more diplomatic skill than I can muster may like to take him under their wing and guide him on how to create a proper account – should he so wish. --Aspro (talk) 16:57, 24 April 2014 (UTC)
Check your spam/junk folder, just in case. —Nelson Ricardo (talk) 01:44, 24 April 2014 (UTC)[reply]

Are you or your email provider using Yahoo? Have a look at this discussion. And this link explains it. - X201 (talk) 13:22, 24 April 2014 (UTC)[reply]

If you have more than one e-mail account, make sure that you're using the right one. Most sites will report that the reset mail was sent regardless of whether you used the right e-mail address. A Quest For Knowledge (talk) 13:44, 24 April 2014 (UTC)[reply]

Google custom date range disappears

Forgive me if this has been covered before. In the last day or so I appear to have lost the ability to customize dates when searching for content on Google using Opera. The menus have changed too, with a sidebar appearing on the left of the page, and some of the options, such as customise range (allowing a date-specific search, e.g., 2012), disappearing. Strangely, though, it hasn't changed in Chrome or Internet Explorer. Is there any way to tweak Opera so I can switch back to the previous version of Google, or am I stuck with it? Thanks. This is Paul (talk) 16:28, 24 April 2014 (UTC)[reply]

April 25

Reassignment of state IP addresses

A thread at WP:AN led me to [2] and [3], in which childish vandalism from several years ago has been attributed to UK state-owned IP addresses. Dynamic IP addresses are common for residential customers and schools, but does it ever happen that an IP address could be reassigned from a home or a school to a state agency? I couldn't find anything relevant in the IP address article. Nyttend (talk) 01:20, 25 April 2014 (UTC)[reply]

Large blocks of addresses are assigned to large internet companies, to regional authorities like ARIN, RIPE NCC, etc., and (often for legacy reasons) to a few organisations like universities which were early players on the internet when it began. RIPE etc. then carve off chunks of their space (still in the tens of thousands) for ISPs and large organisations like governments. Unusually, the DWP also has its own /8 block (16 million addresses), as noted by John Graham-Cumming here (I checked RIPE's database just now and it's still assigned). Beyond that, how a large organisation like a government assigns its addresses is its own affair (it's not necessarily a public record). Usually they'll subnet internally, with final assignments to individual machines a mixture of static (especially for servers) and dynamic (for mobile things like laptops). But it's very common for such large organisations to keep their internal network (everyone's workaday machine) behind a firewall which rewrites the address space (a big Network address translation) and with geographic distinctions pretended away (by a Virtual private network). For the UK that's the Government Secure Intranet, which most people in central government departments use. If someone in the government accesses the public internet, there need be no (permanent) relation between the IP address and an individual workstation; in practice it will work like an ISP, possibly (as the ISP in the UAE does) with many people NATed behind a handful of public addresses. This means that if it becomes necessary (e.g. for legal reasons) to trace who performed a specific action on the internet from such an address, it would require access to the logs of the firewalls and routers to figure out who. The Guardian article talks about changes made in 2009 - it may be difficult to do that backtracking after such a delay. 
In any event, even if the change could be ascribed to a specific computer, that's not enough to blame the person who would typically use that machine (everyone has that co-worker who thinks he's such a witty prankster). -- Finlay McWalterTalk 08:41, 25 April 2014 (UTC)[reply]
If you're wondering whether a current GSI address was, in 2009, assigned to an ordinary public ISP - probably not. When big blocks of addresses are assigned, that's usually permanent - the addresses are rare, and organisations don't want to give them up. To delay the IPv4 address crunch, some organisations (e.g. Stanford Uni) have voluntarily given up very large chunks they were formerly assigned. To answer your question for sure, for some specific block, would require digging around in the history of RIPE's assignments to that block - but it's unlikely. -- Finlay McWalterTalk 08:45, 25 April 2014 (UTC)[reply]
I'd hope that the people investigating this case checked that before they checked anything else. If the IPs didn't belong to the government at the time of the edits, I'm sure the articles covering this story would mention that.
In Talk:Hillsborough disaster#Wikipedia edits from government IP addresses an anonymous person says "those two address are firewalls that provide internet access (via network address translation) to pretty much the entire civil service". -- BenRG (talk) 20:42, 25 April 2014 (UTC)[reply]

Login pages not using SSL

Over the past few days, I've noticed a shocking number (six so far) of websites with login pages that aren't using SSL, including two that, ironically, were sending me newsletters promoting articles/webcasts about web security. I've been notifying the websites to let them know that they have a gaping security hole, but so far I have only heard back from one. In any case, I've been wondering to myself why browsers can't detect whether a login page is using SSL. I just noticed that Firebug has such an ability, so obviously it's possible. So here's my question: does anyone know of any plug-ins/extensions for the major browsers (IE, GC, FF, Opera and Safari) which check whether a login page is using SSL? I'm at the Chrome store and so far I'm not having any luck. A Quest For Knowledge (talk) 14:07, 25 April 2014 (UTC)[reply]

Update: I guess it doesn't have to be a browser add-in/extension. It could be an application or browser setting. I want something to automatically warn me if I'm at a login page not using SSL. A Quest For Knowledge (talk) 14:13, 25 April 2014 (UTC)[reply]
Maybe you're better off without it. Heartbleed. —Nelson Ricardo (talk) 08:04, 26 April 2014 (UTC)[reply]
Not really. On April 17th, 10 days after that bug was announced:
  • Top 1,000 sites: 0 sites vulnerable (all of them patched)
  • Top 10,000 sites: 53 sites vulnerable (only 0.53% vulnerable)
  • Top 100,000 sites: 1595 sites vulnerable (1.6% still vulnerable)
  • Top 1,000,000 sites: 20320 sites vulnerable (2% still vulnerable)
We're almost 20 days into the problem - I doubt there are many sites that are still vulnerable. So SSL isn't an ongoing problem - and OpenSSL is getting more scrutiny than any piece of software I can ever recall - so it's going to be the best choice going forwards.
To answer the OP's question, I wonder whether you could repurpose any of the MANY heartbleed bug detectors for your cause. After all, if the intended target isn't even running SSL, those tools ought to come up with a suitable error message - and that way, you'll know both whether it uses SSL and whether it's been patched for heartbleed - which is just as important. SteveBaker (talk) 13:48, 26 April 2014 (UTC)[reply]
@Nricardo: Login pages not using SSL are even worse than Heartbleed. If a website's login page isn't using SSL, that means that passwords are being sent as plain text. Anyone who can view Internet traffic can see these passwords. No hacking is even required. A Quest For Knowledge (talk) 21:18, 26 April 2014 (UTC)[reply]
Not that I'm particularly expert on web technologies, but as I understand it there is no reason why a page that was transmitted via plain HTTP cannot use a secure channel to do authentication - especially nowadays, when many websites really are Javascript applications, not passive HTML documents. --Stephan Schulz (talk) 21:41, 26 April 2014 (UTC)[reply]
Yes, websites that switch to HTTPS for logged-in users will normally send credentials over HTTPS even if the login page was served over HTTP, so you're probably safe from passive eavesdropping (e.g. Firesheep). But if any part of the login page is served over HTTP, an active attacker can inject malicious HTML or Javascript that will also send the credentials to the attacker (over HTTPS, even). I think that's what Firebug's warning is for. It's a much more difficult attack, though. -- BenRG (talk) 08:12, 27 April 2014 (UTC)[reply]
@Stephan Schulz: You are correct, it's the post back that needs to be encrypted, not the load of the login page itself. However, there are two problems with that. There's no way to know if the postback is using SSL unless you a) examine the source code or b) examine the HTTP request in Firebug (or some similar tool). Either way requires extra work on my part (which I am doing because I don't know of any simpler way) but I'm looking for a solution that automatically warns me without me having to think about it. Second, the average user doesn't have the technical skills to do either. A Quest For Knowledge (talk) 12:32, 27 April 2014 (UTC)[reply]
@BenRG: You bring up an interesting question (and something that I haven't researched yet). Let's assume that I am on a free WiFi network at Starbucks. If I log in to a website that sends my credentials unencrypted (i.e. not SSL), doesn't that mean that anyone else on that WiFi network can steal my login credentials? A Quest For Knowledge (talk) 12:38, 27 April 2014 (UTC)[reply]
It depends on whether other people in Starbucks are in promiscuous mode. You might very well think they are; I couldn't possibly comment. Thincat (talk) 12:56, 27 April 2014 (UTC)[reply]
There are alternative encryption mechanisms other than https and transport layer security. A secure encryption scheme can be implemented at the application layer, instead of at the transport layer; arguably, this is more secure for certain contexts, because the application developer can assert its integrity even when the local machine's network implementation is untrusted. This can be very important, for example, if secure data is accessed on a shared computer (i.e., if you want to protect the application data from everybody, including the local machine administrator). Think about an Automated Teller Machine - it'd be ludicrous if they used transport layer security only! Any time a software technician installed the ATM system, he'd be able to set up a root account and eavesdrop on data before it got to the network! This kind of application must implement encryption before the data is readable by local users, let alone network users. Such an application cannot depend on https; https may be used in addition, but arguably adds no additional security.
Let's look at this another way: if you're logging into a web browser and the user-interface crashes (SIGSEGV, or whatever); and a program crash log dumps in-memory contents to a crash report - and securely transmits that crash report to the open-source programmer who gave you your web-browser for free... do you believe that https protects your sensitive data from the programmer who has access to a memory image of your application's runtime state? In that case, sensitive data needs to be encrypted before it reaches the application. Either you need some specialized hardware like a trusted computing module - and a lot of faith in its correct and diligent implementation - and a trusted external service API to encrypt data before providing it to untrusted applications - or you need to do encryption by hand, on paper, before ever entering anything into the application UI. Think long and hard about how to design software for authenticating and authorizing a missile launch! It would be insane to think that a software programmer could be permitted to handle unencrypted data in that context. There is a single boolean bit - "launch now" - that has to be protected from almost everybody - including the programmer who writes code to set that bit - not just malicious network users who learned that WireShark is a thing that exists. HTTPS is entirely superfluous for that scenario.
So when you're staring at a website that, say, authenticates you to your bank, you might need to think really carefully about why https is required. If it is used "for security," and not "for privacy," then your website might not be as secure as it should be. Nimur (talk) 14:10, 27 April 2014 (UTC)[reply]
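To illustrate the kind of automated check being asked about in this thread, here is a minimal sketch (a hypothetical script, not an existing browser extension) that parses a fetched page and reports any form that contains a password field but submits to a non-HTTPS address:

```python
from html.parser import HTMLParser
from urllib.parse import urljoin

class LoginFormChecker(HTMLParser):
    """Collects the submit targets of forms that contain a password
    field but whose action does not use https."""

    def __init__(self, page_url):
        super().__init__()
        self.page_url = page_url
        self._action = None
        self._has_password = False
        self.insecure_actions = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "form":
            # A missing or empty action submits back to the page itself.
            self._action = urljoin(self.page_url, attrs.get("action") or "")
            self._has_password = False
        elif tag == "input" and attrs.get("type") == "password":
            self._has_password = True

    def handle_endtag(self, tag):
        if tag == "form" and self._action is not None:
            if self._has_password and not self._action.startswith("https://"):
                self.insecure_actions.append(self._action)
            self._action = None

def find_insecure_logins(page_url, page_html):
    """Return the non-HTTPS submit targets of any login forms in page_html."""
    checker = LoginFormChecker(page_url)
    checker.feed(page_html)
    return checker.insecure_actions
```

A real extension would also need to handle forms whose action is rewritten by Javascript, and (per BenRG's point above) warn when any part of the login page itself was served over plain HTTP, neither of which this naive sketch can see.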

SATA connector question

My new computer has four SATA connectors on the motherboard: white (or a very light beige), orange, light blue and dark blue. Of these, the light blue one is connected to the computer's optical drive, and the dark blue one is connected to the hard disk. The white and orange ones are free. Does it matter where I plug the second hard disk in? I intend to install Fedora 20 Linux on it, leaving the existing Windows 8 hard disk absolutely intact. JIP | Talk 15:14, 25 April 2014 (UTC)[reply]

You need to check the motherboard manual to be sure. On some chipsets, some of the SATA ports can be used for a RAID, and those ports are typically coloured differently (but if you don't have a RAID configured, they just work as normal ports). On newer motherboards, some ports may be SATA2 and some SATA3 (e.g. this one), and again the colours and screenprinting on the circuit board should help identify which is which. And some motherboards contain two SATA controllers (even though these might both be in the same southbridge chip), and again the colours will denote that. I don't know of a standard colour scheme. But mostly you can just plug in anywhere and it'll probably work fine. -- Finlay McWalterTalk 15:26, 25 April 2014 (UTC)[reply]
It will work anywhere, but the hard disks should be on the fastest ones. Dark blue indicates the faster SATA, so my guess is that the light blue is also fast. My suggestion is to put the second HD on the light blue SATA connector and move the optical drive to one of the others. Bubba73 You talkin' to me? 00:37, 26 April 2014 (UTC)[reply]

April 26

Restore Previous Versions do not work.

Hi there,

I have this set up: Windows 7==>Oracle Virtual Box==>Ubuntu VM. About a week ago I turned the machine on and found that the moment I start the VM the whole host system freezes. I posted at a Linux forum but got no resolution.

Then I decided to "Restore previous versions." I clicked on the VirtualBox folder (Oracle subdirectory) and got a list of dates, some of which certainly go back far enough, and I would like to restore to one of those dates. However, the Restore button is dimmed, so I cannot use it.

I do backups fairly regularly and wanted to use one of the previous system images to restore the whole system to its previous state. In particular, I did a complete backup last on Apr 12, 2014, and I do have a System Repair Disk, but when I tried to use the whole-system restore I got only one option: Apr 23, which is just 3 days ago. I do not need that.

What is wrong with the system?

Thanks, --AboutFace 22 (talk) 15:23, 26 April 2014 (UTC)[reply]

P.S. My apology. I just clicked on a checkbox "Show More Restore Points" and got a slew of them. It is in the System Restore from Control Panel. So this problem has been canceled, however I would feel much better if I could restore only a particular folder, not the whole system. Thus the first part of the post is still valid. --AboutFace 22 (talk) 15:33, 26 April 2014 (UTC)[reply]

P.P.S. I restored the system to a date I considered well before the first occurrence of the freezing problem - it did not help. I reinstalled VirtualBox (repaired it) - it did not help. So, logically, the problem must be in Ubuntu. I will have to reinstall it also. --AboutFace 22 (talk) 18:42, 26 April 2014 (UTC)[reply]

OK, let us know if that works, please. StuRat (talk) 23:30, 26 April 2014 (UTC)[reply]

Notepad++ help

I have a CSS file which appears horizontal (all of its content is written in one row, with no line breaks). Is there any way to automatically fix such a problem? Thanks. Ben-Natan (talk) 23:14, 26 April 2014 (UTC)[reply]

I believe that's a problem with line feeds versus carriage returns. Some programs use one as the end-of-line character, and some use the other. I suggest you try WordPad; I seem to recall it using the other character. StuRat (talk) 23:28, 26 April 2014 (UTC)[reply]
It's a CSS file, Wordpad can't help here... anything else you could suggest? Ben-Natan (talk) 01:14, 27 April 2014 (UTC)[reply]
I don't have Notepad handy right now, so I can't test it but wouldn't Notepad's soft wrap work to make it more legible? Dismas|(talk) 01:23, 27 April 2014 (UTC)[reply]
http://www.styleneat.com/ --  Gadget850 talk 01:39, 27 April 2014 (UTC)[reply]
Notepad++ (not to be confused with Notepad) automatically detects LF, CRLF and CR line endings, so that isn't the problem. CSS files are often optimized/obfuscated by removing unnecessary white space including line breaks. -- BenRG (talk) 07:46, 27 April 2014 (UTC)[reply]
I agree, it's more likely that the CSS file was minimised to reduce the size by removing extra spaces and line breaks. There are numerous websites which you can paste the CSS text in (such as Styleneat suggested by Gadget850 above) and have it reformatted in a multiline format, and then you can paste the output back into Notepad++. --Canley (talk) 11:41, 27 April 2014 (UTC)[reply]
Quite honestly, if you have a significant and technical need for editing raw text documents, then Notepad (in any revision) is a horribly poor choice. I don't know any professional web worker or programmer who'd remotely consider using it. You might want to check out Comparison of text editors - you'll see that Notepad has more "red" entries (indicating unsupported features) than almost anything else. There are lots of superb tools out there that cost $0 and run on every kind of computer under the sun - there is absolutely no reason to continue to suffer with Notepad in any of its various incarnations. SteveBaker (talk) 00:04, 28 April 2014 (UTC)[reply]
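As a rough illustration of what the reformatting sites mentioned above do: a minified stylesheet can be re-expanded by inserting a line break after each `{`, `;` and `}`. This is a naive sketch that ignores comments, strings and `@media` nesting, not a replacement for a proper CSS formatter:

```python
def prettify_css(minified):
    """Naively re-expand minified CSS: newline after '{', ';' and '}',
    with rule bodies indented by four spaces per nesting level."""
    out = []
    indent = 0
    for ch in minified:
        if ch == "{":
            out.append(" {\n" + "    " * (indent + 1))
            indent += 1
        elif ch == ";":
            out.append(";\n" + "    " * indent)
        elif ch == "}":
            indent -= 1
            out.append("\n" + "    " * indent + "}\n")
        else:
            out.append(ch)
    return "".join(out)
```

For example, `prettify_css("a{color:red;font:10px}")` spreads the rule over four lines with the declarations indented.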

April 27

Notepad++

If I want to delete empty lines in a document, then, according to the second answer here, I should go to Edit > Line Operations > Remove Empty Lines. For some reason I don't have such an option in my Notepad++. Why would that be? Ben-Natan (talk) 06:49, 27 April 2014 (UTC)[reply]

Are you using the latest version? It looks like this menu item was added in v6.2.3. -- BenRG (talk) 07:50, 27 April 2014 (UTC)[reply]
I didn't even think it would matter. You proved me wrong, Monsieur BenRG! Thank you! 109.67.164.204 (talk) 08:10, 27 April 2014 (UTC)[reply]
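For anyone stuck on an older Notepad++ without that menu entry, the same cleanup is trivial to script outside the editor. A hypothetical sketch (drop lines that are empty or whitespace-only):

```python
def remove_empty_lines(text):
    """Return text with empty and whitespace-only lines removed."""
    return "\n".join(line for line in text.splitlines() if line.strip())
```

Older Notepad++ versions can also do this with a regular-expression search-and-replace, though the exact pattern depends on the file's line endings.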

iPhoto

I want to copy all my iPhotos from my Mac laptop to a flash disc whilst retaining their present positions in Groups, Holidays, Family etc. How do I do that please? 85.211.136.204 (talk) 18:50, 27 April 2014 (UTC)[reply]

April 28