Talk:Computer insecurity

From Wikipedia, the free encyclopedia

Suggested Merge

I suggest that this article be merged into Computer security, because "computer insecurity" is not really a bona fide term in the industry. Urgh. With few exceptions, most of the top references coming up in search engines are Facebook entries or (minor-player) pundit blogs. Not everything necessarily has an opposite. For example, you wouldn't make a "Country Non-music" article as an opposite of Country Music. If computer security really needed an opposite, a better term would be computer vulnerability or, more accurately, software vulnerability. Cheers. -- Abraham Lincoln

Agreed. This article is confusing. Someone please nominate it for deletion and we can have a proper vote on this.
Seconded. This term is gibberish.
Third.

Definition?

'Computer insecurity' could be seen as the designing-in of features which make a computer insecure. This differs in concept from computer security measures: it refers to the addition of features which are fundamentally unwise or ill thought out from a security point of view, and which make the computer more likely to contain vulnerabilities.

Examples:

Hidden filename extensions (the user can't tell the file is an executable)
Auto-running of executables on CD or USB memory (self-explanatory)
Using C/C++ with its buffer overflow risk (just about any other choice would be better; see the sketch below)
Allowing websites to compile and run code on the computer (Java, etc.)
Software which malfunctions unless run as Administrator, forcing the removal of security measures
Building SQL queries directly from web form responses (code injection risk)

None of these practices is a vulnerability in itself, but all of them lead to a high risk of unnoticed vulnerabilities existing. All could, in principle, be avoided, but rarely are. Thus, insecurity is built into the system by their presence. A computer designed with security in mind would eschew such features.
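To make the C/C++ item above concrete, here is a minimal sketch (illustrative only; the function names are invented): the unsafe version copies untrusted input into a fixed-size buffer with no length check, exactly the kind of latent, easy-to-miss weakness meant here, while the bounded version avoids it.

    #include <stdio.h>
    #include <string.h>

    /* Illustrative only: the buffer-overflow risk that comes with C/C++. */
    void greet_unsafe(const char *name) {
        char buf[16];
        strcpy(buf, name);              /* no length check: input longer than 15
                                           characters overruns buf and can corrupt
                                           the stack */
        printf("Hello, %s\n", buf);
    }

    void greet_safer(const char *name) {
        char buf[16];
        snprintf(buf, sizeof buf, "%s", name);  /* bounded copy: long input is
                                                   truncated instead of overflowing */
        printf("Hello, %s\n", buf);
    }

Neither function is a deliberate vulnerability; the point is that the unsafe pattern looks perfectly normal and is easy to miss in review.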

-A reasonable definition? --Anteaus (talk) 20:03, 16 February 2013 (UTC)[reply]

timeline idea

We need to find some way to add a new section for the underground movement, with a timeline and references to Phrack, to 2600, to the BBSs, etc.

It already exists at the hacking article. — Preceding unsigned comment added by 71.249.208.149 (talk) 19:33, 27 August 2012 (UTC)[reply]

viewpoints

I've been looking at this page, and I completely agree with you that it needs a major rewrite. However, I'm not sure how to accomplish it, as what already exists is far too 'advisory' and not enough 'to the point', in addition to being very "morally correct".

It would be a good thing to have a rewrite, imho.


While it's unfortunate, I'm temporarily moving a lot of useful and well-written content to /Talk. It unfortunately presented a one-sided view of computer security. I hope we'll make a better and more balanced article. --Taw

Old article:

Computer security refers to the measures taken to ensure that only authorized persons have access to the data in a computer system. As computer systems hold ever more valuable data, the importance of computer security grows. As systems grow ever more complex, this objective, like security in the real world, remains forever unattainable.

A determined thief can successfully rob the best-guarded of banks. A determined computer criminal can read, copy, alter or destroy data in the best-secured computer. As in the real world, the best you can do is make it more difficult, changing the cost/benefit equation for the criminal. You can reduce the effects of data loss by careful backups and insurance. And you can further change the cost/benefit equation by pursuing the criminal after the attack.

The only real difference between computer security and real-world, "bank" security is that computer systems are, as a rule, poorly understood. Managers usually have a firm grasp on real-world security issues, like fences, walls, security personnel, alarms, police, etc. And if they do not, their insurance companies do. Computer systems are often not insured against data theft or destruction, so this "security consulting" is lost. This lack of insurance for so potentially important a loss is in itself noteworthy. It probably stems from this same lack of knowledge, although the cause may be more complex.

A teenager wandering into a warehouse to pick up a trophy and show it to his friends is not treated in the real world as a dangerous criminal. If such an "explorer" enters the company computer system, management can go ballistic, and the trespasser, if apprehended, risks prosecution. This lack of knowledge is potentially the biggest risk in a company. Of course they will have competent technical personnel, but they will tend to concentrate on the technical side of the issue. Social engineering, for example, will probably be ignored.

Of course, the parallel between computer security and real-world security is not exact, for a number of reasons. For example, vandalism is more dangerous in the computer world because it is potentially much more destructive. A vandal can cause havoc in thousands of computer systems around the world with little effort and small risk of capture. Of course the result would not be as visually satisfying as a graffitied wall, but all things considered, what is really surprising is the small probability you have today of suffering damage from a computer virus or computer worm.


Today, computer security consists mainly of "preventive" measures, like firewalls. We could liken a firewall to building a good fence around your warehouse. A good first step, but not enough if you leave the fence unguarded (no monitoring), or if you hand a copy of the key to everybody who asks for it by phone (social engineering). If, to add insult to injury, it's widely known that you won't prosecute any trespasser, we could consider the firewall installation almost an exercise in futility. Yet many computer systems are not monitored, and the number of computer criminals actually brought to justice is abysmally low. In that situation, it's no wonder you have no insurance; the premium would be enormous.

Along the same lines of reasoning, it's good to have an antivirus program, but rather pointless if your users open any and all executable attachments they receive by e-mail. Opening an executable attachment is the same as opening the door to your system, with your user privileges, to whoever sent you that attachment.

In short, lack of computer security today is a multi-pronged menace to which a multi-faceted defense is the only response. Buying an off-the-shelf software package is no substitute for a careful evaluation of the risks, the possible losses, the counter-measures and the security policies, done at a high enough company level.

Interceptions

With respect to the risk that "your users" lack the education, training, and good habits needed to avoid opening executable attachments delivered by e-mail, the discussion above describes a situation in which the end users of a computer, or a computer network, have direct access to all the sins of the Internet. That is what you find on a system that was insecure from the start and never fixed.

  • My workplace has a computer network through which our scores of users connect to the Internet and to our business databases, which handle all kinds of applications, like engineering drawings, quality documents, payroll, and back-office accounting, each of which has different security business rules.
    • When an individual PC connects to the network, a script runs to check that the PC meets certain standards. For example, suppose it is a laptop that has been out in the cold cruel world and picked up some virus that we do not want jumping across the network to a co-worker's PC. The script finds this and blocks the PC on behalf of the whole network, with a log that informs the IT administrators who is using a PC that appears to be infected, and with what.
    • When e-mail arrives, before it goes to the individual addressees, there is anti-virus, anti-spam, etc. protection at the network level, where tens of thousands of such unwanted e-mails get intercepted and viruses quarantined. The end users might only get 2-3 false negatives a day, at most.
  • My ISP filters spam and virus attachments before they get to me. There is a risk of false positives preventing mail I might have wanted from reaching me. Before they implemented this, I did have anti-spam, anti-virus, etc. protection on my own e-mail, and it was a constant struggle to adjust the rules to avoid false positives and deal with new kinds of unwanted e-mail that the protections were not intercepting. The effect of my ISP's service has been to reduce the unwanted mail significantly, from hundreds a day to the 2-5 that slip past. If I ever move to another ISP, what they do in this area will be one of the essential criteria in selection.
  • Where different people in a household have access to the same computer, security settings are generally available so parents can set controls on the trouble young children can get into. Given the rapid changes in the computer world, this stuff should really be managed by the teenagers in the family.

AlMac|(talk) 20:06, 6 October 2005 (UTC)[reply]

Can correctness be proven?

I toned down the claims that code correctness can be proven by adding the qualification that critics of the so-called Star Wars projects, among others, claim that correctness cannot be proved. In any event, to write from an NPOV, statements of belief or allegations are not sufficient. If you think correctness can be proved, provide links to arguments in support of this that a reasonably qualified person could accept. Graham Chapman

In theory, correctness can be formally proven. However, the number of systems which have undergone a formal proof of correctness numbers fewer than the number of thumbs I have. This maps to the A1 level of security in the old Orange Book. -- Tall Girl 21:59, 28 May 2006 (UTC)[reply]
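As a toy illustration of what a formal correctness proof asserts (a made-up example, not drawn from any evaluated system), consider a Hoare triple: if the precondition holds before the program runs, the postcondition holds afterwards.

    \[ \{\, n \ge 0 \,\} \quad s := 0;\ \mathbf{for}\ i := 1\ \mathbf{to}\ n\ \mathbf{do}\ s := s + i \quad \{\, s = n(n+1)/2 \,\} \]

The proof obligation is to exhibit a loop invariant (here, s = (i-1)i/2 at the top of each iteration) and show that it yields the postcondition when the loop exits. Doing that for a whole operating system rather than a five-line loop is what makes A1-style evaluations so rare.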

types of attacks

Of the 4 basic attacks, the article focuses on the 3 that are hardest to do. Social engineering, eavesdropping, and even denial of service are extremely expensive. The one that's very easy is code exploits.

Why is that? Well, it's certainly not because we lack "correctness proofs" of code. Formal proofs are pretty much useless. They're fine for NASA, but nobody else can afford them. And they're certainly not error-free, since specifications of software are hard to get right. Yeah, you know that the program is equivalent to the specification, but do you even know what the specification means? What's the point of proving software if you can't trust the proof?

The basic problem with computer security is that there are no security semantics. We still use access control lists, which were proven insecure way back in the 70s or 80s. Not only that, but they were also proven to be deceptive. Access control lists give the illusion of a kind of security which it is provably impossible to ever guarantee.

There are good security semantics; they're called capabilities, and they've been in research OSes for the past two decades. They haven't made it into commercial systems because capabilities make managers nervous; they don't provide the illusion of (provably impossible) control which ACLs do.
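A rough sketch of the distinction (illustrative only; Unix file descriptors are the everyday approximation of a capability): under the ambient-authority/ACL style a function names the resource itself and the OS decides based on who the whole process is, so any code running as the user can reach anything the user can reach, while under the capability style a function can only use the specific handle it was explicitly given.

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    /* ACL / ambient-authority style: the callee names the file, and access is
       decided from the identity of the whole process (the user's ACL entries). */
    void log_acl_style(const char *path, const char *msg) {
        FILE *f = fopen(path, "a");       /* could be any path the user may write */
        if (f) { fputs(msg, f); fclose(f); }
    }

    /* Capability style: the caller hands over an already-opened descriptor; the
       callee can write to that one object and cannot name any other file. */
    void log_cap_style(int fd, const char *msg) {
        write(fd, msg, strlen(msg));
    }

In a real capability OS the handles are unforgeable and there is no global namespace to fall back on; the sketch only shows the shape of the two interfaces.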

By the way, good security doesn't just mean preventing bad people from doing to your stuff what they have no right to do. It also means that you have the ability to do what you have every right to do. Otherwise, the most "secure" computer is one locked in a vault, turned off.

Oh, and "giving someone a program to run which then takes over their computer" is a type of attack. If you consider this attack social engineering then it means social engineering attacks can be prevented (which actually they can). And if it's a code exploit then that too can be prevented (with caps, not proofs). The distinction between a trojan horse attack and a "code exploit" should also be explained. They're not the same as far as the user is concerned. -- Ark

Your statements about proofs are true, but misleading. Complete program correctness does not need to be proven in order to gain security; only a core set of operations needs to be guaranteed to behave correctly. Example: using a proven capability system is a proof of security. Even if I've missed something there, my main point that proofs don't have to be all-or-nothing still stands. If some software uses proven components, whole classes of exploits could not happen.
And I did get it wrong. A program could still give a cap to something that it shouldn't, but that feels like a simpler problem to diagnose and fix than the current crop of exploits.

phraseology distortions

While I was fixing some links, and adding to the wonderful section on financial costs, I noticed some unfortunate phraseology.

Then I added some subcategories to make it easier to navigate to specific threads I wished to comment on.

The reality is that 99% of the computers in the world are delivered to their customers with little or no security, installed by many people who do not realize the security implications, and then we embark on a lifetime task of trying to improve the security. Much work is needed to further improve the tools available to those of us in this reality.

However, perhaps 1% of the computer systems in the world are designed by the hardware and software vendors to be secure from the start. How often do you see the IBM AS/400/iSeries operating system listed with vulnerabilities? Perhaps once every 2-3 years, compared to new vulnerabilities every few days for the other systems used by 99% of the world. Once upon a time there were IBM PCs like that, but because including effective security and quality engineering adds to the purchase cost, and because the marketplace demands lower- and lower-priced computers and software, these highly secure systems that almost never crashed were driven out of business by the stuff that has to be rebooted daily. IBM is still very much in business, but its PC division is totally gone: IBM was unwilling to abandon its commitment to quality, and the only way to compete in the PC business was to deliver products whose quality was in the toilet, so IBM got out of that business.

Anyhow, there are systems out there that are extremely secure in comparison to the 99% reality. The challenge for them is how to cope with the kind of nonsense we have to deal with when we network those highly secure systems into the reality that the 99% of computer users dwell in.

But with respect to this article, there are a number of places where people talk about the 99% reality and say something to the effect that a lot more work has to be done, or that the science of computer security is in its infancy. That is not a valid statement. What is in its infancy is the challenge of adding good security to a computer system that was delivered without it.

There are in fact several areas of challenge.

  • What I call outsiders ... people who are not authorized to be doing things to the system; the source of the malware, hacking, phishing, etc. You deal with this by using anti-virus software and more sophisticated firewall security suites.
  • What I call insiders ... people who are authorized to be using the system for certain purposes but who use it in an unauthorized way. This calls for security within the software applications to control who may do what, and for computer security audits to make sure the internal controls have been properly implemented.
    • This reality is important both on the 1% of computers that are delivered with good security, which may not have been implemented wisely, and on the 99% of computer systems for which security is something to be added on after purchase.
  • There are mixtures, like insiders who fall for outsider scams such as the Nigerian scam, whose purpose is to get your banking info and drain everything from your bank account. A trusted employee might use the company account number.

AlMac|(talk) 19:23, 6 October 2005 (UTC)[reply]

PC vs. PA

When we first get a personal auto, we buy based on the cheapest purchase price, for many reasons. Then we discover that over the life of the car, the purchase cost is a small portion of the total ownership cost, as more and more money is spent on maintenance, wear and tear, insurance, and the combined impact of gasoline prices and fuel economy.

We see other kinds of autos around us, and we educate ourselves, so the next car we get has a higher purchase cost but a lower overall ownership cost, because it needs maintenance much less often. We learn that by changing inexpensive oil more often, the engine needs less costly maintenance. We learn more, and our cost of ownership gets less and less from car to car.

With computers, there is much less exposure to what other people are using, so users do not get this continuing education after the first purchase: they never learn that there is other stuff out there where, by paying a higher purchase price, you can get a system whose rate of needing repairs is microscopic compared to that of the cheapest option.

I have to reboot my Microsoft Windows home PC regularly because all kinds of stuff gets messed up ... it goes with the system. The AS/400 where I work is scheduled to reboot at 2 am every Sunday, when no one is on the system. Actually it only needs to be rebooted every six months or so, but it is easier to program the reboot for a day of the week than for something like "first Sunday of July and December".

Bottom line: the vast majority of computer users never get the education that most car owners get with respect to the relationship between buying quality up front and fighting quality issues for the lifetime of ownership. AlMac|(talk) 19:37, 6 October 2005 (UTC)[reply]

Buffer overflow peer review

Hey, looking for reviewers for this article:

http://en.wikipedia.org/wiki/Wikipedia:Peer_review/Buffer_overflow

It would be great to have lots of input from different sources. - Tompsci 19:16, 7 January 2006 (UTC)[reply]

Added some information

I think my edit manages to convey the difference between "vulnerability" and "exploit", and how exploit code relates to trojans and viruses. I also added information about rootkits, which today are very commonly used even by low-skill attackers. I'm not entirely sure about the ordering in the "denial of service" part, but I added info on distributed denial-of-service attacks (and moved the "zombie computer" explanation there, as well as adding a snippet about denial-of-service exploits against specific applications).

Victor Fors 05:27, 15 June 2006 (UTC)[reply]

Added information about bulk scanning, another common source of public misconception, and about the use of anonymizing systems instead of the more common zombie proxies.

Victor Fors 05:41, 15 June 2006 (UTC)[reply]

The explanation of network firewalls was blurry; cleared up the information about their role in intrusion prevention.

Victor Fors 05:53, 15 June 2006 (UTC)[reply]

Added information on practices regarding attacks on "properly secured" computer systems. No need to propagate the myth of the almighty hacker who can access all of the world's secret databases at a whim. The practice of auditing code and applications for vulnerabilities in order to penetrate specific systems can be debated; however, the skill, time and manpower required to do this are significant, and a large factor of luck is involved. Discuss?

Another issue I noted is in the following section: "Computer security is a highly complex field, and is relatively immature, except in the area of designing computers that are secure. Because such computer systems are significantly more expensive than those with little or no security, the market place has driven several such secure systems out of the PC business, IBM for example. The ever-greater amounts of money dependent on electronic information make protecting it a growing industry and an active research topic."

Arguably the most anally secure system out there, OpenBSD, is free, as are several other "mature" operating systems such as Solaris. These systems are secure by design and by vigilant auditing/maintenance. I thus consider that sentence false, or at least not standing on a logical basis. Comments? (Constructive, non-operating-system-holy-war comments, that is.)

Victor Fors 07:40, 15 June 2006 (UTC)[reply]

Added some comments to balance out the Microsoft-centrism of the article. The topic of Microsoft and viruses/spyware versus other operating systems is a controversial one (especially with all the recent discussion about OS X security), but I think the opinion that it is "largely a Microsoft Windows-related problem" is widely accepted. Feel free to disagree and discuss this with me, though.

The part about Microsoft Windows XP having bad local security is based on my experience in the security field, especially with regard to the total lack of access-privilege separation in the window messaging system, which allows code execution within a higher-privileged process and thus local privilege escalation. (See information on the shatter attack for examples of what can be done with this. This particular attack vector is now used by practically everyone, as it is very simple to execute.) Microsoft has publicly stated that it will not fix this flaw (possibly because doing so would require reworking the entire messaging system, which would probably break a lot of third-party applications). Also, removed the part about secure computer systems being more expensive, for the reasons stated above.
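A benign sketch of the underlying property (illustrative only, not the shatter attack itself; it assumes a Notepad window is open): on the Windows versions discussed here, any process on the same desktop can send window messages to any other process's windows, with no check of the sender's privileges, and the shatter attack abuses exactly that to run code inside the higher-privileged receiver.

    #include <windows.h>

    /* Illustration only: send a message to another process's window. Nothing
       checks whether the sender is less privileged than the receiver. */
    int main(void) {
        HWND hwnd = FindWindowA("Notepad", NULL);  /* assumes Notepad is running;
                                                      "Notepad" is its window class */
        if (hwnd != NULL) {
            SendMessageA(hwnd, WM_CLOSE, 0, 0);    /* politely asks it to close */
        }
        return 0;
    }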

Victor Fors 22:53, 17 June 2006 (UTC)[reply]

"Unsecure" & "Unsecured" are the Correct Professional Terms in the Field of Computer Security

The title of this article is incorrect. There is no such thing as "computer insecurity". People are "insecure"; computers are "unsecure". As a system administrator and former security professional, I cringed when I saw the title of this article. Any professional or government information security text will clearly state that the correct terminology is "unsecure"/"unsecured". In fact, when someone uses the terminology "insecure" when speaking of computer security, it is considered a laughable social blunder that only a novice would make: first because of the ignorance involved, and secondly because of the humorous implied gross anthropomorphism.

I suggest that this article should be merged with "Exploit_(computer_security)" & "Vulnerability_(computing)", because the information in the article primarily deals with those topics, and both of those topics could use more content.

UNIXCOFFEE928 (talk) 04:29, 16 October 2010 (UTC)[reply]

Rate of attacks

"The sheer number of attempted attacks is so large that organisations cannot spend time pursuing each attacker (a typical home user with a permanent (e.g., cable modem) connection will be attacked at least several times per day, so more attractive targets could be presumed to see many more). Note however, that most of the sheer bulk of these attacks are made by automated vulnerability scanners and computer worms."

This seems to be a popular myth, possibly perpetuated by firewall vendors that like to report innocent events as possible attacks. In 2000-2001 I left a Windows 98 machine online for a year with no security software whatsoever. When the year was up, no malware of any kind could be detected. Tabby (talk) 10:37, 1 March 2011 (UTC)[reply]

Virus Infects Computers in Japan's Parliament; ... The discovery comes a month after Japanese defense contractors revealed that they had also been targets of cyberattacks, which may have ... October 26, 2011 by Martin Fackler 97.87.29.188 (talk) 22:58, 26 October 2011 (UTC)[reply]

Cyberwarfare, Cyberespionage, ... ? 99.190.85.15 (talk) 03:35, 27 October 2011 (UTC)[reply]
Here is an excerpt from the October 2011 article ...

... Japanese Navy officer was arrested in 2007 for leaking classified data on the American Navy’s advanced Aegis radar system. Japanese officials have apparently struggled to identify the source of the earlier attacks, which came from servers scattered across several nations, including China, Hong Kong and the United States. However, the assumption here seems to be that they originated in China, especially after media reports that investigators had found digital traces that one of the screens used to begin the attacks was written in the simplified Chinese characters used in mainland China.

Related: U.S. Expresses Concern About New Cyberattacks in Japan, by Hiroko Tabuchi, published September 21, 2011 ... excerpt:

An online assault on defense contractors including Mitsubishi Heavy Industries, which builds F-15 fighter jets and other American-designed weapons for Japan’s Self-Defense Forces, began in August, but only came to light this week, prompting rebukes from Japanese officials over the timing of the disclosure. The IHI Corporation, a military contractor that supplies engine parts for fighter jets, may have also been a target, the Nikkei business daily reported. The breach came less than two weeks after a Japanese air traffic controller was questioned for posting secret American flight information on his blog. The data included detailed flight plans for Air Force One last November, as well as data on an American military reconnaissance drone, officials said.

99.181.138.228 (talk) 04:11, 2 November 2011 (UTC)[reply]
Maybe this would be useful in other wp locations? 99.109.125.146 (talk) 00:50, 4 November 2011 (UTC)[reply]
Also consider "China Singled Out for Cyberspying; U.S. Intelligence Report Labels Chinese 'Most Active' in Economic Espionage; Russia Also Named"; see China (Blue Army, not Blue Team (U.S. politics)) and Russia (for example, the 2007 cyberattacks on Estonia and the 2008 Georgia–Russia crisis), in the November 4, 2011 WSJ, by Siobhan Gorman. 99.109.125.101 (talk) 23:04, 4 November 2011 (UTC)[reply]

Citation for claim?

It is claimed in the article, under "Difficulty with response", that the typical home user with a permanent internet connection 'will' be attacked 'at least' several times a day. Is there any citation for this? I somehow feel that this has been exaggerated. Also, I hope the citation is specific about where the information was gathered, rather than a computer security company simply claiming it did a study. Vladashram (talk) 11:37, 15 June 2012 (UTC)[reply]