Talk:Magic number (programming)

From Wikipedia, the free encyclopedia
WikiProject Computing (Rated C-class, Mid-importance)
This article is within the scope of WikiProject Computing, a collaborative effort to improve the coverage of computers, computing, and information technology on Wikipedia. If you would like to participate, please visit the project page, where you can join the discussion and see a list of open tasks.
This article has been rated as C-Class on the project's quality scale.
This article has been rated as Mid-importance on the project's importance scale.
WikiProject Computer science (Rated C-class, Mid-importance)
This article is within the scope of WikiProject Computer science, a collaborative effort to improve the coverage of Computer science related articles on Wikipedia. If you would like to participate, please visit the project page, where you can join the discussion and see a list of open tasks.
This article has been rated as C-Class on the project's quality scale.
This article has been rated as Mid-importance on the project's importance scale.

Magic numbers in text files

Does anybody know whether it is possible to differentiate text files from binary files using the magic number?

Well, yes and no: At a very basic level, no, because there is technically no difference between a binary file and a text file - all files are stored as binary data, and if you interpret them as ASCII (or Unicode, or whatever) you can display them as text. A magic number is simply a part of the data of a file, not an external property, so it is only a judgement of what the file looks like, not a definite sign of what it is. OTOH, the file(1) command (which, were I to rewrite this article, which I might, would have a much greater mention) uses a set of tests powerful enough that it will tell you if a file is pure ASCII (i.e. it has no bytes that would be non-displayable if interpreted as ASCII), and even, I think, what language it is likely to be (based on relative frequency of different letters, possibly, I'm not sure). So while this is pushing the definition of a magic number somewhat, tools based on the concept can indeed differentiate "text files" from other binary data. - IMSoP 15:48, 20 Apr 2004 (UTC)
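To make the file(1) behaviour described above concrete, here is a minimal Python sketch of the "pure ASCII" test (my own illustration, not taken from any real implementation; the real file(1) is far more elaborate): a chunk counts as text only if every byte is printable ASCII or common whitespace.

```python
def looks_like_text(data: bytes) -> bool:
    """Crude file(1)-style test: 'text' means every byte is printable
    ASCII or common whitespace (tab, LF, FF, CR)."""
    allowed = set(range(0x20, 0x7F)) | {0x09, 0x0A, 0x0C, 0x0D}
    return all(b in allowed for b in data)

print(looks_like_text(b"#!/bin/sh\necho hello\n"))   # True
print(looks_like_text(bytes.fromhex("ffd8ffe0")))    # False: JPEG SOI marker
```

Note this is a judgement about the file's contents, exactly as described: there is no external property being consulted.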
Source code is 'text'; so is HTML. Text with accents in it is 'text' too. It really depends on what you mean by 'text', but text files generally do not have a magic number. Elektron 11:02, 2004 May 6 (UTC)
Indeed, but as I say, it depends what you mean by "magic number" as well - utilities like file(1) basically just check magic numbers, but can also make judgements like "is there anything in this file that would be crazy if interpreted as text". As such, HTML files are a subset of text files; JPEG files, however, aren't - they contain things that you couldn't possibly interpret as meaningful ASCII. - IMSoP 21:48, 6 May 2004 (UTC)
Unicode text files (UTF-8 less often than others) typically include magic numbers known as byte order marks. -- intgr 13:29, 14 November 2006 (UTC)

Magic strings and stuff

Should we include magic strings (such as "$1$" used to identify an MD5 password)? Is the MD5 init data (0x67452301,0xefcdab89,0x98badcfe,0x10325476, which is really 0123456789abcdeffedcba9876543210 as four little-endian longs) a magic number? Elektron 11:02, 2004 May 6 (UTC)

It's arguable that you can consider a string to be a number of sorts (Consider shebang as a weak example). How is "$1$" used? How is the init data used? Is that set of longs always used in md5? (I'm not familiar with it). I would say yes, though... Dysprosia 11:10, 6 May 2004 (UTC)

A computer looks at everything as a sequence of 1's and 0's. How those are interpreted is up to you (as a user or programmer). If you want to see a sequence of 1's and 0's as a piece of music, a picture, a text string, or simply a number, that's up to you. In the case of magic numbers, well you need to provide some kind of number. So how about (for instance) picking a number that happens to have the same bits as a string? It's easier to remember. :-) Kim Bruning 18:37, 6 May 2004 (UTC)
Well, in some cases that isn't quite the causal order of things, but yes. I mean, "&lt;html&gt;" and "&lt;?xml&gt;" could both be used as magic numbers, but they were strings first, with meaning to a text-based interpreter system. Still, comes to the same thing - a string can be seen as a number, a number can be seen as a string. - IMSoP 21:51, 6 May 2004 (UTC)
I wouldn't call <HTML> 'magic', since you can use <hTmL>, <html >, and its location isn't fixed in the file (you can prefix it with a <!DOCTYPE>). I also doubt that <?xml (3c3f786d6c) requires case-sensitivity or that it occurs at the very beginning of the file, and could equally be in UTF-16 (feff 003c 003f 0078 006d 006c). Of course, FEFF (or FFFE if you order the bytes wrong) can probably be considered a magic number for UTF-16, and efbbbf for UTF-8. Elektron 04:37, 2004 May 8 (UTC)
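The BOM byte sequences listed above can be sniffed mechanically. A hedged Python sketch (function name and return strings are my own; simplified, since FF FE could also begin a UTF-32-LE BOM):

```python
def sniff_bom(data: bytes):
    """Return a guess at the encoding implied by a leading byte order mark.
    Simplified for illustration: FF FE is also the start of a UTF-32-LE BOM.
    """
    if data.startswith(b"\xef\xbb\xbf"):
        return "UTF-8"
    if data.startswith(b"\xfe\xff"):
        return "UTF-16-BE"
    if data.startswith(b"\xff\xfe"):
        return "UTF-16-LE"
    return None

print(sniff_bom(b"\xfe\xff\x00<\x00?"))  # UTF-16-BE
print(sniff_bom(b"<?xml"))               # None
```

As the comments above note, the BOM behaves much more like a classic fixed-position magic number than "&lt;html&gt;" does.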
The set of longs is used in MD5_Init so that it has some bits turned on at the beginning. Its choice is arbitrary, and would be like calling the standard CRC32 polynomial a 'magic number'. It fits the description we've given (a number chosen for a specific purpose), but doesn't fit the common use of magic numbers (to mark data). "$1$" is used in what OpenSSL calls "the MD5 based BSD password algorithm 1" (I don't think it's been formally named, and the function is just crypt_md5). Such hashed passwords look like "$1$salt$hash", as opposed to the UNIX crypt() which looks like "cDr5vRCSFWdnM" (two characters of salt, and then the hash). There's also the 'new'-style crypt, which starts with an underscore, and isn't widely supported. Elektron 04:37, 2004 May 8 (UTC)
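For the curious, the little-endian claim about the MD5 init words is easy to verify, and the "$1$salt$hash" layout is easy to pull apart. A quick Python check (the crypt entry below is entirely made up, for illustration only):

```python
import struct

# The four MD5 initialisation words, packed as little-endian 32-bit
# integers, really do spell out the byte pattern quoted above.
packed = struct.pack("<4I", 0x67452301, 0xEFCDAB89, 0x98BADCFE, 0x10325476)
print(packed.hex())  # 0123456789abcdeffedcba9876543210

# The "$1$" magic string marks an MD5-crypt entry: $1$<salt>$<hash>
entry = "$1$mysalt$notarealhashvalue"   # made-up example entry
_, scheme, salt, digest = entry.split("$")
print(scheme, salt)  # 1 mysalt
```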

Move here from article:

(This must have been used somewhere? It's perfect..)

Kim Bruning 10:55, 27 Jun 2004 (UTC)

One of my former colleagues used 0xC0FFEE as a magic number in company-internal tools. JIP | Talk 06:33, 15 Apr 2005 (UTC)


This has the category "Anti-patterns" and is linked from Anti-pattern but no justification is provided in the article as to what's wrong with it. --Random|832 01:09, 2004 Dec 16 (UTC)

I presume this refers to the "Magic numbers in code" section, which begins
The term magic number also refers to the bad programming practice of using numbers directly in source code without explanation.
That section goes on to explain why the practice is a bad idea. I've fixed Anti-pattern to link to that heading for now, but in general I still wonder if there's some reorganisation to be done here (i.e. this page split, and some of the parts potentially merged with things elsewhere). - IMSoP 11:39, 16 Dec 2004 (UTC)
Sounds like hard code to me. --Astronouth7303
Are there any plans to split this page then? Maybe Magic number (file formats), Magic number (debuggers), and Magic number (antipattern)? Ojw 18:12, 12 August 2005 (UTC)
I would hesitate to split this page due to the small size of the resulting articles. Deco 22:19, 12 August 2005 (UTC)
As a programmer, the concepts of hardcoding numbers into programs, designing debuggers, and designing file formats, seem like totally different subjects for me. Ojw 22:29, 12 August 2005 (UTC)
That's nothing compared to pages like fragmentation that discuss uses of the term in several totally different fields. If the sections grow to the point where this article is getting too large, then I think some kind of split is appropriate. Deco 21:25, 1 September 2005 (UTC)

Shebang = 0x2321 or 0x2123

After reading this article I fooled around with hexdump, dumping the first bytes from various files on my filesystem. But when dumping some shell scripts, I found that '#!' was 0x2123, and not 0x2321 as the article says. So I "corrected" the article. But reading the Shebang article, it says 0x2321 too, and googling around doesn't make me any wiser. So I'm a bit confused now, which one is correct? -- RoceKiller 13:23, 14 Apr 2005 (UTC)

The line about shebangs (#!) in Unix shell script files was recently edited from 0x2321 to 0x2123. Actually, either is equally valid, as this is a question of endianness. Little-endian machines like x86 boxes use 0x2123, but big-endian machines like Sun SPARC computers (yes Virginia, there are Unixen for those too) use 0x2321. JIP | Talk 13:24, 14 Apr 2005 (UTC)

Hexdump is silly, and it thinks that you care about things that are 'word'-aligned, as in the day when words were 16 bits. Most GUI hex editors display the hexdump in byte order, though they group them into shorts. Most hex editors also show a side-by-side local-8-bit-character-set text representation (I'll get around to coding a better hexdump sometime...). Where byte order matters, big endian should be preferred since it extends easily to, say, 3-byte magics (like C0FFEE), and it's the order it appears in the file. This both confuses fewer people, and means misaligned magics are easier to spot. --Elektron 22:44, 2005 May 30 (UTC)

hexdump is not silly, and the order on both little and big endian machines should be exactly the same. I am sitting on a little endian machine (Fedora Core running on an AMD processor). The hex code for '#' is 0x23, and the one for '!' is 0x21. Endianness (if there is such a word) only applies to Integer and Float / Double data types. If you have a string of characters, they are just a string of characters. Are you sure the file you were looking at didn't have them as !# (first person)? The correct sequence is 0x2321. From the magic file (no copyright in file except for description strings) here is a sample shell descriptor:

0 string/b #!\ /bin/sh Bourne shell script text executable

/usr/share/file/magic for some nix machines, /etc/file/magic for others, look for shell --hhhobbit 20:57, 16 November 2006 (UTC)

The correct SEQUENCE is 0x23 0x21; interpreting that sequence as a single 16-bit number will result in either 0x2321 or 0x2123 depending on the endianness it is interpreted with. Plugwash 14:43, 16 January 2007 (UTC)
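Plugwash's point is easy to demonstrate: the same two bytes yield a different 16-bit value depending on which endianness you impose when reading them. A small Python illustration:

```python
import struct

magic = b"#!"                       # the bytes 0x23 0x21, in file order
(be,) = struct.unpack(">H", magic)  # imposed big-endian reading
(le,) = struct.unpack("<H", magic)  # imposed little-endian reading
print(hex(be), hex(le))             # 0x2321 0x2123
```

So both numbers in the thread above describe the same two bytes on disk; only the interpretation differs.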

Magic constants and variables

Some people who only remember that "literals in code are generally bad" seem to think that just putting their magic number into a variable or constant named something like "FIVE" or "INT_SEVEN" (especially a global one, in languages that support them), or even a self-describing text string (accessDenied = "Access Denied." -- and that's one of the more useful examples), is a good solution to the problem.

I think we should include at least one paragraph to explain that this approach is the same shit with different icing -- the point of avoiding magic numbers in code is to put them into more descriptive variables that are not related to their content (or content type) as much as to their purpose. LITTLE_PIGGIES_COUNT is okay, INT_FOUR isn't. Especially when the value of INT_FOUR might be changed (there IS production code out there where constants like EIGHT have later been set to 16 -- I'm not kidding). -- Ashmodai 07:10, 3 April 2006 (UTC)
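A tiny sketch of the distinction Ashmodai is drawing (names and the example are illustrative only): the constant should be named for its purpose, not its value.

```python
# Value-named: tells the reader nothing a literal 4 wouldn't.
INT_FOUR = 4

# Purpose-named: records *why* the value is 4, and stays honest if the
# value ever changes.
CARD_SUITS = 4

def suit_of(card_index: int) -> int:
    # Cards are numbered 0..51; consecutive cards cycle through the suits.
    return card_index % CARD_SUITS
```

If someone later "sets EIGHT to 16", every use of the value-named constant becomes a lie; the purpose-named constant cannot lie in the same way.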

Non-magic numbers

This article should mention that sometimes a number isn't magic. These are usually limited to 0, 1, -1, and sometimes 2. I say this because I've seen well-intentioned code that looks like:

const double zero = 0.0;
double x = zero;

which is useless since it adds a layer of indirection but not a layer of abstraction. —Ben FrantzDale 15:01, 1 May 2006 (UTC)

Constants in program source and magic numbers

Are all hard-coded constants really considered magic numbers? I always thought that the term "magic number" was limited to usages where arbitrary numbers were used as uniquely distinguishing identifiers. The section on coding style seems well-intentioned, but out of place to me. Using the deck of cards analogy, a coding example that showed 1 = spades, 2 = clubs, etc. would seem more applicable than an example showing a constant for the number of cards in the deck. Andrwsc 17:27, 10 May 2006 (UTC)

I've always understood it to mean all literal numbers that are made more readable by symbolisation. PhiTower 15:09, 25 March 2007 (UTC)

I think it's somewhat misleading to describe the only acceptable literal numbers as 0 and 1. This totally ignores the real issue, which is the programmer symbolising a literal number if and only if it makes the code more understandable/manageable. In this way 0 and 1 should be symbolised sometimes. On the other side, it totally ignores the presence of 2 in numerous algorithms that involve doubling or halving such as binary search, reversing an array, etc. Symbolising the 2 there would be silly. PhiTower 15:09, 25 March 2007 (UTC)
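For illustration, here is the binary-search case PhiTower mentions, sketched in Python: the 2 in the midpoint computation is structural halving, and symbolising it would arguably obscure rather than clarify.

```python
def binary_search(seq, target):
    """Classic binary search over a sorted sequence; returns an index or -1.
    The literal 2 below is the halving step under discussion."""
    lo, hi = 0, len(seq) - 1
    while lo <= hi:
        mid = (lo + hi) // 2   # halve the search interval
        if seq[mid] == target:
            return mid
        if seq[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

print(binary_search([1, 3, 5, 7, 9], 7))  # 3
```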

I dare to oppose "most programmers would concede that the use of 0 (zero) and 1 are the only two allowable...". In fact, most old-school programmers use a lot of magic numbers for very good purposes. The people who fire the "antipattern shotgun" at everything are not always right. Sure, people do abuse magic numbers, like they abuse anything... but that does not make them wrong. Magic numbers can for example be used to give constants a meaning much like an "ordinary" enum does, but still visible in the binary code and the debugger. In most respects, a magic number has no disadvantages over any other number (the one notable exception is the switch() statement, because a compiler can implement switch() more efficiently using a jump table if contiguous numbers (such as from an enum) are used). As an example for a magic number being used in a file format (or data block), the Microsoft byte-order-mark is a good example of a sensible application. If you ever have to deal with documents coming in different encodings, BOM makes your life both as programmer and as user a lot happier, I so wish the Unicode Consortium had thought of something similar from the beginning. —Preceding unsigned comment added by (talk) 12:06, 15 October 2007 (UTC)

In the spirit of being bold I've rewritten the entire section on acceptable use of magic numbers. I've added some more examples of common usages of magic numbers in code (drawn from my own programming experiences). I've mentioned the 0 and 1 as True/False, but added a note about the macro definitions in stdlib.h (or cstdlib in the C++ world). Likewise for null pointer use of 0. 10:30, 16 November 2007 (UTC)

I thought that re-write was very good and have added some more wikilinks, some text formatting and some info from outside of the C/C++ world too for good measure. I wasn't sure about the section heading any more either, so I've tried to improve the tone of that as well. ---- Nigelj (talk) 18:57, 16 November 2007 (UTC)
I made those edits, but apparently wasn't logged in at the time. I've made a few further changes (mostly stylistic things, and a couple of typos). I think this section is much better-rounded now than it was before. Ve4cib (talk) 23:37, 17 November 2007 (UTC)

Magic numbers in protocols

I have added the request for expansion template, as this section is virtually empty. I was hoping to see a discussion about things like RFC1700 assigned numbers and their equivalent for other protocols, etc. Andrwsc 17:27, 10 May 2006 (UTC)


I don't consider all the numbers listed under "Magic debug values" to be notable enough to be worth mentioning here. I'd like to remove all the numbers with no mention about where they are used, as well as the ones where the use mentioned is not notable (I don't consider the string some random person has been using as MAC address or chat nick to be notable). Kasperd 09:47, 13 January 2007 (UTC)

Agreed - be bold! --Nigelj 13:08, 14 January 2007 (UTC)
For the ones without a mention of where they came from, I think it would be even better if someone who knew where they came from added the info, instead of deleting incomplete entries before they have a chance to be fixed. --TiagoTiago (talk) 18:09, 18 December 2008 (UTC)

Nintendo magic number

someone please append it;) Xchmelmilos (talk) 02:26, 12 April 2008 (UTC)

NPOV: magic constants

While I agree that magic constants are generally bad, it's not an NPOV statement to put in an encyclopedia article. Wikipedia is not a coding style handbook. I think that that section should be changed to reflect a neutral point of view. CapitalSasha ~ talk 04:29, 20 July 2008 (UTC)

I agree.--Avl (talk) 18:20, 9 October 2008 (UTC)

It's hard to be totally neutral when something is widely acknowledged to be negative. See the Wikipedia section on Spaghetti code, for instance. Or you may want to peruse the entries in the "Anti-pattern" category. That said, I'm gonna rewrite the intro to be a little less prescriptive and I'll also add refs. Am not the original author. Leemeng (talk) 04:12, 10 December 2008 (UTC)

Accepted use of magic numbers

Does anyone else think this section is slightly strange?

It first says that magic numbers are acceptable in some contexts. Then it says such acceptance is subjective. Yet it goes on with rather detailed prescriptions: "It should be noted that while multiplying or dividing by 2 is acceptable, multiplying or dividing by other values (such as 3, 4, 5, ...) is not, and such values should be defined as named constants."

If the acceptance of magic numbers is subjective, should wikipedia really be giving such detailed advice?

I just think this section reads like a textbook, with some weasel wording ("While such acceptance is subjective"). And I happen to think that what it describes is not generally accepted truth. But it may just be me... --Avl (talk) 18:19, 9 October 2008 (UTC)

Agreed. Wikipedia is not prescriptive and this "detail" is not sourced. The problem such recommendations run afoul of (and this one does too) is that they fail to take context into account.
The most common reason why 2 is acceptable, for example, is because it's the base of the binary system, which is natural to computers. But if I'm developing a library that uses base 3 calculations extensively, 3 enjoys the same status, and it would be unnatural to declare a constant for it. This is especially true if the algorithms themselves rely on calculations being done in base 3 (i.e. you cannot declare a constant BASE = 3 and then modify this constant afterwards without the algorithms breaking).
I've deleted the specific statement you mentioned, as it added nothing and was demonstrably disputable. The rest of the section, while still in need of citation, doesn't run contrary to common usage as far as I can tell. (talk) 13:17, 22 November 2008 (UTC)
Actually, your example in the comment above is NOT a case where you should code the 2 (or 3) directly as a number. The point of declaring a named constant is to answer the question "why is this value what it is?" It has nothing to do with whether it is meaningful to alter it. If I have an algorithm that only makes sense in base 3, I would declare "const int BASE3 = 3;" Then when a formula includes "BASE3" I know WHY that value is 3. (Note that I would not name it "BASE", if it should not be changed from 3 - a wisely chosen name, or a comment on the declaration, addresses the objection that you raise to overuse of named constants.) IMHO, there are few situations, outside of test code, where it is a good idea to put values other than 0, 1, or 2 directly into code, rather than declaring a named constant. Well, there are various math formulas using other constant numbers, that I might use directly, with a reference to the source that explains the numbers. But in most "general purpose" programming, in my experience, a number directly in a line of code is less clear than naming the constant. The "Single responsibility principle" applies here - usually, over time, that number becomes needed somewhere else. Better to give it a name on first appearance than to risk you or someone else copying that number later. ToolmakerSteve (talk) 19:11, 4 July 2017 (UTC)

Bytes vs Words

I replaced most of the instances of word values (e.g., 0x474946383961) with byte sequences (47 49 46 38 39 61), because the file formats discussed specify a certain byte sequence, not a certain integer word value. This distinction is especially important in light of byte endianness, since most of the magic number sequences are the same regardless of the underlying hardware reading the files. Where endianness is an issue for a given magic number (e.g., the TIFF file format), it is mentioned as such. | Loadmaster (talk) 16:30, 5 December 2008 (UTC)
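The practical upshot of Loadmaster's change: compare magic numbers as byte sequences in file order, and host endianness never enters into it. A Python sketch using the GIF signature mentioned above (function name is my own):

```python
GIF89A_MAGIC = bytes.fromhex("474946383961")   # the bytes 'G','I','F','8','9','a'

def is_gif89a(header: bytes) -> bool:
    # Byte-for-byte comparison in file order: the same test passes on
    # big- and little-endian hosts alike.
    return header.startswith(GIF89A_MAGIC)

print(GIF89A_MAGIC)                     # b'GIF89a'
print(is_gif89a(b"GIF89a" + bytes(6)))  # True
```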

Array Indices

Can someone please explain this line: the use of 0 and 1 while indexing through an array? I don’t believe that using hard-coded indexes is generally acceptable.

How does it differ from, say, using first and last or front and back? If it is idiomatic, it is OK; if not, not. The point should be: think of the person coming after you — will they understand it? SimonTrew (talk) 17:12, 24 February 2009 (UTC)

Separate articles

Wouldn't it be better if the three concepts mentioned here had their own articles? I think the main article should rather be a disambiguation page with links to the others.

Mariano -- 02:42, 31 December 2008

Agree... AnonMoos (talk) 04:10, 19 February 2010 (UTC)

AAAAEBAJ use in Google Patents URLs

For some reason, when you Google "AAAAEBAJ" (w/o the quotes) [or search wikipedia for it for that matter] it will almost always return a reference to a Google Patents URL. Would this be considered a magic number? —Preceding unsigned comment added by (talk) 05:03, 24 February 2010 (UTC)

It looks to me like this is just the index number for this patent in the database. If so, this would not be a magic number.Boardhead (talk) 16:43, 31 March 2010 (UTC)
Well, yes and no. (I'm the same user that posted the previous comment, BTW.) The 'index number', as you say, almost always ends in AAAAEBAJ. Some examples:
In fact, I personally have yet to encounter an example of a URL with a format like that where the index number doesn't end in AAAAEBAJ. (talk) 13:17, 1 April 2010 (UTC)
The letter J does not represent any number in hexadecimal. Hex only uses letters A-F. (talk) 22:17, 16 January 2019 (UTC)

Programming Example

I think that better programming examples are needed. Currently, the examples encourage another poor programming practice that could lead to buffer overruns if the array is shorter than deckSize. For example:

   function shuffle (int deckSize)
      for i from 1 to deckSize
          j := i + randomInt(deckSize + 1 - i) - 1
          a.swapEntries(i, j)

Would be better written as

   function shuffle ()
      for i from 1 to a.length
          j := i + randomInt(a.length + 1 - i) - 1
          a.swapEntries(i, j)

(talk) 16:23, 28 April 2010 (UTC)

What you say would be true, except that your second example has no realistic place where a new 'magic number' would normally appear (only a dozo would hard-code a.length so that wouldn't be a magic number problem but a dozo's error). We're not teaching programming here, but giving an example where a problem-domain fact (i.e. 52) is hard-coded implicitly (without declaration and naming), twice (once as 53). If you can think of a better magic number example, or better still, find one in a reliable source, please let's hear about it. Improving the same example further, as you suggest, actually doesn't make the point at all. --Nigelj (talk) 17:31, 28 April 2010 (UTC)
Maybe in the third example, where the code has been extracted into a parameterised function, we could add the following line as the first line in the function? Is this adding unnecessary complexity?
if (deckSize > a.length) throw new Exception("Deck array is shorter than deckSize")
--Nigelj (talk) 17:37, 28 April 2010 (UTC)
Hey, how about we leave it alone, and just assume that all the necessary protection is built into the unspecified swapEntries() function? --Nigelj (talk) 17:40, 28 April 2010 (UTC)
I think the idea is that some unmanaged languages (such as C) have neither built-in protection against undefined behavior nor an easily accessible .length property on arrays passed as arguments. Nor do they have .swapEntries(). I'd recommend adding a swap() global function to the pseudocode standard library that takes two lvalues by reference and swaps their values: swap(a[i], a[j]) --Damian Yerrick (talk | stalk) 04:56, 13 March 2011 (UTC)
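For what it's worth, here is a 0-based Python rendering of the parameterised shuffle with the bounds check proposed above (the function name and guard message are illustrative only, not a proposal for the article text):

```python
import random

def shuffle_prefix(a, deck_size):
    """Fisher-Yates shuffle of the first deck_size entries of a,
    with the bounds check discussed above."""
    if deck_size > len(a):
        raise ValueError("deck array is shorter than deck_size")
    for i in range(deck_size - 1):
        j = random.randrange(i, deck_size)   # pick from the unshuffled tail
        a[i], a[j] = a[j], a[i]              # plain tuple swap; no helper needed
    return a

deck = shuffle_prefix(list(range(52)), 52)
print(sorted(deck) == list(range(52)))   # True: still a permutation
```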
While everybody stumbles over the code example being good or not, I've noticed the following:
It makes the code more complex, adding 25% to the LOC in this example --> Going from 3 to 4 lines adds 25%? Is 25 perhaps a magic number here? :-)
-- (talk) 15:00, 4 February 2011 (UTC)

Not playing with a full deck

Magic number (programming)#Unnamed numerical constants states:

An increase in complexity may be justified if there is some likelihood of confusion about the constant, or if there is a likelihood the constant may need to be changed. Neither is likely for a deck of playing cards, which has been well-known to be 52 cards for several hundred years.

I don't think this is the best example we could give. The size of the playing card deck was changed when Euchre (24 cards), Pinochle (48 cards), Uno (108 cards IIRC), Mahjong and Mahjong solitaire (144 tiles), and casino Blackjack (208+ cards) were invented. I see a substantial likelihood that the number of cards will change, even if only to facilitate code reuse in a program implementing other card games. --Damian Yerrick (talk | stalk) 23:18, 12 March 2011 (UTC)

Changing "MZ" to "ZM" in a PE32 causes problems

Magic number (programming)#Examples says the following:

  • MS-DOS EXE files and the EXE stub of the Microsoft Windows PE (Portable Executable) files start with the characters "MZ" (4D 5A), the initials of the designer of the file format, Mark Zbikowski. The definition allows "ZM" (5A 4D) as well, but this is quite uncommon.

However, changing "MZ" to "ZM" in a PE32 causes modern versions of Windows to puke. This is independently verifiable with a hex editor. Maybe this information is old/out of date? (talk) 18:12, 2 November 2012 (UTC)

Don't know about Windows, but MS-DOS, PC DOS and DR-DOS actually support both, 5A4Dh (4Dh 5Ah = "MZ") and 4D5Ah (5Ah 4Dh = "ZM"), whereby the second one appears to be the older variant. --Matthiaspaul (talk) 18:53, 2 November 2012 (UTC)
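A checker that honours the historical definition would accept both byte orders; whether to do so in practice is exactly the question raised above. A minimal Python sketch (function name is mine):

```python
def looks_like_dos_exe(header: bytes) -> bool:
    """Accept both signature byte orders the old definition allowed.
    (As noted above, modern Windows loaders may reject 'ZM' in practice.)"""
    return header[:2] in (b"MZ", b"ZM")

print(looks_like_dos_exe(b"MZ\x90\x00"))   # True
print(looks_like_dos_exe(b"ZM\x90\x00"))   # True
print(looks_like_dos_exe(b"PK\x03\x04"))   # False: that's a ZIP
```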


I removed this entry from the 'Magic debug values' table: "CEFAEDFE || "Feed face", Seen in Intel Mach-O binaries on Apple Inc.'s Mac OS X platform (see FEEDFACE)". I see that CEFAEDFE does say FEEDFACE if you read the 2-hex-digit bytes backwards, but I'm still not happy with the explanation we had. The Apple PowerPC hardware booted up in big endian mode, but could switch to little endian at any point. Mach-O object files could contain either PowerPC or x86 code - presumably with either endianness. Also what is the word Intel doing in that sentence? Weren't the chips in big endian Apple machines made by Motorola? What debugger would it be, running on what hardware, that would display FEEDFACE as CEFAEDFE? And what kind of Mach-O file would have to be being displayed? One meant for the other kind of hardware? Why did the developer type the magic number in in such a way that it would display garbled in his/her debugger? I'm sorry, this entry raises far more questions than it answers. If anyone can explain what it's meant to illustrate here, then at the very least we need to rewrite the entry so that we can all share the joke. --Nigelj (talk) 20:54, 18 January 2013 (UTC)

IDEs and named constants

Regarding this edit, I just want to say that some IDEs will show the value of a named constant, for example by hovering the mouse over a mention of it, or looking in an 'immediate' window, even if it is defined in a completely different file elsewhere in the project. Just saying. I don't know about the mention - it didn't make that clear, and I'm not sure it matters. But it wasn't completely insane by any means. --Nigelj (talk) 22:12, 29 July 2013 (UTC)

"Bjarne Stroustrup on Educating Software Developers" reference in Unnamed numerical constants section

An editor has removed a reference on the basis that it does not support the point in the text, namely that use of magic numbers "makes it more difficult for the program to be adapted and extended in the future". From the second page of the link:

Take a simple example: A friend of mine looked at the final projects of a class of third-year CS students from a famous university. Essentially all had their code littered with “magic constants.” They had never been taught that was bad style – in fact they had never been taught about programming style because the department “taught computer science; not programming.” That is, programming was seen as a lowly skill that students either did not need or could easily pick up on their own.
I have seen the result of that attitude in new graduate students: It is rare that anyone thinks about the structure of their code or the implications for scaling and maintenance – those are not academic subjects. Students are taught good practical and essential topics, such as algorithms, data structures, machine architecture, programming languages, and “systems,” but only rarely do they “connect the dots” to see how it all fits together in a maintainable program.


The “magic constant” example is indicative. Few students see code as anything but a disposable entity needed to complete the next project and get good grades. Much of the emphasis in teaching encourages that view.

Both passages address the impact of poor practices on code maintenance and longevity, specifically citing the use of "magic constants" as an example of poor practice; "maintenance and longevity" is just another way of saying "adapted and extended in the future". Rwessel (talk) 05:54, 8 February 2014 (UTC)

Unnamed numerical constants

'It helps detect typos' is disingenuous. Obviously, in the example cited, the named variable will need to be initialized, and that initial value may just as easily be mistyped. It could be argued that you now have the opportunity to mistype both the initial value of the named variable and the variable name itself when referenced. (talk) 15:53, 30 June 2014 (UTC) Phil Short

Not really. It's reasonably likely that mistyping the variable name will lead to a compilation error. And while the actual definition can certainly be pooched, writing the definition, where only the definition is in play, likely concentrates the mind on that task, leading to a higher chance of being correct (IOW, when you're defining the constant pi, you're just doing that, not also trying to work the formula for the volume of a sphere into your application at the same time). And if you do manage to use the constant more than once, you've only got one chance to mess it up rather than several. Finally, having the constant clearly separated makes it easier to verify later. Rwessel (talk) 16:18, 30 June 2014 (UTC)
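Rwessel's compile-time point can be shown even in a dynamic language, where the failure moves from compile time to a loud runtime NameError instead of a silently wrong result. A contrived Python sketch:

```python
# A mistyped literal runs happily and is silently wrong:
area_wrong = 3.1416926 * 2 * 2     # transposed digits in pi; no error raised

# A mistyped *name* fails loudly at the point of use:
PI = 3.1415926
try:
    area = PIE * 2 * 2             # NameError: 'PIE' is not defined
except NameError as err:
    area = PI * 2 * 2              # fall back to the correct constant
print(round(area, 4))  # 12.5664
```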

Bad coding practices 101

This article needs a link to another article listing bad coding practices. — Preceding unsigned comment added by (talk) 16:13, 8 June 2017 (UTC)

External links modified

Hello fellow Wikipedians,

I have just modified 2 external links on Magic number (programming). Please take a moment to review my edit. If you have any questions, or need the bot to ignore the links, or the page altogether, please visit this simple FaQ for additional information. I made the following changes:

When you have finished reviewing my changes, you may follow the instructions on the template below to fix any issues with the URLs.

As of February 2018, "External links modified" talk page sections are no longer generated or monitored by InternetArchiveBot. No special action is required regarding these talk page notices, other than regular verification using the archive tool instructions below. Editors have permission to delete the "External links modified" sections if they want, but see the RfC before doing mass systematic removals. This message is updated dynamically through the template {{sourcecheck}} (last update: 15 July 2018).

  • If you have discovered URLs which were erroneously considered dead by the bot, you can report them with this tool.
  • If you found an error with any archives or the URLs themselves, you can fix them with this tool.

Cheers.—InternetArchiveBot (Report bug) 14:46, 10 December 2017 (UTC)

Extended Justification for Most Recent Edit

Some four years ago I added a note to 0xFDFDFDFD that certain debug versions of Win32 functions will flood-fill memory with 0xFD during their operation. I chose to add this note to Wikipedia because I had spent significant time looking for what function was responsible for flood-filling this magic number in an application I was working on; it is not particularly easy to discover this fact, as it is buried halfway through a very large article on this function and does not get mentioned in most places that discuss magic numbers, and I wanted to spare the next poor individual the same agony.

Today I was trying to remember what functions those were, and when I went to the article expecting to see my edit, I found out that it had been removed within about 10 minutes after having been made. I will assume on good faith that the person who removed my edit thought I was being unconstructive. The person in question does not appear to have programming experience (based on reviewing their contributions), so I do not know what criteria that user judged my edit by. The function I listed does not allocate or deallocate memory, so it is entirely separate from the malloc() comment for the same entry. My static IP has made other edits that were fixing typos and had no prior evidence of vandalism. I added a specific citation to MSDN regarding the function in question in the hopes that it isn't reverted again, but I note that most entries do not have any citations and do not end up reverted.

Furthermore, I noticed the entry for D15EA5E had a comment to the effect of "Should this be D15EA5ED ('Diseased')?" It just so happens that I have extensive experience with the Nintendo Wii hardware platform. I know for a fact that the entry is 0D15EA5E; there is no D at the end, and it begins with 0. Look at address 32 (hex 0x20) of any Wii or GameCube ROM with a hex editor.

Finally, I noticed that one entry referred to "uninitialised". While this is British spelling, Wikipedia itself has an article Uninitialized variable but lacks Uninitialised variable. Therefore, I felt justified in changing the word. Cheers (talk) 22:11, 16 January 2019 (UTC)