Talk:Self-modifying code

WikiProject Computer science (Rated C-class, Mid-importance)

TODO

  1. An example and discussion of 'high-level' self-modifying code, such as in Lisp.
  2. Examples and discussion of traditional uses of self-modifying code, such as in graphics blitting units, specialisation of algorithms (e.g. a sort with an embedded comparison), and in interpreter kernels.



Is a thunk and/or a trampoline (computers) also a kind of self-modifying code? --DavidCary 03:01, 18 August 2005 (UTC)
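To make the terms in the question concrete: a trampoline is usually a driver loop that repeatedly invokes thunks (zero-argument callables) until a non-callable value comes back. A minimal sketch, in Python with invented names, suggests why it need not involve self-modification: only references are followed, no instructions are rewritten.

```python
# A minimal trampoline sketch (illustrative only): each step returns either
# a final value or a thunk for the next step.  No code is modified while
# running -- control flows through data, not through patched instructions.
def trampoline(step):
    while callable(step):
        step = step()
    return step

def countdown(n):
    # Returns a thunk instead of recursing, so the call stack stays flat.
    return n if n == 0 else (lambda: countdown(n - 1))

result = trampoline(countdown(100000))  # deep "recursion" without stack growth
```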


I have never written any self-modifying code, but the example of a state-dependent loop doesn't look quite right. Maybe a misplaced curly bracket? --(AG)

I'll check the brackets. Actually, state-dependent loops are a sort of self-modifying code I've written a few times on 8-bit machines, when the state transition is infrequent, especially if altering just the argument of an opcode and thus using a faster instruction (e.g. on the 6502). Code generation is still relevant and useful, e.g. 'compiled bitmaps' during the 90s, and specific rendering code today. Oyd11 00:42, 13 June 2006 (UTC)

I suggest removing the entire Synthesis section, along with Massalin, Haeberli, and Karsh, on notability grounds. Marc W. Abel 15:12, 26 April 2006 (UTC)


  • Futurist Programming should probably be the new link, although the original author of this document should check, as they would know whether this is the correct article.

Javascript example: really self-modifying?

It seems to me that the Javascript code example is not self-modifying. The action variable is merely a function pointer that points to two anonymous functions in the course of its life. All the code from this example could be put in read-only memory and it would execute without problems. Where am I wrong? Sarrazip 02:17, 19 December 2006 (UTC)

I agree with Sarrazip. In addition, I think the Javascript example does not belong under the section "Interaction of cache and self-modifying code".

I have removed the Javascript code example, since no one has objected for several months. Sarrazip 03:05, 21 May 2007 (UTC)
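The removed example is not preserved here, but the pattern Sarrazip describes (a variable that merely points to one anonymous function, then another) can be sketched; Python stands in for JavaScript below, and all names are invented for illustration:

```python
# Hypothetical reconstruction of the pattern described above: a variable
# that points at one function, then another.  Only the reference changes;
# neither function's code is ever altered, so -- as Sarrazip argues -- the
# functions themselves could live in read-only memory.
def first_action():
    return "first"

def second_action():
    return "second"

action = first_action
before = action()        # calls the first function
action = second_action   # rebind the pointer, not the code
after = action()         # calls the second function
```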

Obj-C

Possibly Obj-C code in addition to LISP? It's the only object-oriented superset of ANSI C that I know of that really implements this as a base feature. [1] --Electrostatic1 08:51, 15 May 2007 (UTC)

Self-modifying code in self-referential machine learning systems

I think there should be a section on self-modifying code for machine learning along the lines of Jürgen Schmidhuber's work on meta-learning: http://www.idsia.ch/~juergen/metalearner.html Algorithms 20:54, 4 June 2007 (UTC)

Dead link

This appears to be a dead link, which makes me sad. I was really looking forward to seeing a self-modifying kernel! Guess it's time to whip out Google.

66.93.224.21 11:24, 5 June 2007 (UTC)


- I replaced it with something found on Google.

Reentrant self-modifying code is possible

Many years ago, I had to write a piece of code which was reentrant - because of a number of constraints imposed by my interrupt handling methods - but also needed to be self-modifying. To explain: an input was N, the number of a record in a file, and the assembler supplied only one type of TRAP instruction - with a constant. An earlier generation of the application used a TRAP instruction (it was on a PDP-11) thus:

READ_N:                                ; R5 points to (unsigned) record number N (assumed <=255 and non-zero)
        MOVB    2(R5),10$              ; Modify the TRAP instruction
10$:    TRAP    0                      ; Read "something"
        RETURN

and I needed to retain the mechanism, but also to make it reentrant.

The solution was simple: keep a "virgin copy" of the code available (but never called directly). When it was needed, it was copied to the top of the stack, together with "cleanup code"; there the copy was modified and executed, and finally the cleanup wiped the defiled code from the stack. All I can say is that it worked.
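The PDP-11 trick itself can't be reproduced portably, but as a loose high-level analogy (in Python, with every name invented for illustration): keep a pristine template, give each call its own patched copy, and discard the copy on return.

```python
# Loose analogy of the "virgin copy" trick described above (the real thing
# patched a TRAP instruction in a stack copy on a PDP-11).  The template is
# never executed directly; each call patches and runs a private copy, so
# concurrent callers cannot clash.
TEMPLATE = "def read_record():\n    return ('TRAP', {n})\n"

def read_n(n):
    # "Copy to the stack": build a fresh, patched copy in a private namespace.
    scratch = {}
    exec(TEMPLATE.format(n=n), scratch)    # patch the operand, then "assemble"
    result = scratch["read_record"]()      # execute the modified copy
    # returning from read_n discards scratch -- the "cleanup code"
    return result
```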

My simple statement about self-modifying code is this: in bootstrap code, it's fine - but elsewhere: DON'T EVEN THINK ABOUT DOING IT! (Especially where reentrancy is a prerequisite ...) Hair Commodore 18:57, 16 September 2007 (UTC)

I've corrected the above code: the error was in the byte addressed - the low byte of a TRAP instruction was to be altered, not the high byte. (It's a long time since I've used a PDP-11 at assembler level - sorry!) Hair Commodore 20:16, 22 September 2007 (UTC)
Awww, go on, it's not all that bad. What is required is a calm attitude and appreciation of the actual environment. By using the stack working area, you ensure the avoidance of clashes in a quite proper way. This is what multi-stack designs are all about, and by writing in assembler (with proper commentary) you need not be constrained by the shibboleths of the prating orthodoxists of flabbier computer languages that constrain themselves and declare it good. In other words, I have misbehaved also, and declare it good. NickyMcLean (talk) 19:51, 18 December 2008 (UTC)

JIT?

Maybe I'm being nit-picky, but I don't think a just-in-time compiler falls into the category of self-modifying code, any more than any other compiler would. It generates some code, and then transfers control to it. It doesn't really alter its own behavior. And in the same vein, I don't think that uncompressing some otherwise static code and then running it qualifies as self-modifying, either. I would reserve the term for code that modifies its own behavior as it is running. Maybe it's a rather vague concept, though. Deepmath (talk) 11:02, 15 July 2008 (UTC)

I utterly agree with the above statement: JIT is not self-modifying. The code is merely generated, not self-modified; neither the compiler itself nor the virtual machine ever gets modified. Un(de)compressing doesn't yield any self-modification either. It'd be the same as saying that loading dynamic libraries (or any libraries, for that matter) is self-modification; on that view, any code run by an OS could be viewed as self-modification.

Bestsss (talk) 12:43, 18 December 2008 (UTC)

If I understand correctly, "just-in-time" compilation is equivalent to compiling the whole lot once at the start, in that the resulting executed code in the part that is being executed would be the same. The advantage is presumably that no compiler effort is wasted on execution paths that will not be taken on the particular invocation, and that the compiled code will run faster than interpretation of the text, especially if there are loops. By contrast, consider a program whose purpose is to assess the workings of some routines for numerical integration, such as Simpson's rule. One requirement would be a variety of functions to be integrated, and they might be incorporated via a tiresome "case" statement or similar. Alternatively, the test program could read from an input file the arithmetic statement defining the function, encase that text in suitable text for the definition of a function f(x) in the language of choice, pass the whole to the compiler, and link to itself this new function, which could then be invoked by the testing procedures at full compiled speed, as if it had been part of the whole compilation all along; no messy "case" statement selecting function one, then function two, etc. The difference here is that arbitrarily different code would be produced, depending on the list of arbitrary test functions supplied to a particular run. NickyMcLean (talk) 19:37, 18 December 2008 (UTC)
That's almost correct. A JIT compiles when needed (which may mean, for example, merely interpreting a few lines that are never executed again, such as the main method, saving useless compilation time), and a JIT may recompile with eager optimizations (escape analysis, inlining, etc.). It simply compiles; it never modifies itself. It can change the compiled code on the fly, but that is still not self-modification at any rate. I see self-modification only when a program changes initial code that has been loaded from external media (the network can be considered such) and has already run (so decompression doesn't fit). Bestsss (talk) 09:53, 21 December 2008 (UTC)
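The compile-and-link-at-runtime scheme NickyMcLean describes can be sketched; the snippet below uses Python in place of a compiled language, with invented names, so "compiling" is just building a real function from the text of its defining expression and handing it to the integration routine.

```python
# Sketch of the scheme described above: the test harness reads an arithmetic
# expression, wraps it as a function f(x), compiles it at runtime, and hands
# it to the integration routine -- no "case" statement over hard-wired test
# functions.
def compile_function(expression):
    namespace = {}
    exec(f"def f(x):\n    return {expression}", namespace)
    return namespace["f"]

def simpson(f, a, b, n=1000):
    # Composite Simpson's rule with n (even) subintervals.
    h = (b - a) / n
    total = f(a) + f(b)
    for i in range(1, n):
        total += (4 if i % 2 else 2) * f(a + i * h)
    return total * h / 3

f = compile_function("x * x")   # in the scenario above, read from an input file
approx = simpson(f, 0.0, 1.0)   # integral of x^2 over [0,1] is 1/3
```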

extremely fast operating systems and applications?

Under the heading "Henry Massalin's Synthesis kernel" it is claimed that

Such a language and compiler [based on Massalin's techniques] could allow development of extremely fast operating systems and applications.

This sounds like pure speculation to me. —Preceding unsigned comment added by 62.73.248.37 (talk) 20:10, 28 March 2009 (UTC)

Monkey patching

I think there should be a reference to the article about monkey patching, and vice versa. Monkey patching is a structured and formalised way to do self-modifying code in an interpreted language. At the least, monkey patching could be listed in "See also". What do you think? --Jarl (talk) 06:12, 1 May 2009 (UTC)
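For readers unfamiliar with the term, a minimal monkey-patching sketch (in Python, with invented names) shows the flavour: a method is replaced at runtime on the class object itself, so existing instances pick up the new behaviour.

```python
# Minimal monkey-patching sketch: the greet method is replaced at runtime,
# so the running program has modified part of itself -- though no machine
# instructions were rewritten, which is why it is only loosely a form of
# self-modifying code.
class Greeter:
    def greet(self):
        return "hello"

g = Greeter()
original = g.greet()           # behaviour before the patch

def excited_greet(self):
    return "hello!!!"

Greeter.greet = excited_greet  # the monkey patch
patched = g.greet()            # existing instance sees the new behaviour
```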

Should Lisp get its own section?

Lisp has self-modifying code unlike any of the other languages; in fact, a running Lisp program modifies itself the whole time. I would say that Lisp is unique in how it modifies its own code because Lisp has no boundary between data and code: data is stored in linked lists, and instructions are data, with the head being the operation and the tail a list of operands. Just 'dumping' data into the main runtime is interpreted as executing it according to that pattern. In that sense, unlike JavaScript or Perl, Lisp doesn't modify its own syntax or evaluate a string as if it were an expression. Lisp has no syntax; S-expressions are just a convention to encode linked lists, and anything that encodes linked lists can create programs isomorphic to those in S-expressions.

Therefore, if it's okay with you people, I'd like to add a section on the Lisp family because they treat self-modifying code in a unique way. Rajakhr (talk) 22:23, 23 January 2010 (UTC)

Interpreted languages generally have (or could have) self-modification arrangements though these are usually via some special form or modification of the disc file containing the statements. An "eval" statement is a step further away from self modification. But Snobol contains features that could be regarded as self-modification (as during pattern matches), and also contains its source statements as a text array open to manipulation. So it is not just Lisp. If you prepare some examples, explanations will be needed for non-Lispers. But would they introduce a new idea? Such as demonstrating some desirable action by routine use of self-modification? NickyMcLean (talk) 20:45, 25 January 2010 (UTC)
Well, Lisp can be compiled and still have self-modifying code; the idea is that Lisp doesn't really have code/syntax. A Lisp implementation is an engine that rewrites symbols in lists, and any way to specify lists will do in the end. Rajakhr (talk) 18:51, 25 February 2010 (UTC)
I wouldn't really say that Lisp uses self-modifying code in the traditional sense. Usually the code transformations happen at compile time (although run and compile times can be interleaved in Lisp), and actual runtime "code modification" happens the same way it would in a C program, e.g. by pointer reassignment. You can build new code by compiling S-expressions at runtime, but it's conceptually similar to (and sometimes implemented with) a C program externally compiling and linking in new code. TokenLander (talk) 20:20, 3 March 2010 (UTC)

Simplify maintenance?

Quote from the first line: "In computer science, self-modifying code is code that alters its own instructions while it is executing - usually to reduce the instruction path length and improve performance or simply to reduce otherwise repetitively similar code, thus simplifying maintenance."

How does self modifying code simplify maintenance? It seems like it actually makes maintenance harder since it is usually more difficult to figure out what the hell is going on. —Preceding unsigned comment added by 129.65.117.57 (talk) 23:39, 21 February 2010 (UTC)

The answer to your question is in the statement quoted above: 1. "usually to reduce instruction path length" and 2. "simply to reduce otherwise repetitively similar code".
If there are fewer instructions in a path, there is less code to verify for correctness (virtually or actually); if there are fewer repetitive lines of code, there are fewer instructions to check and/or to go wrong. 86.142.85.194 (talk) 06:14, 28 May 2013 (UTC)
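A high-level analogue of "reduce the instruction path length" may help here; the sketch below (Python, invented names) removes an initialised-flag check from the hot path by having the first call replace the function itself, where classic self-modifying machine code would overwrite the test in place.

```python
# High-level analogue of shortening the instruction path: the first call
# performs the one-time setup and then rebinds the function, so every later
# call skips the setup test entirely instead of re-checking a flag.
calls = {"setup": 0}

def get_value():
    global get_value
    calls["setup"] += 1                 # expensive one-time initialisation
    cached = 42
    def fast_get_value():
        return cached                   # no flag check on this path
    get_value = fast_get_value          # the "modification"
    return cached

a = get_value()   # slow path: runs setup, rewires itself
b = get_value()   # fast path: setup never runs again
```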

bullshit

The paragraph claiming that late binding "can be regarded as self-modifying code" is pure, unadulterated, farcical bullshit. It is completely at odds with any useful definition of self-modifying code; that is, if virtual functions are self-modifying, *everything is*. Not only is the paragraph wrong, but it's also completely unsupported by actual citations and references to literature. I will remove it shortly if nobody objects. —Preceding unsigned comment added by Quotemstr (talkcontribs) 20:35, 17 May 2010 (UTC)

Agreed. Oli Filth(talk|contribs) 21:23, 17 May 2010 (UTC)

King John question

The article fails to explain the relation between self-modifying code and "von Neumann" computer architecture. I think any hardware that can allow self-modifying code to run in at least one operating system is effectively a "modified von Neumann architecture" computer. Is that right? 82.131.210.163 (talk) 17:43, 24 April 2012 (UTC)

I don't see "von Neumann" anywhere in the article, but I think that the relationship is that the von Neumann architecture envisaged a computer with a single memory space comprising both data and instructions. Self-modifying code would require that a program be able to treat a piece of memory as data and then reach it as an instruction, and would not be possible on a non-von Neumann computer with, for example, separate code and data spaces. (Who is King John?) Spike-from-NH (talk) 00:26, 25 April 2012 (UTC)

Apple II copy-protection citation?

Does anyone know if there is a citation for the anecdote of using self-mod code as a copy protection technique on the Apple II? I remember reading about it somewhere 20 years ago when I was 'into' the Apple II in high school (back when "20 megabytes" was considered "really in-humanly humungously big" LOL) Jimw338 (talk) 21:35, 14 March 2013 (UTC)