Talk:Garbage collection (computer science)/Archive 1


Copying collection

"...determine what data objects in a program cannot possibly be needed; throw them away."

Is this correct? My understanding of copying collection is that nothing is actually thrown away, but rather that reachable objects are explicitly kept. If that is the case, then wouldn't that phrasing be a bit misleading? --maw

The storage used by the objects that are collected is reclaimed by some means, presumably to be used again for the creation of fresh objects. "Thrown away" is too imprecise; will edit. --drj
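To make drj's point concrete: a copying collector never frees dead objects one at a time; it copies everything reachable into a fresh space and then reuses the old space wholesale. A minimal C# sketch of the idea (the Node type and names here are illustrative, not from any real collector):
  using System.Collections.Generic;
  
  class Node { public Node Left, Right; }
  
  static class CopyingGcSketch {
      // Copy everything reachable from the roots into a fresh "to-space".
      public static List<Node> Collect(IEnumerable<Node> roots) {
          var toSpace = new List<Node>();
          var forward = new Dictionary<Node, Node>(); // forwarding pointers
          Node Copy(Node n) {
              if (n == null) return null;
              if (forward.TryGetValue(n, out var copied)) return copied;
              var c = new Node();
              forward[n] = c;   // record the forwarding address first,
              toSpace.Add(c);   // so shared nodes and cycles copy only once
              c.Left = Copy(n.Left);
              c.Right = Copy(n.Right);
              return c;
          }
          foreach (var r in roots) Copy(r);
          return toSpace; // garbage is never visited; "from-space" is
                          // reclaimed in one step, not discarded per object
      }
  }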


GC-collected languages and not

"It is a bit sloppy to talk of a garbage collected language just as it is slightly incorrect to talk of a compiled language. Being garbage collected is a property of an implementation of a language."

Surely this is incorrect for languages where the semantics specify no way of recycling cells? Certainly, most languages that use garbage collection (in fact, all other than those that can be conservatively collected through third-party tools) have no explicit free operation, and this is part of the design of the language more than an implementation detail. --Andy C King


Pseudo-code

What on Earth is the example pseudocode about? It's presumably an attempt to show how one can create a memory leak in a GC'd system, but I don't understand it at all. Any comments? —Cadr

Try translating the code into C or something you know. As it is now, it is kind of Bourne shell script code mixed with some C. --Falcon 16:48, Mar 25, 2004 (UTC)
I agree that this example seems quite obtuse and requires more explanation (to say nothing of copy-editing). Also, left to right assignment is unconventional at best. I'll fix this up sometime, if someone doesn't beat me to it. --Deco 21:56, 25 Mar 2004 (UTC)
Copy-editing? I wrote it myself! Unless, of course, you are using the term to mean something else. --Falcon 00:12, Mar 26, 2004 (UTC)
I'm not attacking you, but there are several language errors in the text, some of which affect comprehension or flow. Here's a small listing; I hope you find it helpful.
"...a very poor way of programming; especially when... a garbage collector is"
The semicolon should be a comma, because the following part is a dependent clause.
"There is some proof to this claim."
This should be "some evidence for this claim" or something to that effect.
..."let a program be created which runs on an infinite loop, in a..."
This is rather stilted; better would be, "Consider the following program, which runs indefinitely, written in a..."
"The program above multiplies the number of bacteria by three, losing the reference to all, and keeps a count of them."
The program repeatedly multiplies the number of bacteria by three; your text seems to imply it only does this once. "losing the reference to all" doesn't make sense; maybe you meant "to all of them". In "keeps a count of them", it's not clear what "them" is referring to.
"This program ... continiously ... which means that much garbage remains after the garbage collector is finished, thus wasted memory."
"Continuously" is misspelled; moreover, you mean continually, or repeatedly. The word "thus" should be followed by a complete sentence, not just a noun phrase, such as "thus memory is wasted". This is also a run-on, so a better fix would be "and so memory is wasted".
I'd make these edits myself, but I'm planning to rewrite the whole thing to make it simpler. I hope you're not offended. --Deco 05:28, 26 Mar 2004 (UTC)
Okay, after closer examination of this code, the example actually appears to be invalid. The references are lost, and so the objects will eventually be collected; it may take a while, and so the number will certainly increase, but it will suddenly decrease as soon as the GC is invoked. I think this might stem from a misunderstanding of the author, who believed that GC is always prompt; really, as far as I know, this is only true with acyclic reference counting, and this is a limitation of GC, but not as stated. --Deco 05:41, 26 Mar 2004 (UTC)
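To illustrate Deco's point, here is a minimal C# sketch of such a loop, assuming an ordinary tracing collector (names are illustrative): the reported heap size rises and then drops whenever a collection runs, rather than growing without bound.
  class Bacteria { byte[] bulk = new byte[1000]; }
  
  class LossDemo {
      public static void Main() {
          for (long i = 0; ; i++) {
              var b = new Bacteria(); // reference is lost on the next iteration
              if (i % 1000000 == 0)
                  System.Console.WriteLine(
                      System.GC.GetTotalMemory(false)); // rises, then drops
          }                                             // whenever a GC runs
      }
  }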
I'm creating a new section called "Disadvantages of Tracing Garbage Collectors", which will incorporate Falcon's example properly. I hope everyone is happy with this arrangement. --Deco 05:52, 26 Mar 2004 (UTC)


Other comments

a "moving GC" is the same as a "copying GC" - right? Can we pick one consistent term, at least for this article? --DavidCary 04:23, 2 Apr 2004 (UTC)

There is a distinction. Copying GC is a moving GC, but not all moving GCs are copying GCs. "Moving", although probably not a standard term, only implies that it does its job partly by relocating objects. --Deco 22:09, 2 Apr 2004 (UTC)
"There is a distinction"? I still don't understand the distinction. Please make that distinction in the article. Thank you. --DavidCary 02:20, 21 May 2004 (UTC)
Frankly, the article makes it quite clear what each of these things is, and doesn't even suggest that they're the same. I don't see why there's a need for clarification, unless this is really a common source of confusion. I have reworded the definition of copying GC to be a bit clearer, however. --Deco 22:19, 21 May 2004 (UTC)
I must be blind then. Is there such a thing as a "moving GC" that is not a "copying GC"? Would it be OK to mention this thing in the article? --DavidCary 21:10, 22 May 2004 (UTC)
According to The Memory Management Reference, moving garbage collection applies "to copying collectors and to mark-compact collectors [and] may also refer to replicating collectors." The distinction never made much sense to me (in the use of words), but there are clearly differences of technique in how references are fixed, and in whether something (such as a broken heart) remains in the old location. --Bovlb 21:59, 2004 May 23 (UTC)


Grammar

The latest anonymous IP editor seems to have misunderstood the intent of the sentence they changed. I'll see if I can reword it better. --Deco 05:20, 7 Sep 2004 (UTC)


Odd title

Am I the only one who finds the article title Automatic garbage collection odd? I certainly don't hear this phrase used too much. Perhaps Garbage collection (computer science) would be more appropriate. --Deco 23:30, 20 Sep 2004 (UTC)

It sounds odd to me as well. While the term automatic garbage collection is well-understood, garbage collection almost always means the automatic kind. I mean, I have never heard of non-automatic garbage collection, even if there is such a thing. I am rather curious why automatic was put there in the first place. --Taku 00:53, Sep 21, 2004 (UTC)
That's clear enough: disambiguation with the process of actually collecting real garbage (see Garbage collection). I have used this term in this context on occasion. That's why I suggested the parenthesized category. Hopefully we can get a small consensus among this page's editors on this. --Deco 07:37, 21 Sep 2004 (UTC)
I went ahead and changed the article name. I think this is better. I set a 'bot on skipping redirects and having everything point straight to the new name. I apologise if it was a bit overzealous, and I hope everyone likes the new name. --Deco 00:25, 3 Oct 2004 (UTC)

BASIC's GCing

Does BASIC have garbage collection or does Visual Basic have garbage collection? If it's the latter, I think it should be emphasized in the article. --Silver hr 05:32, 21 Jun 2005 (UTC)

Garbage collected programming languages

TakuyaMurata recently removed the list of GCed languages. Rather than dispute that edit, I am instead proposing the creation of a new sub-category of Category:Programming languages called something like "Garbage collected programming languages" or "Programming languages that rely on automatic memory management". This category would then consist of the list removed in this edit, and would be referenced under "See also" as Category:Memory management software is now. Comments? I'm happy to do this, but I'd like some feedback on the name first because it's a pain to rename categories. --Bovlb 01:29:30, 2005-08-25 (UTC)

I agree with the caveat that the category page should explain the technicality that for many of these languages, GC is not part of the language per se but part of the most widespread implementations of that language. --Deco 04:33, 25 August 2005 (UTC)
Hmm. For most or all of the languages listed, GC (or, to be precise, automatic memory management) is essentially required by the language definition, there being no manual way to free memory. Can you give an example of a language where it's not required, but is commonly provided? I would prefer to put that in a different category. --Bovlb 04:54:08, 2005-08-25 (UTC)
Managed C++ and its progeny in Microsoft .NET let you allocate both (garbage-collected) managed types and native C++ objects. It's a "glue" language that is not intended for writing large amounts of code, but maybe it's worth noting for this reason. --Mike Lin 19:33, 5 February 2006 (UTC)

Collection cycles

A collection cycle can occur at any time and can require an arbitrarily long amount of time to complete.

A collection cycle will only occur at a point where the program allocates dynamic memory, or when the program explicitly invokes garbage collection. It will not occur "at any time".

A collection cycle will not take an "arbitrarily long time". If we ignore environmental issues (e.g. VM paging, time-slicing), the time taken by a collection cycle is typically a linear function of 1) the number of live objects, 2) the number of bytes of live objects, 3) the number of pointers in live objects, and 4) the number of dead objects. All of these numbers are bounded by the program's heap size. (For a copying or generational collector, you can ignore 4. For a non-copying, non-compacting collector you can ignore 3). --Anon
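A toy allocator sketch in C# may make anon's first point clearer (purely illustrative; no real runtime is written this way): the only place a cycle can begin is inside the allocation routine.
  class ToyHeap {
      const long Threshold = 1 << 20; // collect once this much is in use
      long used;
  
      public byte[] Allocate(int size) {
          if (used + size > Threshold)
              used = Collect();   // a cycle starts here, and only here
          used += size;
          return new byte[size];
      }
  
      long Collect() {
          // Tracing work is roughly linear in live objects, live bytes,
          // and pointers within them, all bounded by the heap size, so
          // the pause is bounded too, as anon says.
          return 0; // placeholder: bytes found still live
      }
  }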

I should hope a typical garbage collector does not collect only when forced to or explicitly requested to. Process images would grow very large. This is only really acceptable if it's a machine-wide garbage collector. "Arbitrarily long time" isn't exactly true (indeed, every terminating program's runtime is trivially bounded by the number of machine states); more accurate might be "can cause a noticeable pause that interrupts time-critical events". --Deco 03:01, 9 Feb 2005 (UTC)

Process images only grow while allocating memory. Therefore it is in fact sufficient to perform collection cycles when a program allocates memory, as anon said. The process image won't grow arbitrarily large. However, from a programmer's point of view, GC may still happen at virtually "arbitrary" times, since (a) allocation is such a frequently used operation, and (b) even while a particular piece of code doesn't make any allocations, another thread might allocate memory and cause a GC cycle. -- oefe

What if an application allocates objects and then executes for some long period of time without allocating anything more? Some of those objects allocated initially may no longer be in use, and yet the memory is not being freed, because no new objects have been allocated in the meantime and no explicit call has been made to actually free the now unused memory. 206.190.139.254 20:50, 27 February 2006 (UTC)

It usually doesn't matter. If you're on a virtual memory system, then provided you don't exhaust virtual memory, the extra memory can get swapped out and performance is maintained (although the GC may be painfully slow if/when it finally occurs). If the GC is triggered regularly, then you're only marginally increasing the time/space footprint of the application. In addition, in some, but not all, systems the applications share a memory space, so the other applications can trigger GC and tidy it up. If you don't have multiple applications running then it doesn't matter. WolfKeeper 17:02, 28 February 2006 (UTC)

This is an inherent property of tracing GC, so I'm not quite sure what you're getting at. You can argue that the problem you describe exists unless you GC every time you overwrite a reference, at which point reference counting would be a far better solution. Once you back away from that, it's all a matter of degrees :-) --Mike Lin 05:22, 28 February 2006 (UTC)

It's not inherent. Some real-time garbage collectors run periodically, or you can sometimes (depending on the platform) trigger them to do that with a loop. WolfKeeper 05:55, 28 February 2006 (UTC)

It's inherent that objects will not be deallocated as soon as they are dereferenced under tracing GC. --Mike Lin 15:32, 28 February 2006 (UTC)
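For contrast, a hedged C# sketch of the reference-counting discipline, where reclamation happens at the instant the last reference is released (simulated here via IDisposable; C# itself does not reference-count, so this is illustrative only):
  class RefCounted<T> where T : class, System.IDisposable {
      readonly T value;
      int count = 1; // one reference exists at construction
      public RefCounted(T v) { value = v; }
      public void Retain() { count++; }
      public void Release() {
          if (--count == 0)
              value.Dispose(); // reclaimed immediately, not at some
      }                        // later collection cycle
  }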

That's true, but not necessarily important. The fact that the objects are not being dereferenced doesn't usually matter unless something else needs the memory the unreferenced objects occupy; if something does want it, that should normally/ideally trigger a garbage collection (although it may not in practice, if the application that needs it doesn't share the memory space; but if you're running on that sort of system, that's a good reason to run the GC sometimes even if memory isn't full.) WolfKeeper 17:02, 28 February 2006 (UTC)

My point was really just that I'd rather allocate and deallocate memory myself than rely on some background process to do it for me at some arbitrary time (if I'm explicitly calling the GC's 'free memory' function, why not just manage the memory myself?).  :)

Because you only call the GC from one part in the program, rather than thousands? Anyway, I've got a bead on you now, you're a masochist. Please stay away from me :-) WolfKeeper 18:31, 1 March 2006 (UTC)

However, ultimately, the engineer must look at the advantages and disadvantages of any particular approach and decide for his genderless self. In particular, relying on an automatic GC would make development faster, and also perhaps less prone to memory leaks, if you are unable to figure out how to deallocate things on your own. 216.239.183.254 18:10, 1 March 2006 (UTC)

It's not a question of 'being able to', it's a question of having to. It's general experience that code written under a GC is maybe 30% shorter, and more reliable as well. In the past I wrote C++ code that automatically tidied up after itself, and those parts of the system that used it had very few leaks indeed (apart from one 'stellar' engineer whose code used the provided C compatibility interfaces incorrectly and reliably leaked; still, even this is much better than unreliably leaking :-).) WolfKeeper 18:31, 1 March 2006 (UTC)

Performance information problems

In the performance paragraph, it is stated that performance is "many" times higher for the C# case, allocating a lot of (small) objects, but there are no real-world results or any indication of how to generate any. Furthermore, the reasoning is flawed: heap managers for non-GC languages will also allocate in a linear fashion, and may have a linked list of free items for small sizes, so O(1). Added to that, this may cause memory to be used from CPU cache, which is much faster than cold memory. Also, the common (desktop) experience of programs holding a lot of allocated but unused memory shouldn't be neglected. --Anon

I ran the prescribed experiment using MS Visual Studio 2005, with the C# version targeting .NET 2.0 and the C++ version compiling to native code, and default compiler options. (Please note this constitutes original research and so is ineligible for inclusion in the main article according to WP policy.)
$ time ./AllocTestSharp.exe
DING!

real    0m14.791s
user    0m0.015s
sys     0m0.015s

$ time ./AllocTest++.exe
DING!

real    4m34.446s
user    0m0.031s
sys     0m0.000s
So the C++ version runs 18 times slower. Maybe there is something wrong with the Microsoft C++ allocator (this would explain a lot). --Mike Lin 19:22, 5 February 2006
I am a somewhat experienced C# programmer, and looking at the C# source code I would not expect any A objects to be created at all. The statement "A a = new A();" and the surrounding loop are very likely optimized away altogether, because neither i nor any A instance is referenced after the loop block, so they can be removed without affecting the rest of the program logic. I suggest using a profiler to see whether any A instances are created at all. (Beware of the observer effect, though.) --Cetin Sert 2007-11-01
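If the loop really can be optimized away, one way to rule that out is to make each allocation observable, along these lines (an illustrative variant of the test, not the original; re-run the timings before drawing any conclusions):
  class A {
      public int X;
      public A(int i) { X = i; }
  }
  
  class Example {
      public static void Main() {
          long sum = 0;
          for (int i = 0; i < 1000000000; i++) {
              A a = new A(i);
              sum += a.X;   // each object's state now feeds the output,
          }                 // so the allocations can't be discarded
          System.Console.WriteLine("DING! " + sum);
      }
  }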
I agree with Anon, the explanation doesn't make sense. This sample code constantly allocates and frees a single block, so any reasonable memory manager is going to find the same block immediately and just return that. Microsoft's implementation may be broken, I don't know. But the proffered test doesn't measure what it's claiming to measure and is therefore irrelevant. And the claim advanced about superior performance is not factual and not NPOV; it's debated. See, for example, http://it.slashdot.org/comments.pl?sid=164766&cid=13751468
Let me illustrate my point about the test not testing what it claims. I reimplemented it in C (make A a struct type, and a static function for the constructor). Thus I am supposedly comparing the performance of C's malloc to C++'s new. I get the following results on a 3 GHz x86 processor (times are extrapolated from run counts that are 1/100th that in the original sample code):
language   optimized?   run in IDE?   approx time (s)
C++        yes          no            300
C          yes          no            400
C++        yes          yes           2400
C          yes          yes           2500
C++        no           no            300
C          no           no            700
C++        no           yes           2600
C          no           yes           3200
You can see there's enormous variation here due to various other parameters changing, which means any comparisons between different systems are going to be doubtful in the first place; there's not necessarily any such thing as "the same settings"; e.g. perhaps C# optimizes much more by default. The non-optimized C vs. C++ were off by a factor of 2, which is difficult to explain. (The slowdown from running in the IDE is, I believe, because when running in the IDE a different memory allocator is used which initializes allocated memory to a known value.) Nothings 03:31, 10 March 2006 (UTC)
Clearly, the test is inherently flawed as a controlled comparison. And if the speedup was a factor of 2 or 3, I would agree; one could make a strong argument that externalities completely forbid drawing any meaningful conclusions from such an observation. I don't think the same is true with a handy order-of-magnitude speedup; given what the program is doing, I think you just have to blame this on the memory allocator, although it could just happen to be a crappy memory allocator for whatever reason.
Anyway, this debate aside, it does seem like the language in this section of the article should be revised, since the issues that one can raise with this example are a distraction from what is generally a valid and relevant point. Can someone propose a better illustrative example? Mike Lin 04:03, 10 March 2006 (UTC)
First of all, I actively disbelieve it's a valid and relevant point. There may be some cases where the memory management wins, but for most programs, using a good garbage collector in one and a good memory manager in the other, I don't think it's true. There have been lots of citations of 10% of the time in programs spent on garbage collection, and I've never seen the same claim about malloc/free. So personally I'd just kill the whole thing; it's a controversial claim, not a factual one.
Second, though, I'm much more concerned with seeing the bogus test deleted. Now, maybe there is a real performance difference here due to memory management. But given that the constructor has bogus code that any optimizer would reduce, it does not seem like a careful experiment to me. Perhaps C# always optimizes (somewhat), and the C++ was compiled non-optimized? (Note you said "default compiler options"; most IDEs/compilers I've used default to non-optimized.) Maybe the C# compiler does a reachability analysis and actually allocates the object on the stack. There are a lot of possibilities. Let me suggest a slightly altered comparison. I don't actually speak C#, so you'll have to tweak this to match. I hope you agree that this is not attempting to test anything the original test did not do, just make it much more unlikely for a compiler to optimize away the memory management.
C#
  class A {
      private int x, y;
      public A(int i) { x = i; y = i; }
  }
  
  class Example {
      public static void Main() {
          A[] array = new A[16];
          for (int i = 0; i < 16; i++)
              array[i] = new A(i);
          for (int i = 0; i < 1000000000; i++) {
              array[i & 15] = new A(i);
          }
          System.Console.WriteLine("DING!");
      }
  }
C++
  #include <iostream>
  
  class A {
      int x, y;
  public:
      A(int i) { x = i; y = i; }
  };
  
  int main() {
      A *array[16];
      for (int i = 0; i < 16; i++)
          array[i] = new A(i);
      for (int i = 0; i < 1000000000; i++) {
          delete array[i & 15];
          array[i & 15] = new A(i);
      }
      std::cout << "DING!" << std::endl;
  }
If somebody can fix the C# as needed and compare those (benchmarking an optimized build outside the IDE), I'd be much more willing to believe that the results are evidence of MSVC's crappy malloc implementation, rather than just artifacts of other optimizations. Of course, even so, this is still a best (unrealistic) case for GC, since it has to follow almost no data to clean things up. Keeping a few hundred thousand instead of 16 will change it further. (Bottom line: synthetic benchmarks are irrelevant.) Nothings 11:31, 9 April 2006 (UTC)
It's only moderate hyperbole to claim that any program that makes a lot of mostly small (object-sized) allocations/frees will spend less time in heap management under a modern GC than malloc/free. But that's assuming a literal translation of the algorithm. Much of the subjectivity in a practical comparison arises because good C++ (as an exemplar language) programmers tend to "instinctively" avoid rapid, small heap allocations and frees. The programming style changes when object allocation is cheap -- so the real reason that direct comparisons are not practically relevant is that you wouldn't write the same program in C# that you would in C++.
That's why the example is meant for conceptual illustration, not as a benchmark or test. Of course, no single, short example can prove anything. So if you feel strongly that the example is more misleading than illustrative, I don't object to blowing the whole section away and replacing it with a vanilla "sometimes it's better and sometimes it's worse" spiel. It would just be a lot less interesting :-) Mike Lin 18:34, 9 April 2006 (UTC)
I probably will just kill it, then. I was hesitant to do so since you seemed to think there was some substance to the test given the 10-to-1 performance ratio. Actually, I probably could have just fixed things by considering the 'heroic' comment to be NPOV and cleaning that section up, but I think I'll be happier killing the whole thing. It seems excessively detailed in context, anyway. (Much better would be a link off-wikipedia to something demonstrating this, or attempting to.) Nothings 09:20, 10 April 2006 (UTC)
No, I find this makes a valid point. Modern languages really do allocate objects and GC very quickly; much more quickly than malloc/free can, everything else being equal. Malloc in particular has to do a search for object space through the heap, and free is slower too. Malloc has to minimise fragmentation, whereas GC based languages can compact as needed. That makes a heck of a difference. I would expect C# to be much faster in this case. WolfKeeper 03:22, 18 April 2006 (UTC)
(Note to other editors: I deleted the code comparison discussed above and the "can actually be faster (without heroic efforts)" section and replaced it with detailed description of the specific performance issues.)
What you claim isn't true. All else being equal, mallocs are extremely fast in practice. With a well-engineered malloc (e.g. dlmalloc, the source of which is available online), most "searches" for a block pull the block off of a free list in O(1) time. All frees free data in O(1) time (either putting them on a free list, or using a tag-boundary scheme to merge adjacent free blocks in O(1) time). Larger allocations require a search, but with first-fit it's rarely that long and certainly wouldn't happen in the sample code. Also, most garbage-collected languages require some sort of malloc-like fallback for very large allocations. Certainly in the example code case, any reasonable malloc would just keep allocating and freeing the exact same block, which should actually perform better for caching reasons than fresh allocations from an advancing pointer in a young generation. (My current guess about the C++ example under MSVC is that it's being screwed up by the fact that the C library allocator asks for larger underlying data blocks from the system heap allocator and sub-allocates from them, and also frees those blocks back to the system if they're unused, so in this trivial test with no other allocations outstanding every new/delete pair triggers the system heapalloc/heapfree calls, which are high overhead (probably requiring a switch to kernel mode). But I don't actually know.)
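To picture the O(1) free-list behaviour described above, here is a minimal C# sketch (array indices stand in for block addresses; a real malloc such as dlmalloc is far more involved):
  using System.Collections.Generic;
  
  class FixedSizePool {
      readonly Stack<int> free = new Stack<int>(); // per-size free list
      int next;                                    // bump for fresh blocks
  
      public int Alloc() {
          return free.Count > 0 ? free.Pop() : next++; // O(1) either way
      }
  
      public void Free(int block) {
          free.Push(block); // O(1): the freed block heads the list
      }
  }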
I understand that lots of people have different opinions on this subject, but WP:V requires more than an opinion or relying on common sense. The code test in particular is seriously flawed. Unless you can produce citations that (a) the code test is valid and that (b) the descriptive section is valid, those sections should be removed. Nothings 06:52, 18 April 2006 (UTC)
I'm generally supportive of the changes Nothings made, although there's probably too much detail both before and after. What would be most helpful would be a general description of the types of allocation workloads under which modern GC is likely to have advantages (and disadvantages), and why. Namely, GC will do better with rapid turnover of lots of small objects due to compaction & contiguous allocation. The example was intended to give this flavor, but obviously any short example can be shot down. --Mike Lin 09:05, 18 April 2006 (UTC)
Øøh! (The renowned Danish kicked-in-the-stomach sound). Has anyone tried disassembling the C# and the C++ programs to compare? Otherwise: use ONE language allowing generational garb as well as manual ungarbed allocation. (Said rursus siderespector the very very anonymous one - lazy fewtime member of the Swedish wikipedia editorhood). (unsigned comment by 83.250.58.54 on 14 April 2006 18:38)

Running on Linux 2.6.15 AMD64, GCC C++ 4.0.3 vs Mono C# 1.1.15, the C++ version is 44% FASTER! I assumed GC is slower, so having a concrete example of increased speed was very interesting, except that it isn't true.

Running on SunOS 5.9 2x1.5GHz SPARC9 GCC C++ 3.2.3 vs the same with the delete removed and using Boehm's Garbage Collector (http://www.hpl.hp.com/personal/Hans_Boehm/gc/), the manual version ran 2.02x SLOWER, reinforcing the original point that in this simple case GC is faster. Gene.thomas 04:04, 22 June 2006 (UTC)

There's a research paper on this subject in the case of Java: http://portal.acm.org/citation.cfm?id=1103845.1094836 --NotQuiteEXPComplete 22:47, 17 August 2006 (UTC)

I have a great idea...rather than playing scientist and constructing trite, easily biased experiments, why don't you look for some existing research papers in this area? Look, there's one in the comment above!--81.86.106.14 11:19, 12 September 2006 (UTC)

There's nothing here that doesn't make sense if you know what's involved in each situation. To break it down, I'll show you exactly why the C# implementation is 18 times faster than the C++ implementation in this case, as well as why it will be even faster in more realistic cases. Firstly, for the C++ case, every time it allocates a new object it must:

1. Look at the free list and find the first available slot.
2. Compare and determine whether the slot is greater than or equal to the required size for the object.
3. If not, check the next freelist entry.
4. If so, allocate the object and modify the freelist to reflect the new (and possibly fragmented) memory situation.

#1, #2 and #4 are fairly complex operations, since in most cases #1 and #2 will have to be run many times, and #4 often requires new entries in the freelist if the object was stuck into already fragmented memory. In a compacting GC situation such as with C#, when allocating an object it must:

1. Allocate the object at the alloc pointer.
2. Update the pointer to the end of that object.

Neither of these is at all complex, and the scheme skips MANY of the required steps that C++ must go through. Now, in our little "create a bajillion tiny objects" program, you actually get a fairly good case for manual memory management performance. Every object is released immediately, so there's no fragmentation or significant searching of the freelist, and removing an object also doesn't require much modification of the freelist; however, the obscene simplicity of alloc in C# vastly outweighs the penalty for the occasional collection, whereas in C++ both alloc and dealloc are rather expensive operations (only dealloc is expensive in C#). Again, keep in mind that said program provides nearly best-case performance for the C++ system; in a real program alloc/dealloc is going to create a great deal of fragmentation (unless you go to a LOT of trouble to alloc all ephemerals last), and then the cost of alloc and dealloc grows dramatically as the complexity of the freelist structure increases. With a moving GC like C#'s, the performance is more or less dependent on how often it's set to collect, and remains nearly constant in comparison. In fact, the worst case for GC is when there ISN'T much garbage to collect. AeoniosHaplo 04:59, 25 May 2007 (UTC)