Talk:Compare-and-swap

From Wikipedia, the free encyclopedia

Bug in referenced Shann et al. paper[edit]

A warning: the cited Shann et al. paper has a bug in its algorithm. I thought my own implementation was at fault until I saw the Groves paper, which attempts to prove the Shann algorithm correct, finds the bug, and presents a fix. I have added a note beside the link that will hopefully save people a debugging nightmare, but I leave the proper formatting of my note to those who know their way around here better than I do. ThVa (talk) 21:38, 24 March 2009 (UTC)

"Maurice Herlihy (1993) proved that fetch-and-add is inferior to compare-and-swap." This should not be the only reason for a merge. Although the FAA operation might be inferior in the context of lock-free algorithms, it is still a locking primitive and its usage should be shown (somewhere). --Jyke 11:13, 26 August 2005 (UTC)


Further, Fich, Hendler and Shavit [1] have shown that, in terms of memory usage, compare-and-swap can be weaker than fetch-and-add. Implementing fetch-and-add wait-free from compare-and-swap requires at least as many memory locations as threads. Implementing fetch-and-add wait-free from fetch-and-add requires only one. --Chris Purcell 12:02, 11 October 2005 (UTC)

LL/SC is better. CPUs with LL/SC have only the weak versions, but that is still good for a lot of algorithms, and the functionality can also be implemented in software if you have CAS. See a recent C. Evequoz paper. ThVa (talk) 21:38, 24 March 2009 (UTC)

Is this three way merge really necessary?[edit]

It looks like these are all distinct primitives for atomic synchronization. While they each have advantages and disadvantages, they are distinct. If there is really so much overlap, then we should merge all these articles into a single article on "Hardware Synchronization Primitives" or something like that. (But that article would probably be too big, and would get split into articles like these at some point...) Jamie 01:21, 14 November 2005 (UTC)

I would agree: there doesn't seem to be any real point to merging. Chris Purcell 11:35, 14 November 2005 (UTC)

I agree too: there doesn't seem to be any real point to merging, or a very good operating-system writer would have to rewrite it completely. TheCric 11:35, 14 February 2006 (UTC)

Which three primitives were being discussed to not merge? RJFJR (talk) 22:00, 3 November 2011 (UTC)

CAS useful on uniprocessor systems as well[edit]

The article mentions that CAS is only used on multiprocessor systems. It is in fact also used on uniprocessor systems, for user-space applications where disabling interrupts directly is not allowed and invoking the OS kernel is quite expensive. Catalin 13:54, 12 April 2007 (UTC)

I added a comment about this. --JeffreyYasskin (talk) 18:45, 16 December 2007 (UTC)

CAS is both an abstraction and an assembly instruction[edit]

Both C++0x [2] and Java [3] use CAS as the name for an atomic operation on data of various sizes, including larger than a single pointer. On the other hand, the assembly instruction is only reliably available on data sizes up to the pointer size. I'd like to point out the ambiguity in the text, but I'm not sure of the best wording to use. --JeffreyYasskin (talk) 18:45, 16 December 2007 (UTC)

You are wrong. Double-pointer-sized CAS is available on at least all x86-64 processors other than some very early AMD ones. If one is targeting x86-64, unless it is a very old processor, CMPXCHG16B is a given. ThVa (talk) 21:32, 24 March 2009 (UTC)

C code[edit]

Am I mistaken or is register a reserved keyword in C? Will that code even compile? .froth. (talk) 00:06, 17 June 2010 (UTC)

Is the C-code really just a pseudocode representation using c syntax? RJFJR (talk) 14:29, 11 November 2011 (UTC)

The C code is misleading. A newcomer who doesn't understand atomic operations will think that they can write CAS in C. You can't: there's no atomicity there. Of course, someone who has worked with this before will know this and point out where the article says so, but last I knew, assembly programmers aren't the target audience of Wikipedia. 166.250.45.126 (talk) 17:40, 3 April 2012 (UTC)

Reworded intro[edit]

In an attempt to provide more context for people new to the CAS instruction, I reworded the intro paragraph. The first sentence now reads, "In computer science, compare-and-swap (CAS) is an atomic CPU instruction used in multithreading to achieve synchronization." I think that links to all the important contextual areas? --Chris Purcell (talk) 11:02, 23 December 2011 (UTC)

Implementation in C[edit]

Should we just delete the whole ==Implementation in C== section of the article? Is it really the best way to give a pseudo code description of the CAS operation? See the section above called 'C code' which discusses some of this. RJFJR (talk) 18:01, 13 July 2012 (UTC)

One common approach in the literature is to surround the code in an "atomically" block construction, which makes it clear something non-standard is happening. I think having _some_ form of code to present the ideas is a good thing. --Chris Purcell (talk) 16:49, 16 July 2012 (UTC)
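For concreteness, the "atomically" block convention described above usually looks something like this (pseudocode, not compilable C, which is the point of the convention):

```
function cas(p, expected, new) returns boolean
    atomically {
        if *p = expected then
            *p := new
            return true
        else
            return false
    }
```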