Talk:Comparison of version control software

From Wikipedia, the free encyclopedia


"Basic commands" table is too wide[edit]

"Basic commands" table is too wide, even on a wide desktop screen. Couldn't/shouldn't it be split into two tables with fewer columns?

Modest proposal[edit]

I was bold and included info about Rational Team Concert, which was sorely missing, in the mix. While doing so, I was plagued by a minor annoyance: is the so-called "Concurrency model" column really necessary anymore, if it ever was? Other than RCS, SCCS and mainframe-based version control tools (which are also missing from the list), I can't think of any modern version control tool that still uses a "lock" model. Can we please get rid of a useless classification? afc (talk) 18:16, 21 December 2009 (UTC)

Also, could someone who knows AccuRev review the "Platforms supported" entry for this product? I highly suspect it does not in fact support any "Java platform" (say, does the server run on zAAPs?) —Preceding unsigned comment added by Afc (talkcontribs) 19:34, 21 December 2009 (UTC)

Notable users is biased[edit]

I believe that the notable users column is a bit biased towards commercial software packages, since many/most open source projects do not have the resources to go around getting people to sign rights to allow them to list the names of their users' companies. The editor coming along and listing all these as not suitable is just inappropriate. Everyone reading this page knows that pretty much any large organisation will have a mix of CVS, CVSNT and SVN in use somewhere in the organisation.

Arthur 07:14, 4 December 2007 (UTC)

I agree -- there is also a bias in terms of time. For example, large companies will often be much slower to adopt new version control systems, even if they are better. However, by their very nature, large, long-established companies make things "sound better", because they have good branding etc.

— Preceding unsigned comment added by (talk) 11:25, 29 September 2013 (UTC)


I guess many improvements could be applied here:

1) Add other features like in the following links: berlios comparison, bitkeeper Vs subversion and Wine Hacking Tips

Also (Thoglette 07:12, 6 November 2006 (UTC))

  • support for versioning of directories
  • support for distributed development
  • branching mechanism
  • configuration management

More than happy to add, for that which I understand.

2) Add other RCSes like Rational ClearCase and Visual SourceSafe (see below, Open Source)

3) And I'd like to see if any of those can convert character sets on-the-fly (not file names, which CVSNT does, but file contents). None of the presented features are of any interest to me ;) --Lam 09:19, 21 June 2006 (UTC)

OK, I'll include those on the list of things to do. I have seen the comparisons you mention, and I didn't include all the categories because it's hard to extend the comparison to many different systems, and some are a matter of opinion. (The BerliOS comparison uses words like "excellent" and "poor" for some categories, which I don't think would be tenable in a neutral comparison.)
It would be even better if you or someone else could help with the changes. Wmahan. 17:22, 21 June 2006 (UTC)

4) (pls 2007-05-27) I'm very interested in support for international development. In particular, is there full diff/merge support for files in a) UTF-8, b) UTF-16LE or BE, or c) support for a designated character set that differs from the system character set. It seems that despite the growing amount of international development and adoption of various Unicode forms, very few revision control systems support Unicode files.

5) Spinky Sam (talk) 17:10, 4 March 2008 (UTC) In support of point (1) above, I think file rename doesn't really cover it as far as full support for refactoring. What we want is a more comprehensive column title, such as "file or directory moves and renames" (obviously with ability to retrieve full directory/file structure for a previous release) - the trick is to check for each of these whether they include the feature!

6) Ability to delete from history can be an important feature for legal reasons; pls add to comparison chart. —Preceding unsigned comment added by (talk) 13:39, 7 June 2008 (UTC)
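On point 4 above (Unicode support), the practical issue is that a byte-oriented diff of UTF-16 files sees interleaved NUL bytes rather than text, while diffing after decoding works as expected. A minimal Python sketch (the file contents here are invented for illustration):

```python
import difflib

old = "colour = red\nsize = 10\n"
new = "colour = blue\nsize = 10\n"

# Encode as UTF-16LE, as a Windows editor might save the files.
old_bytes = old.encode("utf-16-le")
new_bytes = new.encode("utf-16-le")

# A byte-oriented tool sees interleaved NUL bytes, not readable lines.
assert b"\x00" in old_bytes

# Decoding to text first restores ordinary line-oriented diffing.
diff = list(difflib.unified_diff(
    old_bytes.decode("utf-16-le").splitlines(keepends=True),
    new_bytes.decode("utf-16-le").splitlines(keepends=True),
    fromfile="a/config", tofile="b/config",
))
changed = [line for line in diff if line.startswith(("-colour", "+colour"))]
```

A VCS that treats every file as an opaque byte stream falls back to the first behaviour for UTF-16 content, which is why the designated-character-set support asked about above matters.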

Adobe's Version Cue/Bridge[edit]

Adobe pitches Version Cue as Version Control. Anyone have experience with that and willing to add to the charts? It's on our option list but I really don't know much about it yet.

As a start, here's my best-guess at the variables:

  • Software: Version Cue / Bridge
  • Maintainer: Adobe
  • Development status: Not Sure (Is Version Cue still around? Is it now called Bridge?)
  • Repository model: ?
  • Concurrency model: ?
  • License: Commercial
  • Platforms supported: ?
  • Cost: ?


I made some changes in the Concurrency Lock vs. Merge column:

  • ClearCase uses the merge model by default, but many sites set a trigger to force locking on checkout which allows ClearCase to use a lock model. However, concurrency really doesn't apply to most ClearCase sites because most sites (and all sites that use ClearCase UCM), give each developer their own branch for their work. Concurrency issues only appear when the developer is ready to deliver their work to the main branch, and code is merged into the main branch.
  • Perforce and Subversion both allow for attributes to be set to force files to use the concurrency locking model. This is heavily discouraged, but some sites do insist on using concurrency locking. Concurrency locking is really only supposed to be used on files that cannot be merged, like icons, JPG files, and MS-Word documents.


I know a bit about ClearCase. It seems to be the VCS that is used at many large corporations (HP, for example, uses it globally. IBM probably uses it in places too. Also, I know of several medium to large defense contractors that use ClearCase). I'll list the info that I know — I don't know several of the fields so I figure I'll post the info here rather than leave lots of question marks in the article.
  • General Information
    • Maintained by IBM. Not sure if it is actively developed or just maintained.
ClearCase is indeed actively developed, as can be seen by a quick glance at afc (talk) 18:46, 3 October 2008 (UTC)

    • The repository is client/server.
This seems incorrect in the list -- ClearCase is not distributed -- you can't access version history off-line. All you can do is hijack files and merge back when on-line. —Preceding unsigned comment added by (talk) 23:25, 5 April 2008 (UTC)
Indeed -- fixed. --Drizzd (talk) 09:25, 14 September 2008 (UTC)
  • Technical Information
    • Not sure what language it is written in.
    • The history model is snapshot. What does "patch" mean in this context?
    • The revision IDs are namespace.
    • Unsure what the repo size is bounded by.
    • ClearCase dynamic views piggyback on top of SMB (for Windows) or NFS (for Unix/Linux). The ClearCase Web Client and ClearCase Remote Client use HTTP, but don't support dynamic views.
  • Features
    • No on atomic commits.
This is debatable, see observations below. afc (talk) 18:46, 3 October 2008 (UTC)
    • File/directory renames supported.
    • Unsure on symbolic links. There are special ClearCase symlinks available, but I'm not sure if symlinks on a *nix system can be versioned.
    • Tons of hooks (triggers in ClearCase lingo) can be set up -- but only in Perl.
ClearCase triggers (or rather, trigger actions) can be written in any kind of language, scripted or compiled. You can even run a different program or script depending on whether the trigger is fired from UN*X or from Windows. afc (talk) 18:46, 3 October 2008 (UTC)
    • I'm not sure if it supports signed revisions.
This is a feature available when you use it with ClearQuest for change tracking. afc (talk) 18:46, 3 October 2008 (UTC)
    • Two modes: snapshot and dynamic. Snapshot makes a copy of the files on your local machine. Dynamic mounts a proprietary file-system that points at the repository and creates "virtual" files on the file-system. Accessing a file on a dynamic view (either editing or compiling) will cause the file to be copied across the network to the local machine.
A "file" in a dynamic view is only copied across the network when it is checked-out. Also, the copy may be to a view server, rather than the "local" machine. afc (talk) 18:46, 3 October 2008 (UTC)
    • Views: Clearcase uses views to provide an indirection between the repository and the files available to the user. A user creates a view-specification which details which files, directories, and versions are available to the user.
  • User Interfaces
    • not sure about web interface
As of version 6.15 (current is 7.0.1), ClearCase has an Eclipse-based, HTTP talking client too. afc (talk) 18:46, 3 October 2008 (UTC)
    • GUIs are available for Windows and Unix for sure; not so sure about the other supported operating systems.
There is a TSO (MVS, z/OS) based client available, though server processes must run on UN*X or Windows. afc (talk) 18:46, 3 October 2008 (UTC)
    • Stand-alone GUI for Windows and Unix as well as shell extensions to the standard Windows explorer.
  • Network protocol
    • proprietary, transactions are accomplished using many small packets which makes the system unresponsive if run over a WAN.
Transactions are RPC-based, rather than "proprietary", and this is what makes it unsuitable over a WAN. afc (talk) 18:46, 3 October 2008 (UTC)
  • Multi-site
    • Each site has a copy of the repository. By default, elements are owned by one site only. Sharing is accomplished by branching each element for each site. Periodic repository synchronization is required to give all users the same view. Changes made at one site will not be visible to the other site until a sync takes place.
    • Ownership (mastership) of elements can be passed between sites.
I hate to be picky, but in CC, mastership and ownership are two different concepts. Mastership, as you point out, relates to the site owning the rights to modify an object. Ownership relates to the user who created that object. afc (talk) 18:46, 3 October 2008 (UTC)

ClearCase also does a decent job tracking merges (something which CVS doesn't support, and svn doesn't support yet). Perhaps add a column for merge tracking to the "features" table? It might also be nice to have information on branching and tagging (but I think all VCS listed have these capabilities?).

Slartoff 02:10, 26 July 2006 (UTC)

Additions made by Cave Mike 04:21 30 Aug 2006 (UTC)

Edits made by W. Craig Trader 19:50 23 Feb 2007 (UTC)


Would be great if someone familiar with CM+ added the necessary comparison info. Thanks!

Atomic Commits in ClearCase ?[edit]

ClearCase definitely cannot do this (allow changes to a group of files to be applied in one transaction). There is an extension called "ClearCase UCM" (with additional license cost) which supposedly provides this functionality, though the details are not clear.

Mister Farkas 23:39, 20 February 2007 (UTC)

This is an example of what makes this kind of exercise (comparing apples and oranges, as instances of an ill-defined category) ludicrous. Why is atomic commit needed? Because commit and deliver are coupled. What one wants is atomic delivery; nobody should care for atomic commit, on the contrary. The two are coupled in one case, which I call the /main/LATEST syndrome. How do you get an atomic delivery in base ClearCase? Rename a label type after you have applied it (note: this is atomic on the site where it was applied, not through replication. If you want an atomic delivery on another site, apply another label).

UCM doesn't need an additional cost. Everybody pays for it already. Marc Girod.

Atomic Commits in ClearCase[edit]

this is called "deliver" in UCM and will commit all changes in the UCM activity

in base ClearCase you could simply write a script:
  1. use cleartool protect to prevent other users from making any modifications
  2. perform all check-ins, merges, etc. while labelling all new versions
  3. if you want to roll back, use cleartool rmversion on all labelled new versions
  4. when you are finished, use cleartool protect to allow other users access again

you could run such a script from clearcase context menu (run clearmenuadmin.exe) or queue check-ins and merges for a single transaction later

Michael Moors 00:00, 18 May 2007 (GMT)

Telelogic SYNERGY[edit]

Was Continuus/CM. It should be added to this article. It's a player in this market, even though many people dislike it. This product introduced task development, if I understand correctly. Unfortunately, I don't know enough about SYNERGY to edit the article. Cernansky 00:15, 11 November 2006 (UTC)

Atomic Commits in SYNERGY?[edit]

SYNERGY does not provide atomic commits. If you interrupt your client while completing a task you'll have a partially completed task. Same when you create baselines, you will have partially created baselines... All single operations affecting multiple files/objects are not atomic in SYNERGY.


Yes, it is not the industry leader. But to ignore it is just silly. Thoglette 07:13, 6 November 2006 (UTC)

But it has support for symbolic links (one file can be shared in many places, although it's impossible to share a file over different repos) Oleg Urzhumtsev 15:13, 19 September 2011 (UTC)

Symbolic links aren't the same as a shared file (particularly in the sense that shared files apply only to sharing between projects). TEDickey (talk) 21:20, 29 September 2011 (UTC)

Open source[edit]

Is this open-source only listing? If so, its title should be changed. If not, I'm going to add our product, Code Co-op to the list. -- Bartosz 19:49, 31 May 2006 (UTC)

It's not intended to be open-source only, although I'd prefer to only have notable projects so that the article doesn't get cluttered. You are welcome to add your product, but I might add more columns later, so help keeping the entry up to date would be appreciated.
If your product has any outstanding features that you think distinguish it from other systems, feel free to add columns for those. Of course, all the information should be presented in a neutral and verifiable way. Wmahan. 20:30, 31 May 2006 (UTC)

Explanation of terms[edit]

What is namespace revision IDs? -- Bartosz 19:54, 31 May 2006 (UTC)

I borrowed the terminology from [1]. Basically I was trying to indicate how a user refers to particular revision. By namespace I mean that the system uses a filename and maybe a simple version number. Other systems use hex values representing hashes of a file's contents. I realize that the concept is a little unclear as currently presented, and I'm open to suggestions. Wmahan. 20:28, 31 May 2006 (UTC)
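To illustrate the distinction: a "namespace" system identifies a revision by filename plus a version number (e.g. foo.c revision 1.4), while hash-based systems derive the ID from the content itself. Git's blob ID, for instance, is the SHA-1 of a short header followed by the raw file contents, so identical contents always get identical IDs regardless of filename:

```python
import hashlib

def git_blob_id(content: bytes) -> str:
    # Git hashes "blob <size>\0" followed by the raw file contents.
    header = b"blob %d\0" % len(content)
    return hashlib.sha1(header + content).hexdigest()

# Matches the output of `git hash-object` for the same contents.
blob = git_blob_id(b"hello\n")
```

The upshot for the table: namespace IDs are meaningful only relative to one repository's naming, while content hashes are globally stable identifiers.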

Snapshots vs. Changesets[edit]

The table has subversion listed as using snapshots for the history model. That's a common misconception; while svn presents the history to the user as a list of snapshots, the change is actually stored internally as a changeset. If a repository uses the fsfs backend, you can see the individual changesets in db/revs/nnn.

OK. I'm not sure whether that column represents a useful distinction. I was trying to show that systems can use very different ways of storing revisions internally. But to be honest, I'm not sure I understand each system well enough to come up with an accurate comparison in this area. Maybe I should just remove that column. Wmahan. 05:27, 19 June 2006 (UTC)

CVS and Snapshots / Changesets[edit]

This comes up as well with CVS, which implements its history as a cascade of reverse patches from the most recent version of each file. That's changesets, arguably, though it's on a per-file basis rather than on a per-commit basis. I would imagine that the facility of the software for dealing with and distributing changesets is more important than the storage architecture inside the repository implementation, though.

I went ahead and changed the CVS column to 'Changeset', but I agree that the column is perhaps not useful or helpful. Jonabbey (talk) 17:35, 19 September 2008 (UTC)
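The reverse-patch arrangement described above (used by RCS, and hence by CVS for each file's trunk) can be sketched as: only the newest revision is stored whole, and each older revision is recovered by applying reverse deltas one step at a time. This toy model uses whole-line replacement as its "delta" format, which is far cruder than real RCS deltas, but shows the mechanism:

```python
# Toy reverse-delta store: newest revision kept in full, older
# revisions reached by applying reverse deltas one step at a time.
# A "delta" here is a dict mapping line numbers to their old text
# (None meaning the line did not exist in the older revision).

def apply_reverse_delta(lines, delta):
    result = list(lines)
    for lineno, old_text in sorted(delta.items(), reverse=True):
        if old_text is None:
            del result[lineno]          # line was added later; remove it
        else:
            result[lineno] = old_text   # line was changed later; restore it
    return result

head = ["int main() {", "  return 0;", "}"]
# Reverse delta from head back to the previous revision:
# line 1 used to read differently.
deltas = [{1: "  return 1;"}]

def checkout(steps_back):
    lines = head
    for delta in deltas[:steps_back]:
        lines = apply_reverse_delta(lines, delta)
    return lines
```

Checking out the head is free, while each step into the past costs one delta application, which is exactly why RCS-style storage favours recent revisions.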

Patch/changeset/snapshot: (1) lack of citing, (2) original research.[edit]

In some version-control systems (VCSes), the minimum unit of delta is (in the degenerate case) a copy of all of the files, whereas in others it is the whole file, in others a whole line, and in others portions of a line (and so forth). "Patch" is used ambiguously here for the first 3 variants. Because of this, VCSes that store a new file for each single-character change are described using exactly the same notation as VCSes that aggressively conserve disk space for a single-character change: O(patch). As this article has changed over time, nearly all VCSes are now described as having a repository growth rate of O(patch), which must in part be a violation of WP:NOR, as indicated by the lack of WP:Cite on the vast majority of the rows for these 2 columns. This is at best misleading (under certain definitions of "patch") or downright factually incorrect (under other definitions of "patch"). This is due to each editor/reader of this article having a different conception of what constitutes a "patch": a different whole file, or a different whole line, or a within-line word/token/whitespace difference. This state of affairs in the article is unacceptable; it must be changed somehow. At the very least we must have citations, but the definitions of "patch" and "changeset" are so nebulous/variable/subject-to-excessive-interpretation that I highly doubt the current patch/changeset entries can be cited without excessive interpretation of other wordings jammed into this terminology. The question is how to change this. I propose replacing the current "Repository Size" and "History Model" columns with a single column that clearly states the minimum unit of delta caused in the storage for a single-character change in the latest stable release of that VCS. The choices to appear in this replacement "Minimum Storable Delta" column could be:

  • (group of) characters (such as within a line or spanning multiple lines, where the EOL markers/separators/terminators [e.g., LF=Unix, CRLF=Multics/VMS/DOS/OS2/Windows, CR=MacOS] are merely just characters of equal standing with any other character),
  • (whole single) line,
  • (whole single) file,
  • (group of) files (such as whole subdirectory), and
  • (whole) repository.
Nearly every VCS documents this design principle quite clearly via direct statements (usually as a matter of pride of correctness, as an implied criticism that other VCSes got their designs "wrong"). Very few VCSes clearly document the exact and precise patch/changeset choices of the current "Repository Size" and "History Model" columns. This inhibits our ability to cite the existing columns (and thus they degenerate into original research). From this raw information, the reader can simply deduce the current content of the "Repository Size" and "History Model" columns. What do you think of this? —optikos (talk) 18:41, 20 February 2010 (UTC)
I do not think the "Minimum Storable Delta" is meaningful. Some VCSs (e.g. git, hg) use sliding window compression on the entire repository. Similarities between different files (at different points in history or not) and the general redundancy of text are much more important to these algorithms than the minimum delta.
Furthermore, users are typically more interested in the checkout size rather than the size of the repository, which may be stored on a server. While distributed VCSs are often superior to centralized systems in terms of repository size, not one of them can provide partial clones today. Other important aspects of repository size are data sharing between local clones, delta compression of compressed files (e.g. pdf, docx), compression performance, and maximum file size.
It is therefore not possible to deduce anything meaningful about "Repository Size" from a single column. I do think we can make something meaningful out of the "History Model", however. Let me try and define the following models.
  • per file: History is recorded separately for each file. A specific state of the repository as a whole is determined by timestamps and/or tags/labels/branches assigned to collections of files.
  • changeset: History is recorded as a sequence of changesets. Each changeset represents the differences from one state of the repository to the next. A changeset can comprise changes to one or more files.
  • snapshot: History is recorded as a sequence of repository states, i.e. snapshots. Each snapshot contains all the files that exist in the corresponding state, even if they did not change with respect to the previous snapshot.
For the most part, the changeset and snapshot models are equivalent. Changeset based systems may record meta information such as file moves/copies, however, which has no meaning in the snapshot context. The definitions are still blurry, but I think we can attribute one of the above to each system without much trouble.
Note: I am avoiding the term patch since it is largely synonymous with changeset, but can also refer so something more specific. --Drizzd (talk) 11:15, 18 April 2010 (UTC)
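The near-equivalence of the changeset and snapshot models noted above can be demonstrated mechanically: either representation can be derived from the other. A minimal sketch, modelling a repository state as a dict of filename → contents (the filenames are invented for illustration):

```python
# Convert between a snapshot history and a changeset history.
# A state is a dict {filename: contents}; a changeset is a dict
# {filename: new_contents_or_None}, None meaning the file was deleted.

def to_changesets(snapshots):
    changesets, prev = [], {}
    for snap in snapshots:
        cs = {f: c for f, c in snap.items() if prev.get(f) != c}
        cs.update({f: None for f in prev if f not in snap})
        changesets.append(cs)
        prev = snap
    return changesets

def to_snapshots(changesets):
    snapshots, state = [], {}
    for cs in changesets:
        for f, c in cs.items():
            if c is None:
                state.pop(f, None)
            else:
                state[f] = c
        snapshots.append(dict(state))
    return snapshots

history = [
    {"a.txt": "one"},
    {"a.txt": "one", "b.txt": "two"},
    {"b.txt": "two changed"},
]
```

The round trip is lossless for file contents; what it cannot carry, as noted above, is metadata such as "b.txt was copied from a.txt", which only the changeset model records natively.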

email notification[edit]

There is no column specifying which products send email on various actions. I believe this is a good evaluation decider, and should be included.

To start off, I know that :
SourceSafe : No —Preceding unsigned comment added by IanVaughan (talkcontribs)

IMHO, whether the VCS directly supports email notification is not very important - if it supports triggers, emails can be sent using those. Bullestock 20:44, 28 October 2007 (UTC)


I propose to add a column indicating whether the VCS supports access control lists. I know that CVSNT does, and CVS doesn't - no doubt most of the commercial big players do as well. Bullestock 20:44, 28 October 2007 (UTC)

CVS "admin" (and RCS of course) have at least per-file lists of users who are allowed to modify a file. I've been using the feature for quite a while Tedickey 20:51, 28 October 2007 (UTC)

History model and Repository Size columns[edit]

I find the words used to describe the "history models" to be rather un-informative. The present revision of the article, for example, makes it appear as though VSS and StarTeam use the same model, which is simply not true. StarTeam stores full copies of each revision, whereas VSS uses reverse deltas. The term "changeset" implies forward deltas, but this seems impractical and is far from clear. Similarly, the terms "patch" and "changes" seem to be used more or less interchangeably in this section. How many different methods are there for storing revisions? --Craig Stuntz 15:32, 13 December 2006 (UTC)

I agree with your sentiments that apples & oranges are inappropriately conflated in the History Model column, as well as a spill-over of conflated confusion in the Repository Size column. Both messes are made possible by a general sloppiness of defining terms. Conversely, I disagree strongly that O-sub-space(patch) says anything whatsoever about forward versus reverse delta. How is the size of the delta in the forward direction any different than the size of the delta in the reverse direction? You seem to be inappropriately bringing O-sub-time(forward delta) and O-sub-time(reverse delta) into a column that is only referring to size/space of repository, not the time to retrieve from it nor store into it. I would very much like to see a clean-up of the History Model and Repository Size columns to have precisely-defined terms that reveal your O-sub-time insights in a new column for the growth-rate of merge operations in an evermore mature repository. As an example of sloppiness that devolves into meaninglessness, CVS completely lacks any concept of transactional change-set in the Perforce/atomic-commits sense of a related *set* of deltas across possibly-multiple files performed en masse as a single transaction, yet CVS's history model is called "changeset". In CVS, the "mental model" (as well as the RCS storage) is entirely per-file, without any sort of threaded tree of n-ary repository-wide relationships (other than tags, e.g., changeset transaction, repository-wide branch identifiers) throughout a tree of files, which has been expected of nearly every VCS developed after DSSE/ClearCase and Perforce. —optikos (talk) 18:54, 16 April 2010 (UTC)
I agree. It would be far clearer to separate the history interface from the history implementation. While the revision control system might present snapshots or changesets, these are implemented as snapshots, forward delta, reverse delta (such as VSS), bidirectional delta (such as darcs), or hierarchical forward delta (such as Subversion). If we don't care about the details though, perhaps the column should just be renamed so that it clearly only refers to the interface. -- JRBruce 2007-03-30 20:18
I agree with you regarding the "history model" column, because I think that the "history model" has always been intended to be the personality presented to the user, where the emphasis is on "mental model" as abstraction. I go further with your idea and claim that we already have a column in this article representing one half of the salient part of the implementation: growth-rate of size of repository, O-sub-size. In my reply above to Craig Stuntz, I claim that there should be a new column for the growth-rate of time of merge operations on an ever-more-mature repository. For example, for Subversion, its rather frequent snapshotting of whole files hurts it in the O-sub-space(patch + snapshot) department, but Subversion's snapshots obviously are the mitigating factor that makes the otherwise-slow O-sub-time(forward delta) practical, due to shortening the sequences of forward deltas. —optikos (talk) 18:54, 16 April 2010 (UTC)
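The trade-off discussed here — long forward-delta chains make old-revision retrieval slow, while periodic full snapshots cap the chain length — can be sketched with a toy store that records how far each checkout must walk. The snapshot interval below is an arbitrary illustrative choice, not any real system's value:

```python
# Toy forward-delta store with periodic full snapshots: every Nth
# revision is stored whole, the rest as deltas (the delta payload is
# kept as full text here purely to keep the sketch minimal).

SNAPSHOT_EVERY = 4  # arbitrary interval chosen for illustration

class Store:
    def __init__(self):
        self.revisions = []  # list of ("snapshot" | "delta", text)

    def commit(self, text):
        kind = "snapshot" if len(self.revisions) % SNAPSHOT_EVERY == 0 else "delta"
        self.revisions.append((kind, text))

    def checkout(self, rev):
        # Start from the nearest preceding snapshot and replay forward,
        # reporting how many deltas had to be applied to get there.
        base = rev - (rev % SNAPSHOT_EVERY)
        deltas_applied = rev - base
        return self.revisions[rev][1], deltas_applied

store = Store()
for i in range(10):
    store.commit("revision %d" % i)
```

Without the interposed snapshots, checking out revision r would cost r delta applications; with them the cost is bounded by the interval, at the price of the extra space those snapshots occupy.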

Also, both SVK and Subversion use exactly the same system (SVK works on top of SVN), so they shouldn't have different history models. Each revision represents a snapshot, but as JRBruce says, they store deltas.--Cynebeald 06:27, 10 October 2007 (UTC)

  • I am tired of this article being the proverbial Blind men and an elephant: I was bold and intentionally kicked up the dust by declaring that Repository Size's O(patch) is synonymous with delta compression. This article needs to clearly state, without weasel words, which VCSes store their history via delta compression and which store a fresh copy of a file (or, indeed, of an entire tree of files) when one byte of one file changes (per checkin or per tag-like release or branch-like release). Here "delta" is exemplified by the output of diff or any line-oriented or character-oriented or byte-oriented merge tool, and a "whole file" that changed is neither a delta nor a patch for the purposes of the Repository Size column. Further precision needs to be added to the definitions of "release" and "changeset", although I hope that (with the exception of some CVS person who thinks that CVS has changesets) these will be less controversial. —optikos (talk) 18:54, 16 April 2010 (UTC)


Why is BitKeeper not represented in this comparison? Billdav 02:07, 19 January 2007 (UTC)

Because nobody has added it yet, of course. You are welcome to do so if you are familiar with it. -- intgr 11:22, 22 January 2007 (UTC)

Perforce does not operate on the merge model[edit]

You always lock the file in Perforce; it's an advisory lock, but it's a lock, and unsophisticated users don't expect anyone else to be able to check out the file. Perforce also does no automatic merging, unlike CVS and Subversion and others.

Merging is a fallback in Perforce. —The preceding unsigned comment was added by (talk) 17:19, 3 May 2007 (UTC).

It operates under merge or lock[edit]

The footnote in the article, "In Perforce, file attributes can be set to allow for the lock model. However, this is discouraged by Perforce and should only be used on files (typically binaries) that cannot be easily merged on check in." is

a) incomplete: files can be opened locked, in addition to specifying that all files of a particular type default to opened locked, and b) incorrect as far as I can tell: reading nearly all of the documentation on the Perforce website I find no recommendations against locking

I'm removing the footnote and changing the model to Merge or Lock.

Regarding a, see [2] or [3]. Regarding b, if someone can come up with a reference saying locking is discouraged, then cool, but Locking is still supported regardless.

Cheers. 06:48, 19 June 2007 (UTC)

CodeVille mentioned once[edit]

CodeVille is only mentioned once, and is not included in most of the comparisons. -Jason Espinosa

Repos size = O(entropy(x))?[edit]

A lot of revision control systems compress their data. For example, git saves commits as a pointer to a version of each file, but the versions are delta-encoded so its size is actually O(change entropy). I don't know about all systems, but shouldn't repos size for the compressed systems read O(patch entropy) for example? O(patch entropy) is quite different from O(patch), because saving a terabyte of zeroes won't take up a lot of space but saving a gigabyte of random data will. Boemanneke 17:06, 30 June 2007 (UTC)

You're right, but that's a level of detail that is unnecessary for a high-level overview like this. O(patch) means "proportional to the size of the change", with a poorly specified "change" metric. Talking about entropy doesn't change that quibble; we just talk about the model that the entropy is relative to. The column basically translates to "does delta compression?", but it's made less ambiguous by using an objectively measurable metric. So leave it as is. (talk) 11:09, 8 June 2008 (UTC)
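The entropy point above is easy to demonstrate: general-purpose compression reduces a patch's stored size toward its information content, so a megabyte of zeroes compresses to almost nothing while a megabyte of random data does not. A quick Python check with zlib:

```python
import os
import zlib

# A highly redundant "patch" vs. an incompressible one of equal size.
zeros = b"\x00" * 1_000_000
noise = os.urandom(1_000_000)

zeros_stored = len(zlib.compress(zeros))
noise_stored = len(zlib.compress(noise))

# Stored size tracks entropy, not raw patch length: the zero patch
# shrinks by orders of magnitude, the random one barely changes.
```

This is the gap between O(patch) and O(patch entropy) in the discussion above; whether the table should model it is a separate question.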

Relationship to List of revision control software[edit]

There's no good reason to merge, since the articles deal with different aspects - comparison to highlight differences, and a list to show the range of possibilities. If the topics were small, then the merged article might save space. But they're not. Tedickey 12:39, 5 September 2007 (UTC)


  • Added the price for AccuRev ($1,495), which was previously set as unknown.
  • Moved the new "Razor" system to its appropriate place in the alphabet. Also changed its internal link to razor (Revision Control System), since its current link was pointing to the regular Razor page.
  • I noticed that a few more systems were out of order. Resorted them, and they are in their proper places now. Going to have a look if I can replace a few "?" from the tables with actual information.

(I'm re-signing the date each time I add something, so most changes are older than the date displayed!)

Excirial 14:58, 5 October 2007 (UTC)


Information about SVK is outdated. See -- 04:02, 1 November 2007 (UTC)--

Other names for 'revision control software'[edit]

Should these:

* (List of) Source Configuration Management
* (List of) Version Control Systems

be redirected here? Arauzo (talk) 17:08, 19 November 2007 (UTC)

Divide this article in two?[edit]

What do you contributors think about dividing this article in two, one for centralized and other for decentralized VCSs?

The rationale is that one of the very first decisions you make when choosing a VCS is whether it should be decentralized or client-server. This is directly tied to your development model. After you have decided which model of VCS you want, then you compare the features provided by the systems that follow that model.

The benefits are that the tables would be smaller and easier to read and compare, and each article could focus on features specific to a single model (like inter-repository merging (push/pull) facilities and support, which are specific to distributed VCSs).

--Juliano (T) 20:35, 6 May 2008 (UTC)

add "language" field ?[edit]

Hello, what do you think of adding the language the application is written in (C, Python, etc.) as a field? Eric.dane (talk) 17:03, 18 May 2008 (UTC)

I see that this field has already been added, but I don't understand why anyone would care. The focus of this page appears to be for selecting which one to use. As a developer you care about details like that, but as a SCM admin (my day job), that is an unimportant detail. Performance, atomic commits, centralized vs. decentralized, ACLs, and so forth are all wonderful characteristics, but its source language isn't one of 'em. Likewise I don't care if the source comments are in French (or whatever else) because I'll never be looking at them.

I think it would make more sense to have a separate page, or at least a separate table, with characteristics that interest potential contributors. Obviously the proprietary tools could be left off of that one. —Preceding unsigned comment added by (talk) 21:25, 29 May 2008 (UTC)

Actually, I found the language information useful; if there isn't a suitable pre-built binary available, then installing the software implies building it, and if that means first installing a build system for the language, then an alternative tool might be preferable. For this aspect to be fully covered, it would however be necessary to rather make a table of external dependencies in general: scripting languages, compression libraries, networking libraries, etc. (talk) 21:26, 2 June 2008 (UTC)

Migration possibilities?[edit]

One factor which is relevant when choosing a revision control system is whether existing projects can be moved into it, and conversely moved out from it, without losing the revision history. (I'd normally use the terms import and export for these, but the Revision control page defines "export" as an operation which "creates a clean directory tree without the version control metadata".) Migration tools generally seem to be separate utilities rather than integrated parts of the various systems, but it is nonetheless interesting to know that they exist and which moves are supported by existing software.

When considering whether to start using revision control system X, it is obviously important whether one's old projects currently controlled by system Y can be moved to the new system — even if this isn't something one does immediately, it is probably something one considers doing eventually, since why move unless the new system X has some attractive features not found in system Y?

The need to move out is the need to have the option to undo a decision, in case it turns out after several months of experience to have been a bad one. It might also arise because one wishes to switch from a distributed to a client-server system (or vice versa). It is of course always possible to take the tip of trunk in system X and check that back into the old system Y as just a new version, but that loses all the history that was recorded using system Y, and the record of file renames is likely to be lost. (talk) 21:59, 2 June 2008 (UTC)
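(Editorial aside: one concrete mechanism for history-preserving migration is Git's fast-export stream format, which several conversion tools can read and write. The sketch below is Git-to-Git only, to show the mechanism; real cross-tool migrations use converters such as cvs2svn or git-svn.)

```shell
# Export the complete history of one repository and replay it into a
# fresh one; commit messages, dates and ancestry survive the round trip.
cd "$(mktemp -d)"
git init -q src
cd src
git -c user.name=a -c user.email=a@b commit -q --allow-empty -m "first"
git -c user.name=a -c user.email=a@b commit -q --allow-empty -m "second"
git fast-export --all > ../history.stream
cd ..
git init -q dst
cd dst
git fast-import --quiet < ../history.stream
git rev-list --all --count    # both commits made it across
```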

There's probably not a lot of material here - there are import features in various tools, mainly provided as a competitive measure. A few of those are done well; users of the target systems tend to be uninformed about the topic of import limitations since the developers of the tools tend to blame the original system. Exports (other than providing a snapshot as noted above) fare worse. Tedickey (talk) 22:07, 2 June 2008 (UTC)

Signed revisions in Perforce[edit]

The article claims Perforce supports digital signing of revisions. I can find no evidence of this either in Perforce itself or on the Internet. Does anyone know if this is accurate? MoraSique (talk) 20:13, 3 June 2008 (UTC)

Storage Method / Format[edit]

Hi, what do you think about adding a column for the storage method in the Technical Information section? For instance, TFS uses SQL Server, Subversion uses Berkeley DB / Custom (FSFS). I think that this is useful because it can have a significant impact on cost, performance, maintainability, scalability, and stability of the VCS. Rriehle (talk) 15:34, 3 February 2009 (UTC)

Table legend[edit]

Please keep the legends of the tables in one consistent form. Using the ref tag creates references you have to click on to get an explanation. A reader has to click on each reference to learn what exactly the headings of the table mean. Having the legend right below the table is consistent with tables elsewhere and much more useable for the reader. -- (talk) 08:28, 28 April 2009 (UTC)

Encoding Support[edit]

I would like to add "File Encoding" as a feature to one of the tables - either as a feature or advanced feature. One person suggested this is part of International Support and I strongly disagree. Microsoft has begun storing some files it creates in UTF-16. This is an Encoding, not a Language, and thus does not fall under international support....

limit to a manageable number of entries[edit]

I would like to remove all entries which do not have a Wikipedia article themselves. I know that is not entirely fair, but we need to limit ourselves to more notable software. Right now, there is no way we can maintain or even verify most claims, because nobody has ever even heard of these systems. --Drizzd (talk) 12:20, 13 September 2009 (UTC)

agree (plus remove all external links - most of this topic is merely advertising) Tedickey (talk) 14:17, 13 September 2009 (UTC)

I started removing the corresponding entries in the first section. I'll wait for protests a little before I proceed with the rest of the article. --Drizzd (talk) 17:03, 21 September 2009 (UTC)

File timestamp preservation[edit]

The initial edits for this one immediately introduced dubious content:

  • some of the tools (CVS for example) cited as "yes", do not do this by default
  • timestamps are only part of the file-attributes (CVS is a well-known example of one which fails to maintain permissions attributes)

Tedickey (talk) 12:25, 3 October 2009 (UTC)

By the way, the table lacks a definition for this term Tedickey (talk) 12:26, 3 October 2009 (UTC)

Rereading the CVS documentation, it seems that the cell for this item (barring either some overlooked detail, or blatant bias) should be "No", since it appears to lack the ability to preserve timestamps on checkouts. A pointer to relevant information on that detail would help clear up the question of whether CVS should be marked "yes" or "no" in this cell. Tedickey (talk) 12:41, 3 October 2009 (UTC)

Please update the column if you think I inserted a wrong status for some RCS. As Tedickey correctly guesses, "file timestamp preservation" is a feature whereby the target RCS preserves the timestamps of files (both regular files and directories) upon checkout. Many people regard this particular information as important and useful, if not vital, which is why I added the column. Orz (talk) 19:01, 3 October 2009 (UTC)

Preserving timestamps for directories is not often found in any CM tool. Limiting it to file-timestamps (and documenting as such in the table notes) might work. But file-permissions are just as important to developers. Tedickey (talk) 19:27, 3 October 2009 (UTC)

I think we agree that all version control systems record a time and date with every change. In particular, they can tell when a change to a file was last committed to the repository. So the information is available in any case. "File timestamp preservation" is about overwriting the last modified filesystem attribute with the commit time upon checkout. This should be explained in the article.

Whether or not the feature is useful is irrelevant to Wikipedia. We just document its existence. And I think it fits in very well next to the permissions column in the Advanced Features section, which is where I will move it now. --Drizzd (talk) 08:32, 4 October 2009 (UTC)

> Whether or not the feature is useful is irrelevant...

That's surely correct, yet I find the term "timestamp preservation" very misleading. I would expect it to denote the ability of the software to actually *preserve* the original time and date of last modification of a file, as happens when simply copying a file to other storage. This is unfortunately not the case e.g. with Subversion, as I painfully found out. A proper name for this "feature", and a clear hint for potential users (who surely like this overview in the wiki), would be helpful.

I agree it's misleading. I assumed timestamp preservation meant storing each file's "last modified" attribute in the repository, so that whenever a revision is checked out the timestamps can (optionally) be restored to what they were before that revision was committed.
Suppose I have a collection of files last edited a decade ago, and today I begin managing them with a VCS by committing them into a new repository. Suppose tomorrow someone else checks them out. Will the timestamps of her copies be a decade ago, today, or tomorrow?
Personally, I'd like the option of having them be a decade ago so we can determine when files were last modified. People in the VCS industry call history sacred; isn't time of last modification part of history? Modifications may relate to contemporaneous items not in the repository, for example a magazine article that inspired you to edit a file a decade ago and now you'd like to be able to find that article again and having the date would be helpful. Or perhaps you want to be able to determine whether you wrote X before someone else published X'.
It won't suffice just to store the last modified attributes in the repository and view them with log output. That wouldn't allow operations such as moving the files, with their "last modified" timestamps, to a different VCS.
So, the Advanced Features table ought to have a column indicating which VCSs offer this option. (If any. To the best of my knowledge, neither Git nor Bazaar supports it.) Even if no VCS supports it (so the column is all "No") it would help people be unsurprised when their files' timestamps are lost, or take steps to avoid losing them. Perhaps the presence of the column would encourage VCS developers to add the feature. SEppley (talk) 01:14, 12 December 2010 (UTC)

Merge or Lock vs. Lock or Merge[edit]

Is there a reason that these two distinct options exist for the concurrency model column in the first table? When sorting by column, "Merge or Lock" is at one end of the table, and "Lock or Merge" is at the other. Should only one of these terms be used for the purposes of proper sorting? 12 Centuries (talk)

Yes, and while we are at it, let's convert to lowercase "merge or lock" as well. --Drizzd (talk) 13:05, 30 October 2009 (UTC)

Rollback unsafe on non-private repositories?[edit]

The comment "unsafe on non-private repositories" in the explanation for the rollback column in the advanced commands table doesn't apply to all the revision control systems. It certainly doesn't apply to Perforce. When the comment was added, on the 16th of February 2009, only Git, Darcs and Mercurial were indicated as supporting rollback. I don't know if the comment even applies to all three of those. I would suggest adding notes for those systems where it's not considered safe and removing the comment from the explanation. --Niklas (talk) 01:06, 27 January 2010 (UTC)

Fetch vs. Pull[edit]

What is the meaning of Fetch for Git in the Basic Commands table? It is not listed in the terms below the table. Should this be changed to Pull? --BobIsch (talk) 21:54, 6 February 2010 (UTC)

No. The table specifies the commands to use. git fetch downloads commits from a remote repository. git pull does something different: it downloads them and also merges them into the current branch. I added a footnote regarding this. -- (talk) 23:09, 15 February 2010 (UTC)
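(Editorial aside: the distinction can be demonstrated with two local repositories standing in for a network remote.)

```shell
# `git fetch` only downloads new commits; the merge step (which
# `git pull` adds on top) is what updates the working tree.
cd "$(mktemp -d)"
git init -q upstream && cd upstream
echo one > file && git add file
git -c user.name=a -c user.email=a@b commit -q -m "c1"
cd .. && git clone -q upstream work
cd upstream
echo two >> file
git -c user.name=a -c user.email=a@b commit -q -am "c2"
cd ../work
branch=$(git rev-parse --abbrev-ref HEAD)
git fetch -q origin                   # c2 now exists in origin/$branch ...
grep two file || echo "not in working tree yet"
git merge -q "origin/$branch"         # fetch + merge is what pull does
grep two file                         # the change is present after merging
```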

Partial checkout/clone[edit]

I'm changing the table where it says that Git can do a partial checkout/clone. Come on, that is a flat out lie. Git can do submodules and you can clone only one, a few, or none of the submodules, but that is not the same as saying that Git can partially clone a repository. Git has many benefits, but I don't like it when people try to lie to make things look better. I've been trying for quite a while to figure out the benefits and drawbacks of Git to see if it suits our developments, and wikipedia is nowhere near reliable on this matter. CrisLander (talk) 10:23, 27 August 2010 (UTC)

Well, it is possible to do partial checkouts (enable core.sparseCheckout), just not partial clones. So in that sense the column is ambiguous. It is also possible to download a subdirectory from a repository (using git-archive). But neither of those features works in the sense that partial checkouts from conventional centralized systems do, which is probably what this column is about. So I agree with the No in that column, although the column description should be clarified. -- (talk) 20:45, 6 September 2010 (UTC)
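(Editorial aside: both mechanisms named above can be sketched in a few commands, assuming a reasonably recent Git.)

```shell
# 0. A repository with two subdirectories.
cd "$(mktemp -d)" && git init -q repo && cd repo
mkdir docs src
echo readme > docs/README
echo 'int main(void){return 0;}' > src/main.c
git add . && git -c user.name=a -c user.email=a@b commit -q -m "c1"

# 1. Partial checkout: restrict the working tree to docs/ only.
git config core.sparseCheckout true
echo 'docs/' > .git/info/sparse-checkout
git read-tree -mu HEAD        # src/ disappears from the working tree

# 2. Snapshot (no history) of a single subdirectory via git-archive.
git archive HEAD docs | tar -tf -
```

Newer Git versions also offer `git sparse-checkout set docs` as a friendlier front end for the same machinery; note that the full history is still cloned either way, which is why this differs from a centralized partial checkout.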

Supports large repos?[edit]

Is this column original research? Seeing as footnote 27 (at the time of writing this) in the Git article states:

^ Fendy, Robert (2009-01-21). DVCS Round-Up: One System to Rule Them All?—Part 2. Linux Foundation. Retrieved 2009-06-25. "One aspect that really sets Git apart is its speed. ...dependence on repository size is very, very weak. For all facts and purposes, Git shows nearly a flat-line behavior when it comes to the dependence of its performance on the number of files and/or revisions in the repository, a feat no other VCS in this review can duplicate (although Mercurial does come quite close)."

I'd say that this directly contradicts the fact that both Git and Mercurial have been listed as not supporting large repos.--Taotriad (talk) 22:11, 19 October 2011 (UTC)

I believe that the current column in the table is not useful at all, as it does not say what parameters identify a "large repo" or how "supporting large repos" could be measured.
Given the fact that git needs to open many files in order to understand which files are part of the project, it seems most unlikely that git's performance does not degrade on a big project.
In any case, I cannot believe that CVS is able to support larger projects, as CVS is very slow (a consequence of being based on the RCS file format). --Schily (talk) 09:59, 20 October 2011 (UTC)
With repack you save git data in pack files with indexes; it doesn't have to "open many files" in that case, and it also memory-maps the actual pack file.
I'd say that git supports *LARGE* repositories; however, the repack time will be tedious... — Preceding unsigned comment added by (talk) 12:32, 5 December 2011 (UTC)
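(Editorial aside: the loose-objects vs. pack distinction raised above is easy to observe directly.)

```shell
# After `git repack -a -d`, history lives in one indexed pack file
# instead of one file per object.
cd "$(mktemp -d)" && git init -q
for i in 1 2 3; do
  echo "$i" > f
  git add f
  git -c user.name=a -c user.email=a@b commit -q -m "c$i"
done
find .git/objects -type f | grep -cv /pack/   # several loose object files
git repack -a -d -q -n                        # consolidate, drop loose objects
ls .git/objects/pack/                         # one .pack plus its .idx
```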

Efficiency on Large Repositories[edit]

To judge the matter objectively, we must talk about efficiency. So it makes sense to create another table for this comparison.

For this purpose, first we must define what a large repository is.

The two main bottlenecks are I/O requests and memory usage. So we would need a graph describing memory usage peak and I/O for all basic operations. Or alternatively, a mathematical model of the implementations.

Also it makes sense to distinguish between few-big-files and many-small-files. --Ismael Luceno (talk) 06:12, 24 October 2011 (UTC)

From the research I've done while preparing to enhance SCCS, I identified the following parameters to be important:
- the total size of the repository
- the number of files in a repository
- the average file size in a repository
- the file system performance with respect to file I/O and directory operations (OS and FS specific)
- the number of expected putbacks per day and the expected lifetime of the source code control for the repository
The following implementation parameters are relevant for creating bottlenecks:
- The number of files and the amount of data that needs to be read to understand the state of the repository
- The additional amount of data that is held in core when parsing the data structures.
- The amount of data that is read and written in order to archive the new state for a single file
- The amount of CPU time that is needed to parse the data structures and to compute checksums and the like
Even with many modifications in the project, more than 1000 changes to a single file are of low probability.
Systems with one central changeset file (BitKeeper and the future SCCS v6) need to be able to deal with more than 100000 deltas to that file if the related project is expected to last for more than 20 years.
Systems that record the project state in a tree of many files (git) need to be able to deal with reading more than 100000 files when managing an older project.
Peak memory consumption thus is just one of many factors, and which factors apply to a specific system depends on its internal concepts. Only a benchmark may be able to address all important factors, and such a benchmark may give different results on different use cases even if the size of the repository seems comparable. --Schily (talk) 13:25, 24 October 2011 (UTC)
P.S. As the current claim (CVS supports large projects) is unsourced, we should remove this claim until we find a useful metering method. --Schily (talk) 22:28, 24 October 2011 (UTC)
There are easy-to-find sources showing that CVS was used for a long time for the Linux kernel, GNOME and similar projects, which most people consider "large". There were other reasons involved when they switched to other systems like SVN or Git, such as network operation, distributed operation, project revision numbers instead of different revision numbers for each file, etc. -- (talk) 18:17, 25 October 2011 (UTC)

We must not do our own experiments. Wikipedia does not prove things. We cite reliable sources that do such things, and if there are none, we should be extra careful about making such claims. Also, we must avoid getting involved. Schily, for example, considers himself the current author/maintainer of SCCS (well, the UNIXes ship a different SCCS, but don't seem to bother continuing development). As such, he loves bashing RCS. But this does not belong on Wikipedia. It is a lot easier to produce a flawed benchmark than a fair one. As he himself indicated, there are tons of factors to play around with, and every single one can bias the benchmark towards one system or another. For example, accessing the single HEAD version is favorable to RCS (and thus, CVS). Once you add branches, these apparently become much more expensive.

See Talk:List of revision control software#RCS performance for an attempt by Schily to promote his system over RCS with a flawed benchmark. In this case, his benchmark was biased in at least two ways: first of all, it was an append-only benchmark, and SCCS scales linearly with history size (which is linear to file size when you append only). Secondly, he used a commit/update ratio of 1:1, which is not realistic for large software projects. Not sure though, what kind of bias this introduces. As far as I gather from what was written there, SCCS checkout should be O(history-size), while RCS is O(HEAD-file-size) for HEAD, and usually head-file-size << history-size.

Anyway, I don't claim that SCCS is slower than RCS (or when). I just insist that we must not do own benchmarks but only cite reliable sources (and no, Schily, you are not a reliable source.) Either there is an independent (or at least peer-reviewed as in a scientific journal) evaluation of these systems, or there is not. -- (talk) 20:31, 24 October 2011 (UTC)

It is obvious that you do not understand the background and this is why you better should stay quiet. We don't need your bashing. --Schily (talk) 22:22, 24 October 2011 (UTC)
Typical: Instead of talking about the issues raised, you start insulting. -- (talk) 18:17, 25 October 2011 (UTC)
This is typical for you: you start an insulting campaign and complain after you have been caught.... Again: if you don't have arguments, better stay quiet instead of discrediting yourself with personal attacks. We need open-minded people - not personal attacks. --Schily (talk) 20:37, 25 October 2011 (UTC)

Accurev sources[edit]

Most of those links are merely advertisements, not WP:RS TEDickey (talk) 14:25, 14 January 2012 (UTC)

Support Large Repositories[edit]

The "supports large repositories" section is somewhat misleading. SVN, for example, supports large repositories *IF* it is a large repository of *small* files, but if you fill it full of >1GB source files it will eat itself the same afternoon. Perforce, on the other hand, is quite happy with large files. Large files aren't the same thing as a large repository.

I'd really like to know which source control systems will handle large files. Many people would like to use source control for binary files, and it would be helpful if that information were out there somewhere. For me personally, for example, I know Perforce does it well but it's too expensive. It would be nice if there were a less expensive or open-source alternative that was (truly) viable, but I'm having a lot of trouble finding that information! — Preceding unsigned comment added by (talk) 02:47, 28 February 2012 (UTC)

I agree that it is useful to know which VCS's support large files. As you say, a large repository can have many small files, or few large files, or both. If a VCS cannot deal with each of those situations, then it does not support large repositories in general. It is unfortunately not easy to define what constitutes a large repository, and what the requirements are to support it. Therefore, it is not possible to give a simple yes or no answer for each VCS. But if you can provide a reference which shows that SVN does not play well with large files, then we can add an appropriate notice to the article. --Drizzd (talk) 13:54, 8 September 2012 (UTC)

An issue that could be discussed is the impact that large files appearing and later disappearing in a version history can have on checkouts (e.g. Git clone has this issue; SVN checkout doesn't). alex (talk) 09:19, 11 November 2013 (UTC)

Market share[edit]

I found some numbers from 2009Q3 and added them, but I get the impression that git market share is now considerably higher. Here is a survey of Eclipse users, who are mainly Java programmers. Any more general recent numbers would be interesting if anyone can find some. -- Beland (talk) 19:39, 3 April 2013 (UTC)

I agree that the market share numbers are most certainly quite badly off. Actually, each one of these numbers should have a recent reference -- and keep one for all time into the far future. But who's maintaining them? No one? 2009 is a long time ago. Thus I'd actually propose to remove the market share column altogether, or relabel it "Estimated market share in the year YYYY" if it has to stay. Or maybe "estimated maximum market share in its lifetime so far" would be interesting. As it is now, it doesn't really have much informational value, IMHO. Neels (talk) 15:50, 13 May 2013 (UTC)
I removed the outdated market share information. Tanadeau (talk) 02:43, 6 September 2013 (UTC)

Rational Team Concert doesn't support HP-UX[edit]

Rational Team Concert never supported HP-UX.

Also, the supported-platforms column doesn't distinguish between server and client. In some cases there are even different clients that are supported on different platforms. For instance, RTC's command-line client is supported on more platforms than its Eclipse-based client. — Preceding unsigned comment added by (talk) 12:48, 8 June 2013 (UTC)

Development status[edit]

How can it be reasonably determined whether something is being actively developed? Some of these SCMs are marked as active, but haven't had a release or a website update in years (e.g. Monotone).

I think no changes (I mean nothing at all: no updates, commits, etc., not just no new releases) in the last year is safe enough to mark as inactive. In that vein, the page could use a cleanup. Attys (talk) 08:20, 11 April 2014 (UTC)

Removed "External links" section[edit]

The article's "External links" section had been tagged for cleanup since February 2010. Half of the links tagged were dead links and the other half were outdated or written by authors of VCS systems and could be taken as promotional; I removed the entire section.

The non-defunct links were for the most part written in the 2008-2010 range. They may still be current enough to justify inclusion in an EL section, but a quick read told me that they were pretty outdated and should probably be replaced with writing that reflects the state of the field now.

Those links are:

--rahaeli (talk) 14:09, 18 May 2014 (UTC)

Symbolic links[edit]

Referring to TFS and Git: the 'features' table in the article says they support symbolic links; however, I haven't found any reasonable way of adding a directory symbolic link (created using mklink /d). Yes, I know you can convert it to some special file with special attributes, but that defies the whole idea of using symbolic links. So either the table is wrong, or I am missing something. — Preceding unsigned comment added by Yossiz74 (talkcontribs) 06:40, 14 July 2014 (UTC)

Missing entries in various tables[edit]

The article has 8 tables and all of these tables have a different number of entry lines for a different set of programs.

If we want to make the comparison more useful, all tables should list all programs. Schily (talk) 14:52, 1 August 2014 (UTC)

SCCSv5 (sic)[edit]

Schily's edit asserts that "The same history file format is still used in SCCSv5" and that this manpage contains the supporting information. Schily's close associate backs up the edit. However, the source does not mention "SCCSv5", nor does it refer to a specific version of SCCS. Rather, it gives an informal description of the format referred to as "Release 4.0 of SCCS/PWB". Whether "4.0" refers to "SCCS" or "PWB" or the combination is not stated. However, Schily has interpreted it as the first of the three alternatives, and likely intends to use it for self-promotion. (Barring copies of this page, "SCCSv5" appears only in documentation written by him - making it hard to find third-party sources.) TEDickey (talk) 23:10, 14 August 2014 (UTC)

References to SCCS 5.0 are easy to find. SUN used the version 5.0 for SCCS back in 2006. If you have a look at the corresponding man pages and compare them with the SCCS 4.0 man page linked in the article, you can see that the history file format did not change. I agree with you that the citation could be improved but the claim itself is correct. --FUZxxl (talk) 12:11, 15 August 2014 (UTC)
@User:FUZxxl Thank you for doing this research. In general, I have seen many cases where User:Tedickey made claims that could have been avoided if he had done some research beforehand. It would be nice, and he would be more efficient for WP, if he wrote only about things that are cleanly researched. Schily (talk) 13:10, 15 August 2014 (UTC)
The "SCCSv5" which you used here does not appear in either source. If you want to disagree with my comment, you ought to provide a factual reference, rather than continuing to make derogatory remarks. The manpage tarball does not use "5.0". Along those lines, do you have a reliable source giving a release date for "5.0"? TEDickey (talk) 19:28, 15 August 2014 (UTC)
Again, please look at the sources I provided before dismissing them as invalid. If you listed the contents of the tarball I linked to, you'd clearly see that all the source files for SCCS in it are in a directory named /usr/src/sccs_src/src/sccs5.0. I'm sorry if this wasn't obvious enough; I will try harder next time. The tarball itself contains SCCS identifiers, with the most recent date being 06/12/12, which appears to be the release date of this version. It might be possible that an earlier version of SCCS was already branded as 5.0, but I haven't found such a version. The manpages I linked to are the first published manpages that contain all documentation for SCCS; it seems like SUN forgot to ship SCCS documentation before. From the close temporal proximity one can conclude that these manpages belong to the SCCS version I linked, and indeed the documentation matches. Please tell me what extra evidence you need; I'd be glad to help. --FUZxxl (talk) 21:17, 15 August 2014 (UTC)
Certainly: you should start by reading my comments rather than addressing small parts. The thread is "SCCSv5", not "sccs5.0" (which is only inferentially related). As noted, the "SCCSv5" is Schily's term, not (without a WP:RS) Sun's. Regarding "5.0": there is no release date, there is no explicit statement that this is a new major release. There are no release notes which state this, either in those sources or (more appropriate) in a newsletter. The copyright dates are several years before the file-modification dates, leaving doubt as to what changes were made for "Solaris 11" (the title in the sources, perhaps otherwise unchanged from 5.9). Perhaps someone who's knowledgeable about the history of the code could point to a usable source - Wikipedia isn't the place to publish editor's guesswork. TEDickey (talk) 22:24, 15 August 2014 (UTC)
You're starting to nit-pick here. I clearly showed man pages for an SCCS version 5.0 that demonstrate that the file format has not been changed (i.e. only extended in a backwards-compatible way). The original source by User:Schily shows a man page for the file format of SCCS version 4. It's not my job to nit-pick about naming the version SCCSv5 or SCCS 5.0; these two naming conventions are often used synonymously in software development. Please provide a source demonstrating that the claims you reverted in the original article were in fact factually incorrect, but please stop wasting other people's time with nit-picking. If you have a tangible source that claims otherwise, or if you can prove that the sources User:Schily and I provide are incorrect, please come forward and explain why. So far you have only been able to complain about unimportant details. Does it matter if the tarball I linked to might not be the first tarball of a version 5.0? No, because if the file format is still the same as in version 4, it surely was also this way in a hypothetical earlier version 5. In the unlikely case that an incompatible format existed in between, there would surely be some sort of reference for that, but no such reference can be found. Also, people forget to update copyright notices. SCCS is an old program with multiple copyright notices tacked on top. It is entirely plausible that the developers didn't update them. Again, all of this doesn't matter, as the source tree clearly shows a version 5.0 to exist. The existence of this version cannot be denied; it's right in front of you, even without an explicit release announcement. Please also explain how the lack of an explicit release announcement relates to your claim that User:Schily's claims are invalid. I provided documentation of the file formats for both releases, which proves both that a version 5.0 exists and that the history format of this version 5.0 is compatible with (i.e. extended in backwards-compatible ways from) that of 4.0; reading it shows that the claims indeed hold. --FUZxxl (talk) 22:55, 15 August 2014 (UTC)
You're not making any valid points (since you insist on arguing). The "5" in "sccs5.0" doesn't have a date, isn't demonstrated to be relevant to the "4" in the PWB manpage. Equally likely, "5" relates to SystemV, and that "2002" vs "2007" could be 1987. All of your comments are targeted towards promoting Schily's edits about a program which you have no firsthand knowledge of during the period of interest. You have in fact provided no "extensive documentation", but rather an extended, repetitive comment. Schily's "sccs6" (however one chooses to spell it) is in a sense much like someone coming along to write another chapter to a well-known author's work, and in doing so, making connections which may/may not have existed before the latest chapter. TEDickey (talk) 08:01, 16 August 2014 (UTC)
Why are you starting to talk about Jörg Schilling's SCCS fork? The reference we are talking about is not about Schily's SCCS fork which calls itself "SCCSv6". All references both the article and I provide are from well-known "official" projects. None of the references points in any way to Jörg Schilling's SCCS. Please don't try to construct an artificial connection where none exists. It is not required that I possess any "firsthand knowledge during the period of interest"; using such knowledge without backing documentation (documentation that somebody without "firsthand knowledge" could find, too) would be WP:OR.
Why do you give other people a hard time and argue about the invisibility of things that are clearly in sight? The product announcement linked as a source clearly states that a version 4.0 of SCCS exists. PWB is the Programmer's Workbench, a bundle of software; SCCS/PWB is the version of SCCS bundled with the Programmer's Workbench. More evidence that this is the intended meaning can be found further below, where the announcement talks about "the SCCS Release 3 commands". That this 4 relates to anything but the SCCS version is implausible. Please also explain to me how the 5.0 in sccs5.0 refers to System V, which is never written as 5.0. The 5.0 is a version number, and that the sccs sources are in a folder named sccs5.0 is about as clear as it can get. Please, again, explain to me how my references are invalid. If you are unable to do so, I'm going to remove the {{discuss}} tags and perhaps add the tarballs and manpages I showed you to that reference. --FUZxxl (talk) 10:28, 16 August 2014 (UTC)
Just a note: there is no SCCSv6 yet; however, SCCS version 5.03 introduced the first features from SCCSv6. SCCS will be published under the name v6 once minimal completeness for the extensions for project support and for distributed features is achieved. Schily (talk) 11:03, 16 August 2014 (UTC)
If Schily had a WP:RS, he would not rely upon meat puppets to provide disingenuous arguments. TEDickey (talk) 09:24, 17 August 2014 (UTC)
So you're out of arguments and have to rely on wild accusations and ad-hominem attacks instead? Please read WP:NPA and WP:DR, especially the parts about no name calling. --FUZxxl (talk) 11:06, 17 August 2014 (UTC)

What about the version control software built into operating systems?[edit]

I would be interested to see how the software already listed in this article compares to the version control software built into operating systems. As an example of the latter, I am thinking of "Versions", the version control system that has been built into OS X since the release of Mac OS X Lion (there is a brief technical overview of "Versions" in John Siracusa's review of Mac OS X Lion in Ars Technica). Would it be appropriate to list "Versions" on this page? Biogeographist (talk) 15:19, 20 May 2016 (UTC)

According to WP:NOTDIR and WP:NOTSOAPBOX, every entry on this page needs to have a Wikipedia article. If it has a Wikipedia article, then yes, adding it is appropriate.
Best regards,
Codename Lisa (talk) 15:34, 20 May 2016 (UTC)
Thanks for the response, Codename Lisa: I agree with the policy that every entry on this page needs to have a Wikipedia article. But I will point out that, as it stands, not every entry on this page has a Wikipedia article; there are some red links. Also, I discovered that "Versions" in OS X is not appropriate for this page for another reason: it is considered (at least on Wikipedia) to be a versioning file system, which is usually more transparent to the user than the version control software listed on this page. Biogeographist (talk) 16:03, 20 May 2016 (UTC)