Talk:Journaling file system
Pros and Cons
The benefit of a journaling file system has been well made, but there is no section which specifically compares benefits with limitations. I don't want to attempt this myself on a subject I came to the page to find out about. (^_-)
Spelling And Words
Properly this is a "Journalized" file system according to the dictionary (See OED entry for Journal and for Journalize). Journal(l)ing is a Linuxism and incorrect English 22.214.171.124 (talk) 23:17, 30 April 2008 (UTC)
GoBack
No. It may use Journaling to accomplish what it does, but it's not a filesystem. It works on Windows so NTFS is the filesystem that it uses.
--08:00, 13 November 2005 (UTC)
Why can't legacy P1 PCs use ext3?
Could someone explain why my P1 266 MHz can't use ext3? I can install ext2 without errors, but ext3 always halts the Linux install process. I expect it is a BIOS issue. I recently tried to install a 250 GB disk to expand my file system and was unable to partition anything over 8 GB. I know that some file systems will not work above 8 GB because of BIOS limitations. Any configuration clues would certainly help.
--1:45 pm 25 February 2006
Physical vs logical journalling
There's an interesting comment on this issue by an ext3 developer, pointing out that logical journalling assumes disk block writes are atomic (they either happen or don't), whereas at least PC hardware is not so nice. <http://zork.net/~nick/mail/why-reiserfs-is-teh-sukc>. —The preceding unsigned comment was added by 126.96.36.199 (talk • contribs) 20:38, May 24, 2006 (UTC)
I'd be interested to see examples of Physical and Logical journalling. I know quite a lot of filesystems journal metadata, but FreeBSD's gjournal is the only example of full data journalling I'm aware of. —Preceding unsigned comment added by 188.8.131.52 (talk) 01:03, 26 August 2010 (UTC)
Why vs. What
Could someone please elucidate on the "Why" of journaled filesystems, in addition to the what and how?
Can anyone cite a paper or two on journaling?
Since the most commonly known use of journaling is in HFS+ file systems, it might be appropriate to cite the following document for this article.
http://docs.info.apple.com/article.html?artnum=107249 --Zerocool3001 20:31, 26 July 2007 (UTC)
Moving, not solving problems?
- Citation: A journaled file system maintains a journal of the changes it intends to make, ahead of time. After a crash, recovery simply involves replaying changes from this journal until the file system is consistent again.
- What I currently see is that all problems of inconsistency are moved to the journal. What makes sure that the journal itself is not written to disk in an inconsistent manner? That is, when the system crashes (power loss etc.) while the journal is being written, isn't it left in an inconsistent state, giving rise to corrupt file systems the next time the journal is used to update the FS? --Abdull (talk) 09:54, 8 August 2008 (UTC)
If a journal is found to be inconsistent, it will just be ignored and nothing will be changed in the filesystem itself. I.e. the filesystem stays consistent. —Preceding unsigned comment added by 184.108.40.206 (talk) 20:19, 13 September 2008 (UTC)
- Is it correct to say that operations that require more than a single write to the disk are converted into one write to the journal? So either the journal was written successfully or not, and therefore any operations that were written to the journal, but were half-completed on the disk, can be completed and then removed from the journal? — Preceding unsigned comment added by 220.127.116.11 (talk) 19:47, 21 June 2011 (UTC)
- No, because logging a large change in the journal may actually require multiple writes to multiple disk blocks. The journal can still give an atomicity guarantee, though, because it appends checksums to each and every change that it logs. When remounting after a crash, it just skips over changes with invalid checksums.—18.104.22.168 (talk) 05:48, 24 June 2011 (UTC)
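The checksum mechanism described above can be sketched in a few lines. This is a toy illustration, not any real filesystem's on-disk format: the entry layout (length prefix, payload, SHA-256 digest) and the policy of stopping at the first invalid entry are assumptions for the sketch, though stopping at the first bad entry is one common replay policy, since everything after a torn write is suspect.

```python
import hashlib
import struct

# Hypothetical journal entry layout (illustrative only):
# [4-byte big-endian length][payload][32-byte SHA-256 of payload]

def append_entry(journal: bytearray, payload: bytes) -> None:
    """Log a change together with its checksum, so replay can
    detect entries torn by a crash mid-write."""
    digest = hashlib.sha256(payload).digest()
    journal += struct.pack(">I", len(payload)) + payload + digest

def replay(journal: bytes):
    """Yield entries whose checksums verify; stop at the first
    truncated or corrupt entry."""
    offset = 0
    while offset + 4 <= len(journal):
        (length,) = struct.unpack_from(">I", journal, offset)
        start = offset + 4
        end = start + length
        if end + 32 > len(journal):
            break  # truncated entry: the journal was cut off mid-write
        payload = journal[start:end]
        digest = journal[end:end + 32]
        if hashlib.sha256(payload).digest() != digest:
            break  # corrupt entry: ignore it and everything after
        yield payload
        offset = end + 32

# Simulate a crash that corrupts the second entry's checksum:
log = bytearray()
append_entry(log, b"set inode 7 size=4096")
append_entry(log, b"link dirent 'a' -> inode 7")
log[-5] ^= 0xFF  # flip one byte, as a torn write might
print([p.decode() for p in replay(log)])  # only the first entry survives
```

Replay never applies the half-written tail, so the filesystem proper is only ever updated from fully validated entries, which is the atomicity guarantee the comment above describes.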
Technical detail needs some clarifying
"recovery simply involves reading the journal from the file system and replaying changes from this journal until the file system is consistent"
Where do you start reading the journal, that is, at what point? How is this determined? Is it possible to backtrack from the end to find the last change that had succeeded? Also, the fs is consistent by definition at all times. It's a question of performing the changes that weren't successful before the crash. --22.214.171.124 (talk) 01:21, 2 May 2010 (UTC)
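On the question of where replay starts: one common design (ext3/ext4's JBD2 works roughly this way, though the details below are simplified and the field names are invented for the sketch) is a circular journal whose superblock records the sequence number and block of the oldest transaction to replay; recovery walks forward while the sequence numbers increase consecutively and stops at the first stale entry left over from an earlier wrap-around.

```python
# Hypothetical journal superblock: where replay begins.
journal_superblock = {"first_sequence": 7, "start_block": 3}

# Hypothetical journal contents: block -> (sequence number, logged change).
transactions = {
    3: (7, "update inode table block 12"),
    4: (8, "update bitmap block 2"),
    5: (3, "stale entry from an earlier pass"),  # old wrapped-around data
}

def replay_order(sb, txns):
    """Replay from the superblock's start point while sequence
    numbers match consecutively; stop at the first stale entry."""
    seq, block = sb["first_sequence"], sb["start_block"]
    replayed = []
    while block in txns and txns[block][0] == seq:
        replayed.append(txns[block][1])
        seq += 1
        block += 1
    return replayed

print(replay_order(journal_superblock, transactions))
# Replays blocks 3 and 4; block 5 carries a stale sequence number.
```

So there is no need to backtrack from the end: the superblock pins the start, and the sequence numbers delimit the valid tail.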
Copy on Write filesystems not possible until BTRFS paper?
"... Such file systems, however, were not feasible until the recent discovery of the necessary copy-on-write-friendly data structures" and the reference to a 2009 paper on BTRFS is not accurate. ZFS was made available in November of 2005 and is a copy-on-write filesystem. Tpenta (talk) 06:58, 30 December 2010 (UTC)
Physical vs. logical
The current comparison of logical vs. physical logs is inaccurate. It seems to imply that physical logs always log all data (as opposed to only meta-data). This is not true, e.g. ext3 uses a physical log, but is generally run in a meta-data only journalling mode (data=writeback). The difference with file systems using a logical journal is that ext3 always logs the full meta-data block, while file systems with a logical journal will only log a record describing which fields in the meta-data block have to be updated. Logical journals would in theory require less I/O bandwidth but more CPU time.
The confusion may have arisen because physical journalling seems to be a prerequisite for full data journalling (data=journal). —Ruud 19:00, 8 December 2011 (UTC)
The confusion has arisen because "physical vs. logical" relates to what is being journaled, while "block vs. record" relates to how much is being logged, and not all combinations are optimal; both choices are also independent of whether data or metadata (which is really just another form of data) is being journaled. To explain:
- The file system can log either the operation to perform (for example: delete a directory entry) or the intended content of the storage block when performing that operation (the relevant bytes of the directory without the directory entry to be deleted). This is the "logical vs. physical" choice. Put another way, physical journaling records storage-level changes (modify bytes X to Y with Z), while logical journaling records filesystem-level changes (perform operation O on entity E).
- The file system can log either a fixed-size full storage block per journal entry or a smaller, variable-length entry containing the minimum necessary to record the intended change. This can be done with either physical or logical journaling, but logical journaling with a full block per entry is usually not an optimal combination, because filesystem operations can usually be described in a few bytes.
- Quite independently the file system might be journaling metadata or data changes, or both. In theory it could use different journaling choices for metadata or data, but that so far has not happened.
Usually logical metadata journaling is associated with variable-length entries, as logical operations as a rule require a small amount of data to record, while physical or data journaling is associated with fixed-size, full-block entries, as it is easy to record the whole block as modified by the operation.
For example, the IBM/Linux JFS2 journal is logical, variable-length, and metadata only, but ext3 uses physical, fixed-length, metadata or metadata+data journaling. The main reason is that JFS2 was designed from the ground up around journaling, while journaling was retrofitted into ext3, and for the latter it was easier to just log intended physical changes to whole blocks; data journaling then sort of came for free from that.
Note that a journaling system that would only log a record describing which fields in the meta-data block have to be updated would still be a physical journaling system, but with variable length entries.
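The two-axis distinction above can be made concrete with a pair of record shapes. These are purely illustrative types with invented field names, not the layout of JFS2, ext3, or any real journal: the point is only that a physical record names a block and bytes, while a logical record names an operation and a target.

```python
from dataclasses import dataclass

@dataclass
class PhysicalRecord:
    """Storage-level change: 'set bytes at this offset of this
    block to these contents'. With fixed-size entries, new_bytes
    would be a whole block; variable-length entries log a slice."""
    block_no: int
    offset: int
    new_bytes: bytes

@dataclass
class LogicalRecord:
    """Filesystem-level change: 'perform operation O on entity E'.
    Naturally small, hence usually variable-length."""
    operation: str   # e.g. "delete_dirent"
    target: str      # e.g. which directory, which entry

# Deleting directory entry "foo" from block 42 might be journaled as
# either of these (contents invented for illustration):
physical = PhysicalRecord(block_no=42, offset=128,
                          new_bytes=b"\x00" * 64)  # entry zeroed in place
logical = LogicalRecord(operation="delete_dirent",
                        target="inode 9, entry 'foo'")
```

Note how the choice of entry size is orthogonal: a `PhysicalRecord` whose `new_bytes` covers only the changed fields is exactly the variable-length physical journaling described in the paragraph above.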
It sounds like the descriptions of copy-on-write and "soft update" are pretty similar. Is that accurate? Should the two be merged (for example, CoW merged into soft update as a second example to UFS)? — Preceding unsigned comment added by 126.96.36.199 (talk) 14:49, 13 May 2013 (UTC)